mclust/0000755000176200001440000000000013205247703011564 5ustar liggesusersmclust/inst/0000755000176200001440000000000013205037573012543 5ustar liggesusersmclust/inst/CITATION0000644000176200001440000000152713175042631013702 0ustar liggesuserscitHeader("To cite the 'mclust' R package in publications, please use:") citEntry(entry = "Article", title = "{mclust} 5: clustering, classification and density estimation using {G}aussian finite mixture models", author = personList(person(given="Luca", family="Scrucca"), person(given="Michael", family="Fop"), person(given=c("Thomas", "Brendan"), family="Murphy"), person(given=c("Adrian", "E."), family="Raftery")), journal = "The {R} Journal", year = "2016", volume = "8", number = "1", pages = "205--233", textVersion = paste("Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016)", "mclust 5: clustering, classification and density estimation using Gaussian finite mixture models", "The R Journal", "8/1, pp. 205-233")) mclust/inst/NEWS0000644000176200001440000003072013205020617013233 0ustar liggesusersVersion 5.4 o Model-based hierarchical clustering used to start the EM algorithm is now based on the scaled SVD transformation proposed by Scrucca and Raftery (2016). This change is not backward compatible. However, previous results can easily be obtained by issuing the command: mclust.options(hcUse = "VARS") For more details see help("mclust.options"). o Added 'subset' parameter in mclust.options() to control the maximal sample size to be used in the initial model-based hierarchical phase. o predict.densityMclust() can optionally return the density on a logarithmic scale. o removed normalization of mixing proportions for new models in single mstep. o Internal rewrite of code used by packageStartupMessage(). o Fix a small bug in MclustBootstrap() in the univariate data case. o Fix bugs when both the noise and subset are provided for initialization. o Vignette updated to include references, startup message, css style, etc. 
o Various bug fixes in plotting methods when noise is present. o Update references in citation() and man pages. Version 5.3 (2017-05) o added gmmhd() function and related methods. o added MclustDRsubsel() function and related methods. o added option to use subset in the hierarchical initialization step when a noise component is present. o plot.clustCombi() presents a menu in interactive sessions; data is no longer needed for classification plots, as it is extracted from the 'clustCombi' object. o added combiTree() plot for 'clustCombi' objects. o clPairs() now produces a single scatterplot in the bivariate case. o fix a bug in imputeData() when a seed is provided. Now if a seed is provided the data matrix is reproducible. o in imputeData() and imputePairs() some argument names have been modified for consistency with the rest of the package. o added functions matchCluster() and majorityVote(). o rewrote print and summary methods for 'clustCombi' class objects. o added clustCombiOptim(). o fix a bug in randomPairs() when nrow of input data is odd. o fix a bug in plotDensityMclust2(), plotDensityMclustd() and surfacePlot() when a noise component is present. Version 5.2.3 (2017-03) o added native routine registration for Fortran code. o fix lowercase argument PACKAGE in .Fortran() calls. Version 5.2.2 (2017-01) o fix a bug in a rare case when performing an extra M-step at the end of the EM algorithm. Version 5.2.1 (2017-01) o replaced "structure(NULL, *)" with "structure(list(), *)" Version 5.2 (2016-03) o added argument 'x' to Mclust() to use BIC values from previous computations to avoid recomputing for the same models. The same argument and functionality were already available in mclustBIC(). o added argument 'x' to mclustICL() to use ICL values from previous computations to avoid recomputing for the same models. o corrected a bug in plot.MclustBootstrap for the "mean" and "var" in the univariate case. o modified uncertainty plots. 
o introduction of as.Mclust and as.densityMclust to convert objects to specific mclust classes. o solved a numerical accuracy problem in qclass when the scale of x is (very) large by making the tolerance eps scale dependent. o use transpose subroutine instead of the non-Fortran 77 TRANSPOSE function in mclustaddson.f. o predict.Mclust and predict.MclustDR implement a more efficient and accurate algorithm for computing the densities. Version 5.1 (2015-10) o fix slow convergence for VVE and EVE models. o fix a bug in orientation for model VEE. o add an extra M-step and parameter update in Mclust call via summaryMclustBIC. Version 5.0.2 (2015-07) o add option to MclustBootstrap for using the weighted likelihood bootstrap. o add a plot method to MclustBootstrap. o add errorBars function. o add clPairsLegend function. o add covw function. o fix rescaling of mixing probabilities in new models. o bug fixes. Version 5.0.1 (2015-04) o bug fixes. o add print method to hc. Version 5.0.0 (2015-03) o added the four missing models (EVV, VEE, EVE, VVE) to the mclust family. A noise component is allowed, but no prior is available. o added mclustBootstrapLRT function (and print and plot methods) for selecting the number of mixture components based on the bootstrap sequential likelihood ratio test. o added MclustBootstrap function (and print and summary methods) for performing bootstrap inference. This provides standard errors and confidence intervals for parameters. o the "A quick tour of mclust" vignette is included as HTML generated using rmarkdown and knitr. Older vignettes are included as other documentation for the package. o modified arguments to mvn2plot to control colour, lty, lwd, and pch of ellipses and the mean point. o added functions emX, emXII, emXXI, emXXX, cdensX, cdensXII, cdensXXI, and cdensXXX, to deal with single-component cases, so calling the em function works even if G = 1. 
o small changes to icl.R: icl is now a generic method, with specialized methods for 'Mclust' and 'MclustDA' objects. o bug fixes for transformations in the initialization step when some variables are constant (i.e. the variance is zero) or one-dimensional data is provided. o change the order of arguments in hc (and all the functions calling it). o small modification to CITATION file upon request of CRAN maintainers. o small bug fixes. Version 4.4 (2014-09) o add option for using transformation of variables in the hierarchical initialization step. o add quantileMclust for computing the quantiles of a univariate Gaussian mixture distribution. o bug fixes in summaryMclustBIC, summaryMclustBICn, and Mclust to return z as a single-column matrix of 1s even in the case of G = 1. This avoids errors in some plots. o pdf files (previously included as vignettes) moved to inst/doc with a corresponding index.html. Version 4.3 (2014-03) o bug fix for logLik.MclustDA() in the univariate case. o add argument "what" to the predict.densityMclust() function for choosing what to retrieve, the mixture density or the component densities. o hc function has an additional parameter to control whether the original variables or a transformation of them should be used for hierarchical clustering. o included "hcUse" in mclust.options to be passed as default to hc(). o original data (and class for classification models) are stored in the object returned by the main functions. o add component "hypvol" to the Mclust object, which provides the hypervolume of the noise component when required; otherwise it is set to NA. o add a warning when a prior is used and BIC returns NAs. o bug fixes for summary.Mclust(), print.summary.Mclust(), plot.Mclust() and icl() when a noise component is present. o some plots in plot.MclustDR() require plot.new() before calling plot.window(). o bug fixes for MclustDR() when p=1. o correction to Mclust man page. o bug fixes. 
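A hedged sketch of the quantileMclust() function added in Version 4.4 above (not part of the original NEWS; assumes the mclust package is attached and uses R's built-in faithful data):

```r
library(mclust)
# Fit a univariate Gaussian mixture density, then ask for its quantiles.
mod <- densityMclust(faithful$waiting)
# Quartiles of the fitted mixture distribution
quantileMclust(mod, p = c(0.25, 0.50, 0.75))
```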
Version 4.2 (2013-07) o fix bug in sim* functions when no obs are assigned to a component. o MclustDA allows fitting a single-class model. o fix bug in summary.Mclust when a subset is used for initialization. o fix a bug in the function qclass when ties are present in quantiles, so it always returns the required number of classes. o various small bug fixes. Version 4.1 (2013-04) o new icl function for computing the integrated complete-data likelihood o new mclustICL function with associated print and plot methods o print.mclustBIC also shows the top models based on BIC o modified summary.Mclust to also return the ICL o rewrite of the adjustedRandIndex function. This version is more efficient for large vectors o updated help for adjustedRandIndex o modifications to MclustDR and its summary method o changed behavior of plot.MclustDR(..., what = "contour") o improved plot of uncertainty for plot.MclustDR(..., what = "boundaries") o corrected a bug for malformed GvHD data o corrected version of qclass for selecting initial values in the case of 1D data when successive quantiles coincide o corrected plotting of BIC values when only single-component (G = 1) models are fitted o various bug fixes Version 4.0 (2012-08) o new summary and print methods for Mclust. o new summary and print methods for densityMclust. o included MclustDA function and methods. o included MclustDR function and methods. o included me.weighted function. o restored hierarchical clustering capability for the EEE model (hcEEE). o included vignettes for mclust version 4 from Technical Report No. 597 and for using weights in mclust. o adoption of GPL (>= 2) license. 
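A hedged sketch of the icl() function introduced in Version 4.1 above (illustrative only, not part of the original NEWS; assumes mclust is attached):

```r
library(mclust)
# Fit a mixture model and compare the BIC used for model selection
# with the integrated complete-data likelihood criterion.
mod <- Mclust(faithful)
mod$bic     # BIC of the selected model
icl(mod)    # ICL additionally penalizes uncertain assignments
```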
Version 3.5 (2012-07) o added summary.Mclust o new functions for plotting and summarizing density estimation o various bug fixes o clustCombi (code and doc provided by Jean-Patrick Baudry) o bug fix: variable names lost when G = 1 Version 3.4.11 (2012-01) o added NAMESPACE Version 3.4.10 (2011-05) o removed intrinsic gamma Version 3.4.9 (2011-05) o fixed hypvol function to avoid overflow o fixed hypvol helpfile value description o removed unused variables and tabs from source code o switched to intrinsic gamma in source code o fixed default warning in estepVEV and mstepVEV Version 3.4.8 (2010-12) o fixed output when G = 1 (it had NA for the missing "z" component) Version 3.4.7 (2010-10) o removed hierarchical clustering capability for the EEE model (hcEEE) o the R 2.12.0 build failed due to a 32-bit Windows compiler error, forcing removal of the underlying Fortran code for hcEEE from the package; the code itself does not contain errors and compiles on other platforms. Version 3.4.6 (2010-08) o added description of the parameters output component to the Mclust and summary.mclustBIC help files Version 3.4.5 (2010-07) o added densityMclust function Version 3.4.4 (2010-04) o fixed bug in covariance matrix output for EEV and VEV models Version 3.4.3 (2010-02) o bug fixes Version 3.4.2 (2010-02) o moved CITATION to inst and used standard format o BibTeX entries are in inst/cite o fixed bug in handling missing classes in mclustBIC o clarified license wording Version 3.4.1 (2010-01) o corrected output description in mclustModel help file o updated mclust manual reference to show revision Version 3.4 (2009-12) o updated defaultPrior help file o added utility functions for imputing missing data with the mix package o changed default max # of mixture components in each class from 9 to 3 Version 3.3.2 (2009-10) o fixed problems with \cr in mclustOptions help file Version 3.3.1 (2009-06) o fixed plot.mclustBIC/plot.Mclust to handle modelNames o changed "orientation" for VEV, VVV models to be 
consistent with R eigen() and the literature o fixed some problems, including doc, for the noise option o updated the unmap function to optionally include missing groups Version 3.3 (2009-06) o fixed bug in the "errors" option for randProj o fixed boundary cases for the "noise" option Version 3.2 (2009-04) o added permission for CRAN distribution to LICENSE o fixed problems with help files found by the new parser o changed PKG_LIBS order in src/Makevars o fixed Mclust to handle sampling in the data expression in the call Version 3.1.10 (2008-11) o added EXPR = to all switch functions that didn't already have it Version 3.1.9 (2008-10) o added pro component to parameters in dens help file o fixed some problems with the noise option Version 3.1.1 (2007-03) o Default seed changed in sim functions. o Model name check added to various functions. o Otherwise backward compatible with version 3.0. Version 3.1 (2007-01) o Most plotting functions changed to use color. o Mclust/mclustBIC fixed to work with G=1. o Otherwise backward compatible with version 3.0. Version 3.0 (2006-10) o New functionality added, including conjugate priors for Bayesian regularization. o Backward compatibility is not guaranteed since the implementation of some functions has changed to make them easier to use or maintain. 
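The backward-compatibility note under Version 5.4 at the top of this NEWS file can be sketched as follows (hypothetical session, not part of the original NEWS; assumes mclust >= 5.4):

```r
library(mclust)
# Pre-5.4 behaviour: initialize hierarchical clustering on the raw
# variables instead of the scaled SVD transformation introduced in 5.4.
old <- mclust.options("hcUse")   # save the current setting
mclust.options(hcUse = "VARS")
mod <- Mclust(iris[, 1:4])       # fit reproduces pre-5.4 results
mclust.options(hcUse = old)      # restore the previous setting
```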
mclust/inst/doc/0000755000176200001440000000000013205037571013306 5ustar liggesusersmclust/inst/doc/mclust.R0000644000176200001440000001070113205037563014740 0ustar liggesusers## ----setup, include=FALSE------------------------------------------------ library(knitr) opts_chunk$set(fig.align = "center", out.width = "90%", fig.width = 6, fig.height = 5.5, dev.args=list(pointsize=10), par = TRUE, # needed for setting hook collapse = TRUE, # collapse input & output code in chunks warning = FALSE) knit_hooks$set(par = function(before, options, envir) { if(before && options$fig.show != "none") par(family = "sans", mar=c(4.1,4.1,1.1,1.1), mgp=c(3,1,0), tcl=-0.5) }) ## ---- message = FALSE, echo=-2------------------------------------------- library(mclust) cat(mclust:::mclustStartupMessage(), sep="") ## ---- par=TRUE----------------------------------------------------------- data(diabetes) class <- diabetes$class table(class) X <- diabetes[,-1] head(X) clPairs(X, class) BIC <- mclustBIC(X) plot(BIC) summary(BIC) mod1 <- Mclust(X, x = BIC) summary(mod1, parameters = TRUE) plot(mod1, what = "classification") table(class, mod1$classification) par(mfrow = c(2,2)) plot(mod1, what = "uncertainty", dimens = c(2,1), main = "") plot(mod1, what = "uncertainty", dimens = c(3,1), main = "") plot(mod1, what = "uncertainty", dimens = c(2,3), main = "") par(mfrow = c(1,1)) ICL = mclustICL(X) summary(ICL) plot(ICL) LRT = mclustBootstrapLRT(X, modelName = "VVV") LRT ## ------------------------------------------------------------------------ data(iris) class <- iris$Species table(class) X <- iris[,1:4] head(X) mod2 <- MclustDA(X, class, modelType = "EDDA") summary(mod2) plot(mod2, what = "scatterplot") plot(mod2, what = "classification") ## ------------------------------------------------------------------------ data(banknote) class <- banknote$Status table(class) X <- banknote[,-1] head(X) mod3 <- MclustDA(X, class) summary(mod3) plot(mod3, what = 
"classification") ## ------------------------------------------------------------------------ unlist(cvMclustDA(mod2, nfold = 10)[2:3]) unlist(cvMclustDA(mod3, nfold = 10)[2:3]) ## ------------------------------------------------------------------------ data(acidity) mod4 <- densityMclust(acidity) summary(mod4) plot(mod4, what = "BIC") plot(mod4, what = "density", data = acidity, breaks = 15) plot(mod4, what = "diagnostic", type = "cdf") plot(mod4, what = "diagnostic", type = "qq") ## ------------------------------------------------------------------------ data(faithful) mod5 <- densityMclust(faithful) summary(mod5) plot(mod5, what = "BIC") plot(mod5, what = "density") plot(mod5, what = "density", type = "image", col = "dodgerblue3", grid = 100) plot(mod5, what = "density", type = "persp") ## ------------------------------------------------------------------------ boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs") summary(boot1, what = "se") summary(boot1, what = "ci") par(mfrow=c(4,3)) plot(boot1, what = "pro") plot(boot1, what = "mean") ## ------------------------------------------------------------------------ boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs") summary(boot4, what = "se") summary(boot4, what = "ci") par(mfrow=c(2,2)) plot(boot4, what = "pro") plot(boot4, what = "mean") ## ------------------------------------------------------------------------ mod1dr <- MclustDR(mod1) summary(mod1dr) plot(mod1dr, what = "pairs") plot(mod1dr, what = "boundaries", ngrid = 200) mod1dr <- MclustDR(mod1, lambda = 1) summary(mod1dr) plot(mod1dr, what = "scatterplot") plot(mod1dr, what = "boundaries", ngrid = 200) ## ------------------------------------------------------------------------ mod2dr <- MclustDR(mod2) summary(mod2dr) plot(mod2dr, what = "scatterplot") plot(mod2dr, what = "boundaries", ngrid = 200) mod3dr <- MclustDR(mod3) summary(mod3dr) plot(mod3dr, what = "scatterplot") plot(mod3dr, what = "boundaries", ngrid = 200) ## 
------------------------------------------------------------------------ mclust.options("bicPlotColors") mclust.options("classPlotColors") ## ------------------------------------------------------------------------ cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") bicPlotColors <- mclust.options("bicPlotColors") bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6]) mclust.options("bicPlotColors" = bicPlotColors) mclust.options("classPlotColors" = cbPalette) clPairs(iris[,-5], iris$Species) mod <- Mclust(iris[,-5]) plot(mod, what = "BIC") plot(mod, what = "classification") mclust/inst/doc/mclust.Rmd0000644000176200001440000001471213201045253015256 0ustar liggesusers--- title: "A quick tour of **mclust**" author: "Luca Scrucca" date: "`r format(Sys.time(), '%d %b %Y')`" output: rmarkdown::html_vignette: toc: true number_sections: false css: "vignette.css" vignette: > %\VignetteIndexEntry{A quick tour of mclust} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} library(knitr) opts_chunk$set(fig.align = "center", out.width = "90%", fig.width = 6, fig.height = 5.5, dev.args=list(pointsize=10), par = TRUE, # needed for setting hook collapse = TRUE, # collapse input & output code in chunks warning = FALSE) knit_hooks$set(par = function(before, options, envir) { if(before && options$fig.show != "none") par(family = "sans", mar=c(4.1,4.1,1.1,1.1), mgp=c(3,1,0), tcl=-0.5) }) ``` # Introduction **mclust** is a contributed R package for model-based clustering, classification, and density estimation based on finite normal mixture modelling. It provides functions for parameter estimation via the EM algorithm for normal mixture models with a variety of covariance structures, and functions for simulation from these models. 
Also included are functions that combine model-based hierarchical clustering, EM for mixture estimation and the Bayesian Information Criterion (BIC) in comprehensive strategies for clustering, density estimation and discriminant analysis. Additional functionalities are available for displaying and visualizing fitted models along with clustering, classification, and density estimation results. This document gives a quick tour of **mclust** (version `r packageVersion("mclust")`) functionalities. It was written in R Markdown, using the [knitr](https://cran.r-project.org/package=knitr) package for production. See `help(package="mclust")` for further details and references provided by `citation("mclust")`. ```{r, message = FALSE, echo=-2} library(mclust) cat(mclust:::mclustStartupMessage(), sep="") ``` # Clustering ```{r, par=TRUE} data(diabetes) class <- diabetes$class table(class) X <- diabetes[,-1] head(X) clPairs(X, class) BIC <- mclustBIC(X) plot(BIC) summary(BIC) mod1 <- Mclust(X, x = BIC) summary(mod1, parameters = TRUE) plot(mod1, what = "classification") table(class, mod1$classification) par(mfrow = c(2,2)) plot(mod1, what = "uncertainty", dimens = c(2,1), main = "") plot(mod1, what = "uncertainty", dimens = c(3,1), main = "") plot(mod1, what = "uncertainty", dimens = c(2,3), main = "") par(mfrow = c(1,1)) ICL = mclustICL(X) summary(ICL) plot(ICL) LRT = mclustBootstrapLRT(X, modelName = "VVV") LRT ``` # Classification ## EDDA ```{r} data(iris) class <- iris$Species table(class) X <- iris[,1:4] head(X) mod2 <- MclustDA(X, class, modelType = "EDDA") summary(mod2) plot(mod2, what = "scatterplot") plot(mod2, what = "classification") ``` ## MclustDA ```{r} data(banknote) class <- banknote$Status table(class) X <- banknote[,-1] head(X) mod3 <- MclustDA(X, class) summary(mod3) plot(mod3, what = "scatterplot") plot(mod3, what = "classification") ``` ## Cross-validation error ```{r} unlist(cvMclustDA(mod2, nfold = 10)[2:3]) unlist(cvMclustDA(mod3, nfold = 10)[2:3]) ``` 
# Density estimation ## Univariate ```{r} data(acidity) mod4 <- densityMclust(acidity) summary(mod4) plot(mod4, what = "BIC") plot(mod4, what = "density", data = acidity, breaks = 15) plot(mod4, what = "diagnostic", type = "cdf") plot(mod4, what = "diagnostic", type = "qq") ``` ## Multivariate ```{r} data(faithful) mod5 <- densityMclust(faithful) summary(mod5) plot(mod5, what = "BIC") plot(mod5, what = "density") plot(mod5, what = "density", type = "image", col = "dodgerblue3", grid = 100) plot(mod5, what = "density", type = "persp") ``` # Bootstrap inference ```{r} boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs") summary(boot1, what = "se") summary(boot1, what = "ci") par(mfrow=c(4,3)) plot(boot1, what = "pro") plot(boot1, what = "mean") ``` ```{r} boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs") summary(boot4, what = "se") summary(boot4, what = "ci") par(mfrow=c(2,2)) plot(boot4, what = "pro") plot(boot4, what = "mean") ``` # Dimension reduction ## Clustering ```{r} mod1dr <- MclustDR(mod1) summary(mod1dr) plot(mod1dr, what = "pairs") plot(mod1dr, what = "boundaries", ngrid = 200) mod1dr <- MclustDR(mod1, lambda = 1) summary(mod1dr) plot(mod1dr, what = "scatterplot") plot(mod1dr, what = "boundaries", ngrid = 200) ``` ## Classification ```{r} mod2dr <- MclustDR(mod2) summary(mod2dr) plot(mod2dr, what = "scatterplot") plot(mod2dr, what = "boundaries", ngrid = 200) mod3dr <- MclustDR(mod3) summary(mod3dr) plot(mod3dr, what = "scatterplot") plot(mod3dr, what = "boundaries", ngrid = 200) ``` # Using colorblind-friendly palettes Most of the graphs produced by **mclust** use colors that by default are defined in the following options: ```{r} mclust.options("bicPlotColors") mclust.options("classPlotColors") ``` The first option controls colors used for plotting BIC, ICL, etc. curves, whereas the second option is used to assign colors for indicating clusters or classes when plotting data. 
Color-blind-friendly palettes can be defined and assigned to the above options as follows: ```{r} cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") bicPlotColors <- mclust.options("bicPlotColors") bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6]) mclust.options("bicPlotColors" = bicPlotColors) mclust.options("classPlotColors" = cbPalette) clPairs(iris[,-5], iris$Species) mod <- Mclust(iris[,-5]) plot(mod, what = "BIC") plot(mod, what = "classification") ``` The above color definitions are adapted from http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/, but users can easily define their own palettes if needed. # References Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, *The R Journal*, 8/1, pp. 205-233. https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, *Journal of the American Statistical Association*, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. *Technical Report* No. 597, Department of Statistics, University of Washington. mclust/inst/doc/mclust.html0000644000176200001440001267407313205037571015527 0ustar liggesusers A quick tour of mclust

A quick tour of mclust

Luca Scrucca

21 Nov 2017

Introduction

mclust is a contributed R package for model-based clustering, classification, and density estimation based on finite normal mixture modelling. It provides functions for parameter estimation via the EM algorithm for normal mixture models with a variety of covariance structures, and functions for simulation from these models. Also included are functions that combine model-based hierarchical clustering, EM for mixture estimation and the Bayesian Information Criterion (BIC) in comprehensive strategies for clustering, density estimation and discriminant analysis. Additional functionalities are available for displaying and visualizing fitted models along with clustering, classification, and density estimation results.

This document gives a quick tour of mclust (version 5.4) functionalities. It was written in R Markdown, using the knitr package for production. See help(package="mclust") for further details and references provided by citation("mclust").

library(mclust)
##     __  ___________    __  _____________
##    /  |/  / ____/ /   / / / / ___/_  __/
##   / /|_/ / /   / /   / / / /\__ \ / /   
##  / /  / / /___/ /___/ /_/ /___/ // /    
## /_/  /_/\____/_____/\____//____//_/    version 5.4
## Type 'citation("mclust")' for citing this R package in publications.

Clustering

data(diabetes)
class <- diabetes$class
table(class)
## class
## Chemical   Normal    Overt 
##       36       76       33
X <- diabetes[,-1]
head(X)
##   glucose insulin sspg
## 1      80     356  124
## 2      97     289  117
## 3     105     319  143
## 4      90     356  199
## 5      90     323  240
## 6      86     381  157
clPairs(X, class)


BIC <- mclustBIC(X)
plot(BIC)

summary(BIC)
## Best BIC values:
##              VVV,3       VVV,4       EVE,6
## BIC      -4751.316 -4784.32213 -4785.24591
## BIC diff     0.000   -33.00573   -33.92951

mod1 <- Mclust(X, x = BIC)
summary(mod1, parameters = TRUE)
## ----------------------------------------------------
## Gaussian finite mixture model fitted by EM algorithm 
## ----------------------------------------------------
## 
## Mclust VVV (ellipsoidal, varying volume, shape, and orientation) model with 3 components:
## 
##  log.likelihood   n df       BIC       ICL
##       -2303.496 145 29 -4751.316 -4770.169
## 
## Clustering table:
##  1  2  3 
## 81 36 28 
## 
## Mixing probabilities:
##         1         2         3 
## 0.5368974 0.2650129 0.1980897 
## 
## Means:
##              [,1]     [,2]       [,3]
## glucose  90.96239 104.5335  229.42136
## insulin 357.79083 494.8259 1098.25990
## sspg    163.74858 309.5583   81.60001
## 
## Variances:
## [,,1]
##          glucose    insulin       sspg
## glucose 57.18044   75.83206   14.73199
## insulin 75.83206 2101.76553  322.82294
## sspg    14.73199  322.82294 2416.99074
## [,,2]
##           glucose   insulin       sspg
## glucose  185.0290  1282.340  -509.7313
## insulin 1282.3398 14039.283 -2559.0251
## sspg    -509.7313 -2559.025 23835.7278
## [,,3]
##           glucose   insulin       sspg
## glucose  5529.250  20389.09  -2486.208
## insulin 20389.088  83132.48 -10393.004
## sspg    -2486.208 -10393.00   2217.533

plot(mod1, what = "classification")

table(class, mod1$classification)
##           
## class       1  2  3
##   Chemical  9 26  1
##   Normal   72  4  0
##   Overt     0  6 27

par(mfrow = c(2,2))
plot(mod1, what = "uncertainty", dimens = c(2,1), main = "")
plot(mod1, what = "uncertainty", dimens = c(3,1), main = "")
plot(mod1, what = "uncertainty", dimens = c(2,3), main = "")
par(mfrow = c(1,1))


ICL = mclustICL(X)
summary(ICL)
## Best ICL values:
##              VVV,3       EVE,6       EVE,7
## ICL      -4770.169 -4797.38232 -4797.50566
## ICL diff     0.000   -27.21342   -27.33677
plot(ICL)


LRT = mclustBootstrapLRT(X, modelName = "VVV")
LRT
## Bootstrap sequential LRT for the number of mixture components
## -------------------------------------------------------------
## Model        = VVV 
## Replications = 999 
##               LRTS bootstrap p-value
## 1 vs 2   361.16739             0.001
## 2 vs 3   123.49685             0.001
## 3 vs 4    16.76161             0.482

Classification

EDDA

data(iris)
class <- iris$Species
table(class)
## class
##     setosa versicolor  virginica 
##         50         50         50
X <- iris[,1:4]
head(X)
##   Sepal.Length Sepal.Width Petal.Length Petal.Width
## 1          5.1         3.5          1.4         0.2
## 2          4.9         3.0          1.4         0.2
## 3          4.7         3.2          1.3         0.2
## 4          4.6         3.1          1.5         0.2
## 5          5.0         3.6          1.4         0.2
## 6          5.4         3.9          1.7         0.4
mod2 <- MclustDA(X, class, modelType = "EDDA")
summary(mod2)
## ------------------------------------------------
## Gaussian finite mixture model for classification 
## ------------------------------------------------
## 
## EDDA model summary:
## 
##  log.likelihood   n df       BIC
##       -187.7097 150 36 -555.8024
##             
## Classes       n Model G
##   setosa     50   VEV 1
##   versicolor 50   VEV 1
##   virginica  50   VEV 1
## 
## Training classification summary:
## 
##             Predicted
## Class        setosa versicolor virginica
##   setosa         50          0         0
##   versicolor      0         47         3
##   virginica       0          0        50
## 
## Training error = 0.02
plot(mod2, what = "scatterplot")

plot(mod2, what = "classification")

MclustDA

data(banknote)
class <- banknote$Status
table(class)
## class
## counterfeit     genuine 
##         100         100
X <- banknote[,-1]
head(X)
##   Length  Left Right Bottom  Top Diagonal
## 1  214.8 131.0 131.1    9.0  9.7    141.0
## 2  214.6 129.7 129.7    8.1  9.5    141.7
## 3  214.8 129.7 129.7    8.7  9.6    142.2
## 4  214.8 129.7 129.6    7.5 10.4    142.0
## 5  215.0 129.6 129.7   10.4  7.7    141.8
## 6  215.7 130.8 130.5    9.0 10.1    141.4
mod3 <- MclustDA(X, class)
summary(mod3)
## ------------------------------------------------
## Gaussian finite mixture model for classification 
## ------------------------------------------------
## 
## MclustDA model summary:
## 
##  log.likelihood   n df       BIC
##       -646.0801 200 66 -1641.849
##              
## Classes         n Model G
##   counterfeit 100   EVE 2
##   genuine     100   XXX 1
## 
## Training classification summary:
## 
##              Predicted
## Class         counterfeit genuine
##   counterfeit         100       0
##   genuine               0     100
## 
## Training error = 0
plot(mod3, what = "scatterplot")

plot(mod3, what = "classification")

Cross-validation error

unlist(cvMclustDA(mod2, nfold = 10)[2:3])
##     error        se 
## 0.0200000 0.0101835
unlist(cvMclustDA(mod3, nfold = 10)[2:3])
## error    se 
##     0     0

Density estimation

Univariate

data(acidity)
mod4 <- densityMclust(acidity)
summary(mod4)
## -------------------------------------------------------
## Density estimation via Gaussian finite mixture modeling 
## -------------------------------------------------------
## 
## Mclust E (univariate, equal variance) model with 2 components:
## 
##  log.likelihood   n df       BIC       ICL
##       -185.9493 155  4 -392.0723 -398.5554
## 
## Clustering table:
##  1  2 
## 98 57
plot(mod4, what = "BIC")

plot(mod4, what = "density", data = acidity, breaks = 15)

plot(mod4, what = "diagnostic", type = "cdf")

plot(mod4, what = "diagnostic", type = "qq")

Multivariate

data(faithful)
mod5 <- densityMclust(faithful)
summary(mod5)
## -------------------------------------------------------
## Density estimation via Gaussian finite mixture modeling 
## -------------------------------------------------------
## 
## Mclust EEE (ellipsoidal, equal volume, shape and orientation) model with 3 components:
## 
##  log.likelihood   n df       BIC       ICL
##       -1126.326 272 11 -2314.316 -2357.824
## 
## Clustering table:
##   1   2   3 
##  40  97 135
plot(mod5, what = "BIC")

plot(mod5, what = "density")

plot(mod5, what = "density", type = "image", col = "dodgerblue3", grid = 100)

plot(mod5, what = "density", type = "persp")
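As in the univariate case, predict evaluates the fitted density at new points. A minimal sketch, assuming new observations are supplied with columns in the same order used for fitting (eruptions, waiting):

```r
# Density estimates at two illustrative (eruptions, waiting) points.
newpts <- rbind(c(2.0, 55), c(4.5, 80))
predict(mod5, newdata = newpts)
```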

Bootstrap inference

boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs")
summary(boot1, what = "se")
## ----------------------------------------------------------
## Resampling standard errors 
## ----------------------------------------------------------
## Model                      = VVV 
## Num. of mixture components = 3 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## 
## Mixing probabilities:
##          1          2          3 
## 0.05419824 0.05076478 0.03439980 
## 
## Means:
##                 1         2         3
## glucose 0.9971095  3.495092 16.968865
## insulin 7.5755190 28.509588 66.873839
## sspg    7.6801499 31.104490  9.850315
## 
## Variances:
## [,,1]
##          glucose  insulin      sspg
## glucose 10.98617  52.3023  55.67768
## insulin 52.30230 514.5611 423.81468
## sspg    55.67768 423.8147 638.96053
## [,,2]
##           glucose   insulin      sspg
## glucose  73.19889  713.4218  454.1136
## insulin 713.42183 8223.6427 3408.2040
## sspg    454.11357 3408.2040 6701.2674
## [,,3]
##           glucose   insulin      sspg
## glucose 1050.8228  4222.914  656.1974
## insulin 4222.9140 19090.309 2472.4658
## sspg     656.1974  2472.466  490.7419
summary(boot1, what = "ci")
## ----------------------------------------------------------
## Resampling confidence intervals 
## ----------------------------------------------------------
## Model                      = VVV 
## Num. of mixture components = 3 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## Confidence level           = 0.95 
## 
## Mixing probabilities:
##               1         2         3
## 2.5%  0.4391053 0.1522011 0.1304827
## 97.5% 0.6531662 0.3523459 0.2650163
## 
## Means:
## [,,1]
##        glucose  insulin     sspg
## 2.5%  89.22400 343.2214 150.0103
## 97.5% 93.19995 373.9562 181.4140
## [,,2]
##         glucose  insulin     sspg
## 2.5%   98.82161 446.5122 258.6823
## 97.5% 112.49144 556.9686 372.5100
## [,,3]
##        glucose   insulin      sspg
## 2.5%  196.9730  967.6157  62.84289
## 97.5% 264.2195 1231.6510 101.30441
## 
## Variances:
## [,,1]
##        glucose  insulin     sspg
## 2.5%  38.30658 1261.947 1555.379
## 97.5% 81.45462 3217.946 4146.033
## [,,2]
##         glucose   insulin     sspg
## 2.5%   86.63346  3138.875 12422.98
## 97.5% 385.78229 33757.819 38375.98
## [,,3]
##        glucose   insulin     sspg
## 2.5%  3308.589  47536.77 1348.638
## 97.5% 7375.754 121281.74 3292.333

par(mfrow = c(4, 3))
plot(boot1, what = "pro")
plot(boot1, what = "mean")
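MclustBootstrap also implements alternative resampling schemes; a sketch using the weighted likelihood bootstrap, assuming the type = "wlb" option documented in help(MclustBootstrap):

```r
# Weighted likelihood bootstrap as an alternative to the nonparametric
# bootstrap used above ("wlb" as documented in help(MclustBootstrap)).
wlb1 <- MclustBootstrap(mod1, nboot = 999, type = "wlb")
summary(wlb1, what = "se")
```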

boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs")
summary(boot4, what = "se")
## ----------------------------------------------------------
## Resampling standard errors 
## ----------------------------------------------------------
## Model                      = E 
## Num. of mixture components = 2 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## 
## Mixing probabilities:
##          1          2 
## 0.03977832 0.03977832 
## 
## Means:
##          1          2 
## 0.04488707 0.06680960 
## 
## Variances:
##          1          2 
## 0.02458687 0.02458687
summary(boot4, what = "ci")
## ----------------------------------------------------------
## Resampling confidence intervals 
## ----------------------------------------------------------
## Model                      = E 
## Num. of mixture components = 2 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## Confidence level           = 0.95 
## 
## Mixing probabilities:
##               1         2
## 2.5%  0.5452403 0.2994824
## 97.5% 0.7005176 0.4547597
## 
## Means:
##              1        2
## 2.5%  4.280150 6.191873
## 97.5% 4.455027 6.456946
## 
## Variances:
##               1         2
## 2.5%  0.1436647 0.1436647
## 97.5% 0.2379912 0.2379912

par(mfrow = c(2, 2))
plot(boot4, what = "pro")
plot(boot4, what = "mean")

Dimension reduction

Clustering

mod1dr <- MclustDR(mod1)
summary(mod1dr)
## -----------------------------------------------------------------
## Dimension reduction for model-based clustering and classification 
## -----------------------------------------------------------------
## 
## Mixture model type: Mclust (VVV, 3)
##         
## Clusters  n
##        1 81
##        2 36
##        3 28
## 
## Estimated basis vectors:
##              Dir1     Dir2      Dir3
## glucose -0.988671  0.76532 -0.966565
## insulin  0.142656 -0.13395  0.252109
## sspg    -0.046689  0.62955  0.046837
## 
##                Dir1     Dir2      Dir3
## Eigenvalues  1.3506  0.75608   0.53412
## Cum. %      51.1440 79.77436 100.00000
plot(mod1dr, what = "pairs")

plot(mod1dr, what = "boundaries", ngrid = 200)


mod1dr <- MclustDR(mod1, lambda = 1)
summary(mod1dr)
## -----------------------------------------------------------------
## Dimension reduction for model-based clustering and classification 
## -----------------------------------------------------------------
## 
## Mixture model type: Mclust (VVV, 3)
##         
## Clusters  n
##        1 81
##        2 36
##        3 28
## 
## Estimated basis vectors:
##              Dir1     Dir2
## glucose  0.764699  0.86359
## insulin -0.643961 -0.22219
## sspg     0.023438 -0.45260
## 
##                Dir1      Dir2
## Eigenvalues  1.2629   0.35218
## Cum. %      78.1939 100.00000
plot(mod1dr, what = "scatterplot")

plot(mod1dr, what = "boundaries", ngrid = 200)
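The projections of the data onto the estimated directions can also be used directly. A minimal sketch, assuming the fitted 'MclustDR' object stores the projected data in the dir component and the cluster labels in classification (see help(MclustDR) for the components actually returned):

```r
# Manually plot the data projected onto the first two estimated
# directions, colored by cluster membership (component names assumed
# from help(MclustDR)).
plot(mod1dr$dir[, 1:2],
     col = mclust.options("classPlotColors")[mod1dr$classification],
     pch = 19)
```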

Classification

mod2dr <- MclustDR(mod2)
summary(mod2dr)
## -----------------------------------------------------------------
## Dimension reduction for model-based clustering and classification 
## -----------------------------------------------------------------
## 
## Mixture model type: EDDA 
##             
## Classes       n Model G
##   setosa     50   VEV 1
##   versicolor 50   VEV 1
##   virginica  50   VEV 1
## 
## Estimated basis vectors:
##                  Dir1      Dir2     Dir3     Dir4
## Sepal.Length  0.17425 -0.193663  0.64081 -0.46231
## Sepal.Width   0.45292  0.066561  0.34852  0.57110
## Petal.Length -0.61629 -0.311030 -0.42366  0.46256
## Petal.Width  -0.62024  0.928076  0.53703 -0.49613
## 
##                 Dir1     Dir2      Dir3       Dir4
## Eigenvalues  0.94747  0.68835  0.076141   0.052607
## Cum. %      53.69408 92.70374 97.018700 100.000000
plot(mod2dr, what = "scatterplot")

plot(mod2dr, what = "boundaries", ngrid = 200)


mod3dr <- MclustDR(mod3)
summary(mod3dr)
## -----------------------------------------------------------------
## Dimension reduction for model-based clustering and classification 
## -----------------------------------------------------------------
## 
## Mixture model type: MclustDA 
##              
## Classes         n Model G
##   counterfeit 100   EVE 2
##   genuine     100   XXX 1
## 
## Estimated basis vectors:
##              Dir1      Dir2      Dir3      Dir4       Dir5      Dir6
## Length   -0.10139 -0.328225  0.797068 -0.033629 -0.3174275  0.085062
## Left     -0.21718 -0.305014 -0.303111 -0.893349  0.3700659 -0.565410
## Right     0.29222 -0.018401 -0.495891  0.407413 -0.8612986  0.480799
## Bottom    0.57591  0.445352  0.120173 -0.034595  0.0043174 -0.078640
## Top       0.57542  0.385535  0.100865 -0.103623  0.1359128  0.625902
## Diagonal -0.44089  0.672250 -0.047784 -0.151252 -0.0443255  0.209691
## 
##                 Dir1     Dir2     Dir3     Dir4      Dir5       Dir6
## Eigenvalues  0.87242  0.55373  0.48546  0.13291  0.053075   0.027273
## Cum. %      41.05755 67.11689 89.96377 96.21866 98.716489 100.000000
plot(mod3dr, what = "scatterplot")

plot(mod3dr, what = "boundaries", ngrid = 200)

Using colorblind-friendly palettes

By default, most of the graphs produced by mclust use the colors defined in the following options:

mclust.options("bicPlotColors")
##       EII       VII       EEI       EVI       VEI       VVI       EEE 
##    "gray"   "black" "#218B21" "#41884F" "#508476" "#58819C" "#597DC3" 
##       EVE       VEE       VVE       EEV       VEV       EVV       VVV 
## "#5178EA" "#716EE7" "#9B60B8" "#B2508B" "#C03F60" "#C82A36" "#CC0000" 
##         E         V 
##    "gray"   "black"
mclust.options("classPlotColors")
##  [1] "dodgerblue2"    "red3"           "green3"         "slateblue"     
##  [5] "darkorange"     "skyblue1"       "violetred4"     "forestgreen"   
##  [9] "steelblue4"     "slategrey"      "brown"          "black"         
## [13] "darkseagreen"   "darkgoldenrod3" "olivedrab"      "royalblue"     
## [17] "tomato4"        "cyan2"          "springgreen2"

The first option controls the colors used for the BIC, ICL, etc. curves, while the second assigns the colors used to indicate clusters or classes when plotting data.

Colorblind-friendly palettes can be defined and assigned to these options as follows:

cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
bicPlotColors <- mclust.options("bicPlotColors")
bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6])
mclust.options("bicPlotColors" = bicPlotColors)
mclust.options("classPlotColors" = cbPalette)

clPairs(iris[,-5], iris$Species)

mod <- Mclust(iris[,-5])
plot(mod, what = "BIC")

plot(mod, what = "classification")

The above color definitions are adapted from http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/, but users can easily define their own palettes if needed.
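Since overriding these options affects all subsequent mclust plots in the session, it may be convenient to save the defaults beforehand and restore them afterwards, using the same calling syntax shown above:

```r
# Save the default palettes before overriding them
defaultBicColors   <- mclust.options("bicPlotColors")
defaultClassColors <- mclust.options("classPlotColors")
# ... plotting with the custom palette goes here ...
# then restore the defaults
mclust.options("bicPlotColors" = defaultBicColors)
mclust.options("classPlotColors" = defaultClassColors)
```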

References

Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 205-233. https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf

Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631.

Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

mclust/src/0000755000176200001440000000000013205037573012355 5ustar liggesusersmclust/src/Makevars0000644000176200001440000000006213205037573014047 0ustar liggesusersPKG_LIBS = $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS) mclust/src/mclustaddson.f0000644000176200001440000021450613205037573015234 0ustar liggesusers* ===================================================================== subroutine transpose(X, p) * * Compute transpose of a matrix * * ===================================================================== implicit NONE integer :: p, i, j double precision :: X(p,p), temp do j = 2, p do i = 1, j-1 temp = X(i,j) X(i,j) = X(j,i) X(j,i) = temp end do end do return end * ===================================================================== subroutine crossprodf(X, Y, n, p, q, XTY) * * Given matrices X and Y of dimension (n x p) and (n x q) computes * the matrix of cross-product, i.e. X' Y * * ===================================================================== implicit NONE integer n, p, q double precision X(n,p), Y(n,q), XTY(p,q) * Compute X'Y using DGEMM blas subroutine call DGEMM('T', 'N', p, q, n, 1.d0, X, n, Y, n, 0.d0, XTY, p) end * ====================================================================== subroutine covwf ( X, Z, n, p, G, M, S, W ) * * Given data matrix X(n x p) and weight matrix Z(n x G) computes * weighted means M(p x G), weighted covariance matrices S(p x p x G) * and weighted scattering matrices W(p x p x G) * * ====================================================================== implicit none integer :: n, p, G double precision :: X(n,p), Z(n,G) double precision :: M(p,G), S(p,p,G), W(p,p,G) integer :: j, k double precision :: sumZ(G), temp(n,p) * compute X'Z using BLAS call dgemm('T', 'N', p, G, n, 1.d0, X, n, Z, n, 0.d0, M, p) * compute row sums of Z sumZ = sum(Z, DIM = 1) do k = 1,G * compute means call dscal(p, (1.d0/sumZ(k)), M(:,k), 1) do j = 1,p * compute sqrt(Z) * (X - M) temp(:,j) = sqrt(Z(:,k)) * (X(:,j) - M(j,k)) end do * compute 
scattering matrix call dgemm('T', 'N', p, p, n, 1.d0, temp, n, temp, n, * 0.d0, W(:,:,k), p) * compute covariance matrix S(:,:,k) = W(:,:,k)/sumZ(k) end do return end ************************************************************************ **** EVV model ************************************************************************ * ====================================================================== subroutine msevv (x,z, n,p,G, mu,O,U,scale,shape,pro, lwork,info, * eps) * Maximization step for model EEV * ====================================================================== implicit none integer :: n, p, G double precision :: x(n,p), z(n,G) double precision :: mu(p,G), O(p,p,*), U(p,p,*), pro(G) double precision :: scale(G), shape(p,G) double precision :: sumz(G) integer :: i, j, k, info, lwork, dummy, l double precision :: temp(p), wrk(lwork), eps double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) * * double precision :: BIGLOG * parameter (BIGLOG = 709.d0) * * double precision :: SMALOG * parameter (SMALOG = -708.d0) *----------------------------------------------------------------------- * colsums of z sumz = sum(z, dim = 1) * a priori probabilities pro = sumz / dble(n) * pro = sumz / sum(sumz) * if there is noise sum(sumz) does not sum to n. See help(mstep) do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) * U(:,:,k) = U(:,:,k) + * * spread(temp, dim = 2, ncopies = p)* * * spread(temp, dim = 1, ncopies = p) * outer product, Press et al. (1992), p. 
970 call dger(p, p, 1.d0, temp, 1, temp, 1, U(:,:,k), p) * more efficient end do * U contains the weighted scattering matrix O(:,:,k) = U(:,:,k) * call dgesvd('O', 'N', p, p, O(:,:,k), p, shape(:,k), * * dummy, 1, dummy, 1, wrk, lwork, info) call dgesvd('N', 'O', p, p, O(:,:,k), p, shape(:,k), * dummy, 1, dummy, 1, wrk, lwork, info) * O now contains eigenvectors of the scattering matrix * ##### NOTE: O is transposed * shape contains the eigenvalues * check if dgesvd converged (info == 0) if (info .ne. 0) then l = info else scale(k) = exp( sum( log(shape(:,k)) ) )**(1.d0/p) call dscal(p*p, 1.d0/scale(k), U(:,:,k), 1) call dscal(p, 1.d0/scale(k), shape(:,k), 1) * now U is the matrix Ck (Celeux, Govaert 1995, p.787) * and shape is the proper scaled shape (matrix A) end if end do * check very small eigenvalues (singular covariance) if (minval(shape) .le. sqrt(eps) .or. * minval(scale) .le. sqrt(eps)) then shape = FLMAX scale = FLMAX return end if scale(1) = sum(scale) / sum(sumz) return end * ====================================================================== subroutine esevv (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv, * loglik, eps) * Expectation step for model EVV * ====================================================================== implicit none integer :: n, p, G, Gnoise double precision :: x(n,p), z(n,Gnoise) double precision :: mu(p,G), O(p,p,G), scale, shape(p,G) double precision :: Vinv, pro(Gnoise) double precision :: temp1(p), temp2(p), temp3 integer :: i, k, j double precision :: const, logdet, loglik, eps double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- * check very small eigenvalues (singular covariance) if (minval(shape) .le. sqrt(eps) .or. scale .le. 
sqrt(eps)) then loglik = FLMAX return end if const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O(:,:,k), p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/sqrt(scale*shape(:,k)) temp3 = ddot(p, temp2, 1, temp2, 1) * temp3 contains the mahalanobis distance * z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) z(i,k) = const - logdet/2.d0 - temp3/2.d0 * help(cdens) --> The densities are not scaled by mixing proportions end do * z contains the log-density log(N(x|theta_k)) end do if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 0.d0) then call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1) end if * now column Gnoise of z contains log(Vinv) do i = 1,n z(i,:) = z(i,:) + log( pro(:) ) * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) * ##### NOTE: do we need to check if (z - zmax) is too small? 
z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do return end * ====================================================================== subroutine meevv (x,z, n,p,G,Gnoise, mu,O,U,scale,shape,pro,Vinv, * loglik, eqpro,itmax,tol,eps, * niterout,errout,lwork,info) * Maximization-expectation algorithm for model EVV * ====================================================================== implicit none logical :: eqpro integer :: n, p, G, Gnoise double precision :: x(n,p), z(n,Gnoise) double precision :: mu(p,G), O(p,p,G),scale(G),shape(p,G) double precision :: Vinv, pro(Gnoise) double precision :: U(p,p,G), sumz(Gnoise) double precision :: temp1(p), temp2(p), temp3, scsh(p) * double precision :: temp(*) integer :: i, j, k, info, lwork, dummy, l, itmax, niterout double precision :: tol, eps, errout, rteps double precision :: const, logdet, loglik, lkprev, wrk(lwork) double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- l = 0 rteps = sqrt(eps) niterout = 0 errout = FLMAX lkprev = FLMAX/2 loglik = FLMAX const = (-dble(p)/2.d0)*log2pi * WHILE loop using goto statement 100 continue niterout = niterout + 1 sumz = sum(z, dim = 1) if ( eqpro ) then if ( Vinv .gt. 0 ) then pro(Gnoise) = sumz(Gnoise) / dble(n) pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) sumz = pro * dble(n) else pro = 1 / dble(G) sumz = pro * dble(n) end if else pro = sumz / dble(n) end if * re-initialise U call dcopy(p*p*G, 0.d0, 0, U, 1) * M step.......................................................... 
do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1, 1, temp1, 1, U(:,:,k), p) end do * U contains the weighted scattering matrix O(:,:,k) = U(:,:,k) call dgesvd('N', 'O', p, p, O(:,:,k), p, shape(:,k), * dummy, 1, dummy, 1, wrk, lwork, info) * O now contains eigenvectors of the scattering matrix * ##### NOTE: O is transposed * shape contains the eigenvalues * check if dgesvd converged (info == 0) if (info .ne. 0) then l = info return else scale(k) = exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p)) call dscal(p*p, 1.d0/scale(k), U(:,:,k), 1) call dscal(p, 1.d0/scale(k), shape(:,k), 1) * now U is the matrix Ck (Celeux, Govaert 1995, p.787) * and shape is the proper scaled shape (matrix A) end if end do if ( Vinv .gt. 0.d0 ) then scale(1) = sum(scale) / sum(sumz(1:G)) else scale(1) = sum(scale)/dble(n) end if * if noise lambda = num/sum_{k=1}^{G} n_k; pag. 787 Celeux, Govaert * ................................................................ * check very small eigenvalues (singular covariance) if (minval(shape) .le. rteps .or. minval(scale) .le. rteps) then loglik = FLMAX return end if * E step.......................................................... do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale(1)) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed scsh = sqrt(scale(1)*shape(:,k)) do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O(:,:,k), p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/scsh temp3 = ddot(p, temp2, 1, temp2, 1) * temp3 contains the mahalanobis distance z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) end do * z contains the log-density log(N(x|theta_k)) + log(p_k) end do * noise component if (Vinv .gt. 
0.d0) then * call dcopy( n, log(Vinv) + log(pro(Gnoise)), 0, z(:,Gnoise), 1) z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) ) end if * now column Gnoise of z contains log(Vinv) + log(p_0) * with p_0 the proportion of noise loglik = 0.d0 do i = 1,n * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) * ##### NOTE: do we need to check if (z - zmax) is too small? z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do * ................................................................ errout = abs(loglik - lkprev)/(1.d0 + abs(loglik)) * errout = abs(loglik - lkprev) lkprev = loglik * temp(niterout) = loglik * Chris F (June 2015): pro should not be computed in the E-step * sumz = sum(z, dim = 1) * if ( eqpro ) then * if ( Vinv .gt. 0 ) then * pro(Gnoise) = sumz(Gnoise) / dble(n) * pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) * sumz = pro * dble(n) * else * pro = 1 / dble(G) * sumz = pro * dble(n) * end if * else * pro = sumz / dble(n) * end if * check if empty components * if ( minval(pro) .lt. rteps ) then if ( any(sumz .lt. rteps, 1) ) then loglik = -FLMAX return end if * WHILE condition if ( errout .gt. tol .and. niterout .lt. 
itmax ) goto 100 return end ************************************************************************ **** VEE model ************************************************************************ * ====================================================================== subroutine msvee (x,z, n,p,G, mu,U,C,scale,pro, lwork,info, * itmax,tol, niterin,errin, eps) * Maximization step for model VEE * ====================================================================== implicit none integer :: n, p, G double precision :: x(n,p), z(n,G) double precision :: mu(p,G), U(p,p,G), C(p,p), pro(G) * ### NOTE: shape and orientation parameters are computed in R double precision :: scale(G) double precision :: sumz(G) integer :: i, j, k, info, lwork, dummy, l double precision :: temp1(p), temp2(p,p), temp3 double precision :: wrk(lwork), tol, errin, trgt, trgtprev, eps integer :: itmax, niterin double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot *----------------------------------------------------------------------- * colsums of z sumz = sum(z, dim = 1) * a priori probabilities pro = sumz / dble(n) * pro = sumz / sum(sumz) * if there is noise sum(sumz) does not sum to n. See help(mstep) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1, 1, temp1, 1, U(:,:,k), p) end do * U contains the weighted scattering matrix * check if U is positive definite (see help of dpotrf) * (through Choleski is more efficient) temp2 = U(:,:,k) call dpotrf('U', p, temp2, p, info) if ( info .ne. 0 ) then if ( info .lt. 0) then l = info return else if ( info .gt. 
0 ) then info = 0 scale = FLMAX return end if end if end do * covariance matrix components estimation niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * WHILE loop using goto statement 100 continue niterin = niterin + 1 * initialise C call dcopy(p*p, 0.d0, 0, C, 1) * ### NOTE: scale is initialised in R do k = 1,G C = C + U(:,:,k)/scale(k) end do * C contains the numerator of matrix C in pag.785, Celeux, Govaert temp2 = C call dsyev('N', 'U', p, temp2, p, temp1, wrk, lwork, info) temp1 = temp1(p:1:-1) * temp1 contains the (decreasing) ordered eigenvalues of C * check if dsyev converged or illegal value if ( info .ne. 0 ) then l = info return end if temp3 = exp( sum(log(temp1)) )**(1/dble(p)) * temp3 is the denominator of C C = C/temp3 * C is now the actual matrix C of pag.785 * compute the inverse of C via Choleski temp2 = C call dpotrf('U', p, temp2, p, info) if ( info .ne. 0 ) then if ( info .lt. 0) then l = info return else if ( info .gt. 0 ) then info = 0 scale = FLMAX return end if end if call dpotri('U', p, temp2, p, info) if ( info .ne. 0 ) return do j = 2,p do k = 1,(j-1) temp2(j,k) = temp2(k,j) end do end do * temp2 is now the inverse of C scale = 0.d0 do k = 1,G do j = 1,p scale(k) = scale(k) + ddot(p, U(j,:,k), 1, temp2(:,j), 1) end do scale(k) = scale(k) / (dble(p)*sumz(k)) end do * scale contains now the lambdas (pag.784 of Celeux, Govaert) * evaluate target function * trgt = dble(n)*dble(p) + dble(p)*SUM(log(scale)*sumz) trgt = sum(sumz)*dble(p) + dble(p)*SUM(log(scale)*sumz) * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition if ( errin .gt. tol .and. niterin .lt. 
itmax ) goto 100 return end * ====================================================================== subroutine esvee (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv, * loglik, eps) * Expectation step for model VEE * ====================================================================== implicit none integer :: n,p,G,Gnoise double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv double precision :: mu(p,G), O(p,p), scale(G), shape(p) double precision :: temp1(p), temp2(p), temp3 integer :: i, k, j double precision :: const, logdet, loglik, eps double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- * check very small eigenvalues (cannot compute E step) if ( minval(shape) .le. sqrt(eps) .or. * minval(scale) .le. sqrt(eps) ) then loglik = FLMAX return end if const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j)) + log(scale(k)) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O, p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/sqrt(scale(k)*shape) temp3 = ddot(p, temp2, 1, temp2, 1) * temp3 contains the mahalanobis distance * z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) z(i,k) = const - logdet/2.d0 - temp3/2.d0 * help(cdens) --> The densities are not scaled by mixing proportions end do * z contains the log-density log(N(x|theta_k)) end do if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 
0.d0) then call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1) end if * now column Gnoise of z contains log(Vinv) do i = 1,n z(i,:) = z(i,:) + log( pro(:) ) * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do return end * ====================================================================== subroutine mevee ( x,z, n,p,G,Gnoise, mu,C,U,scale,shape,pro,Vinv, * loglik, eqpro,itmaxin,tolin,itmaxout,tolout,eps, * niterin,errin,niterout,errout,lwork,info ) * Maximization-expectation algorithm for model VEE * ====================================================================== implicit none logical :: eqpro integer :: n,p,G,Gnoise double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv double precision :: mu(p,G), C(p,p), scale(G), shape(p) double precision :: U(p,p,G), sumz(Gnoise) double precision :: temp1(p), temp2(p,p), temp3, temp4(p) integer :: i, j, k, info, lwork, dummy, l integer :: itmaxin, itmaxout, niterin, niterout double precision :: tolin, tolout, errin, errout, eps, rteps double precision :: const, logdet, loglik, lkprev, wrk(lwork) double precision :: trgt, trgtprev double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- l = 0 rteps = sqrt(eps) niterout = 0 errout = FLMAX lkprev = FLMAX/2 loglik = FLMAX const = (-dble(p)/2.d0)*log2pi * WHILE loop for EM algorithm 100 continue niterout = niterout + 1 sumz = sum(z, dim = 1) if ( eqpro ) then if ( Vinv .gt. 
0 ) then
            pro(Gnoise) = sumz(Gnoise) / dble(n)
            pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G)
            sumz = pro * dble(n)
         else
            pro  = 1 / dble(G)
            sumz = pro * dble(n)
         end if
      else
         pro = sumz / dble(n)
      end if
* re-initialise U
      call dcopy(p*p*G, 0.d0, 0, U, 1)
* compute weighted scattering matrix and means
      do k = 1,G
        do j = 1,p
          mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k)
        end do
        do i = 1,n
          temp1 = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k))
          call dger(p, p, 1.d0, temp1, 1, temp1, 1, U(:,:,k), p)
        end do
* U contains the weighted scattering matrix
* check if U is positive definite (see help of dpotrf)
* (through Cholesky is more efficient)
        temp2 = U(:,:,k)
        call dpotrf('U', p, temp2, p, info)
        if ( info .ne. 0 ) then
          if ( info .lt. 0 ) then
            l = info
            return
          else if ( info .gt. 0 ) then
            info = 0
            loglik = FLMAX
            return
          end if
        end if
      end do
* M step..........................................................
* covariance matrix components estimation
      niterin  = 0
      errin    = FLMAX
      trgt     = FLMAX
      trgtprev = FLMAX/2
* initialise scale
      call dcopy(G, 1.d0, 0, scale, 1)
* WHILE loop for M step
 110  continue
      niterin = niterin + 1
* initialise C
      call dcopy(p*p, 0.d0, 0, C, 1)
      do k = 1,G
        C = C + U(:,:,k)/scale(k)
      end do
* C contains the numerator of matrix C in pag.785, Celeux, Govaert
      temp2 = C
      call dsyev('N', 'U', p, temp2, p, temp1, wrk, lwork, info)
      temp1 = temp1(p:1:-1)
* temp1 contains the (decreasing) ordered eigenvalues of C
* check if dsyev converged or illegal value
      if ( info .ne. 0 ) then
        l = info
        return
      end if
      temp3 = exp( sum(log(temp1)) )**(1/dble(p))
* temp3 is the denominator of C
      C = C/temp3
* C is now the actual matrix C of pag.785
* compute the inverse of C via Cholesky
      temp2 = C
      call dpotrf('U', p, temp2, p, info)
      if ( info .ne. 0 ) then
        if ( info .lt. 0 ) then
          l = info
          return
        else if ( info .gt. 0 ) then
          info = 0
          loglik = FLMAX
          return
        end if
      end if
      call dpotri('U', p, temp2, p, info)
      if ( info .ne. 0 ) return
      do j = 2,p
        do k = 1,(j-1)
          temp2(j,k) = temp2(k,j)
        end do
      end do
* temp2 is now the inverse of C
      scale = 0.d0
      do k = 1,G
        do j = 1,p
          scale(k) = scale(k) + ddot(p, U(j,:,k), 1, temp2(:,j), 1)
        end do
        scale(k) = scale(k) / (dble(p)*sumz(k))
      end do
* scale contains now the lambdas (pag.784 of Celeux, Govaert)
* evaluate target function
*     trgt = dble(n)*dble(p) + dble(p)*SUM(log(scale)*sumz)
      trgt = sum(sumz(1:G))*dble(p) +
     *       dble(p)*SUM(log(scale)*sumz(1:G))
* error
      errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt))
      trgtprev = trgt
* WHILE condition for M step
      if ( errin .gt. tolin .and. niterin .lt. itmaxin ) goto 110
* ................................................................
* eigenvalues of C
      shape = temp1 / temp3
* check very small eigenvalues (singular covariance)
      if ( minval(shape) .le. rteps .or. minval(scale) .le. rteps ) then
        loglik = FLMAX
        return
      end if
* E step..........................................................
      do k = 1,G
        logdet = 0.d0
        do j = 1,p
          logdet = logdet + log(shape(j)) + log(scale(k))
        end do
* compute mahalanobis distance for each observation
        do i = 1,n
          temp1 = ( x(i,:) - mu(:,k) )
          call dcopy(p, 0.d0, 0, temp4, 1)
          call dgemv('N', p, p, 1.d0,
     *               temp2, p, temp1, 1, 0.d0, temp4, 1)
          temp4 = temp4/scale(k)
          temp3 = ddot(p, temp4, 1, temp1, 1)
* temp3 contains the mahalanobis distance
          z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k))
*         z(i,k) = const - logdet/2.d0 - temp3/2.d0
        end do
* z contains the log-density log(N(x|theta_k)) + log(p_k)
      end do
*     if ( pro(1) .lt. 0.d0 ) return
* cdens function
* noise component
      if ( Vinv .gt. 0.d0 ) then
        z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) )
      end if
* now column Gnoise of z contains log(Vinv) + log(p_0)
* with p_0 the proportion of noise
      loglik = 0.d0
      do i = 1,n
* Numerical Recipes pag.844
        temp3 = maxval(z(i,:))
        temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) )
        loglik = loglik + temp1(1)
        z(i,:) = exp( z(i,:) - temp1(1) )
* re-normalize probabilities
        temp3 = sum( z(i,:) )
        call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 )
      end do
* ................................................................
      errout = abs(loglik - lkprev)/(1.d0 + abs(loglik))
*     errout = abs(loglik - lkprev)
      lkprev = loglik
*     temp(niterout) = loglik
* Chris F (June 2015): pro should not be computed in the E-step
*     sumz = sum(z, dim = 1)
*     if ( eqpro ) then
*       if ( Vinv .gt. 0 ) then
*         pro(Gnoise) = sumz(Gnoise) / dble(n)
*         pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G)
*         sumz = pro * dble(n)
*       else
*         pro = 1 / dble(G)
*         sumz = pro * dble(n)
*       end if
*     else
*       pro = sumz / dble(n)
*     end if
* check if empty components
      if ( minval(sumz) .lt. rteps ) then
        loglik = -FLMAX
        return
      end if
* WHILE condition EM
      if ( errout .gt. tolout .and. niterout .lt.
itmaxout ) goto 100
      return
      end

************************************************************************
****                          EVE model
************************************************************************

* ======================================================================
      subroutine mseve (x,z, n,p,G, mu,U,O,scale,shape,pro, lwork,info,
     *                  itmax,tol, niterin,errin, eps)
*
*     Maximization step for model EVE
* ======================================================================
      implicit none
      integer          :: n, p, G
      double precision :: x(n,p), z(n,G)
      double precision :: mu(p,G), U(p,p,G), pro(G), O(p,p)
      double precision :: scale, shape(p,G)
      double precision :: sumz(G), omega(G)
      integer          :: i, j, k, info, lwork
      double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p)
      double precision :: wrk(lwork), tol, errin, trgt, trgtprev, eps
      integer          :: itmax, niterin
      double precision :: FLMAX
      parameter (FLMAX = 1.7976931348623157d308)
      external         :: ddot
      double precision :: ddot
*-----------------------------------------------------------------------
* colsums of z
      sumz = sum(z, dim = 1)
* a priori probabilities
      pro = sumz / dble(n)
*     pro = sumz / sum(sumz)
* if there is noise sum(sumz) does not sum to n. See help(mstep)
* compute weighted scattering matrix and means
      do k = 1,G
        do j = 1,p
          mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k)
        end do
        do i = 1,n
          temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k))
          call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1,
     *              U(:,:,k), p)
        end do
* U contains the weighted scattering matrix
* compute the eigenvalues of U to be stored in omega
        temp2 = U(:,:,k)
        call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info)
* now temp1 contains all the eigenvalues of U
* check if dsyev converged and positive definite
        if ( info .ne. 0 ) then
          return
        else if ( minval(temp1(:,1)) .lt. sqrt(eps) ) then
          info = 0
          scale = FLMAX
          return
        end if
        omega(k) = temp1(p,1)
      end do
* omega contains the largest eigenvalue of each scattering matrix
      niterin  = 0
      errin    = FLMAX
      trgt     = FLMAX
      trgtprev = FLMAX/2
* covariance matrix components estimation
* we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013
* with a modification in computing the orientation matrix in the MM 2 step
* shape (matrix A) and orientation (matrix D) initialised in R
*     shape = matrix(1, p,G)
*     O = diag(p)
* WHILE loop using goto statement
 100  continue
* ### NOTE: O is transposed
      niterin = niterin + 1
      temp2 = 0.d0
      temp3 = 0.d0
* temp3 will contain matrix F
* Algorithm MM 1 ......................................
      do k = 1,G
        do j = 1,p
*         temp1(j,:) = O(:,j) / shape(j,k)
          temp1(j,:) = O(j,:) / shape(j,k)
        end do
* temp1 contains inv(A)t(D)
        call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p,
     *              0.d0, temp2,p )
* temp2 contains inv(A) %*% t(D) %*% W
        temp1 = temp2 - omega(k)*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(O) = t( R %*% t(P) ) = P %*% t(R)
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* Algorithm MM 2 ......................................
*     call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p,
*    *            0.d0, O,p )
      call transpose(O, p)
* O contains matrix D of Browne, McNicholas
* Algorithm MM 2
      temp1 = 0.d0
      temp3 = 0.d0
      do k = 1,G
        call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p,
     *              0.d0, temp1,p )
* temp1 contains W %*% D
        do j = 1,p
          temp2(:,j) = temp1(:,j) / shape(j,k)
        end do
* temp2 contains W %*% D %*% inv(A)
        temp1 = temp2 - maxval( 1/shape(:,k) )*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(O) = R %*% t(P)
*     call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p,
*    *            0.d0, O,p )
      O = 0.d0
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
* NOTE: we compute the TRANSPOSED of the matrix in the output in the paper
      call transpose(O, p)
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* compute shape (matrix A) and target function
      trgt = 0.d0
      do k = 1,G
        temp1 = 0.d0
        call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p,
     *              0.d0, temp1,p )
* temp1 contains t(D) %*% W
        do j = 1,p
          shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1)
        end do
        shape(:,k) = shape(:,k)/
     *               exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p))
* now shape contains matrix A of Celeux, Govaert pag. 785
* check positive values
        if ( minval(shape(:,k)) .lt. sqrt(eps) ) then
          info = 0
          shape = FLMAX
          return
        end if
        temp4(1) = 0.d0
        do j = 1,p
*         temp2(:,j) = O(:,j) * 1.d0/shape(j,k)
          temp2(:,j) = O(j,:) * 1.d0/shape(j,k)
          temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1)
        end do
        trgt = trgt + temp4(1)
      end do
* error
      errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt))
      trgtprev = trgt
* WHILE condition
      if ( errin .gt. tol .and. niterin .lt. itmax ) goto 100
      scale = trgt / ( sum(sumz)*dble(p) )
      return
      end

* ======================================================================
      subroutine eseve (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv,
     *                  loglik, eps)
*
*     Expectation step for model EVE
* ======================================================================
      implicit none
      integer          :: n, p, G, Gnoise
      double precision :: x(n,p), z(n,Gnoise)
      double precision :: mu(p,G), O(p,p), scale, shape(p,G)
      double precision :: Vinv, pro(Gnoise)
      double precision :: temp1(p), temp2(p), temp3, temp4(n)
      integer          :: i, k, j
      double precision :: const, logdet, loglik, eps
      double precision :: log2pi
      parameter (log2pi = 1.837877066409345d0)
      double precision :: FLMAX
      parameter (FLMAX = 1.7976931348623157d308)
      external         :: ddot
      double precision :: ddot
*     double precision :: smalog
*     parameter (smalog = -708.d0)
*-----------------------------------------------------------------------
* check very small eigenvalues (singular covariance)
      if ( minval(shape) .le. sqrt(eps) .or. scale .le. sqrt(eps) ) then
        loglik = FLMAX
        return
      end if
      const = (-dble(p)/2.d0)*log2pi
      do k = 1,G
        logdet = 0.d0
        do j = 1,p
          logdet = logdet + ( log(shape(j,k)) + log(scale) )
        end do
* compute mahalanobis distance for each observation
* ##### NOTE: O is transposed
        do i = 1,n
          temp1 = ( x(i,:) - mu(:,k) )
          call dcopy(p, 0.d0, 0, temp2, 1)
          call dgemv('N', p, p, 1.d0,
     *               O, p, temp1, 1, 0.d0, temp2, 1)
          temp2 = temp2/sqrt(scale*shape(:,k))
          temp3 = ddot(p, temp2, 1, temp2, 1)
          temp4(i) = temp3
* temp3 contains the mahalanobis distance
*         z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k))
          z(i,k) = const - logdet/2.d0 - temp3/2.d0
* help(cdens) --> The densities are not scaled by mixing proportions
        end do
* z contains the log-density log(N(x|theta_k))
      end do
      if ( pro(1) .lt. 0.d0 ) return
* cdens function
* noise component
      if ( Vinv .gt.
0.d0 ) then
        call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1)
      end if
* now column Gnoise of z contains log(Vinv)
      do i = 1,n
        z(i,:) = z(i,:) + log( pro )
* Numerical Recipes pag.844
        temp3 = maxval(z(i,:))
        temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) )
        loglik = loglik + temp1(1)
* ##### NOTE: do we need to check if (z - zmax) is too small?
        z(i,:) = exp( z(i,:) - temp1(1) )
* re-normalize probabilities
        temp3 = sum( z(i,:) )
        call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 )
      end do
      return
      end

* ======================================================================
      subroutine meeve ( x,z, n,p,G,Gnoise, mu,O,U,scale,shape,pro,Vinv,
     *                   loglik, eqpro,itmaxin,tolin,itmaxout,tolout,
     *                   eps, niterin,errin,niterout,errout,lwork,info )
*
*     Maximization-expectation algorithm for model EVE
* ======================================================================
      implicit none
      logical          :: eqpro
      integer          :: n,p,G,Gnoise
      double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv
      double precision :: mu(p,G), O(p,p), scale, shape(p,G)
      double precision :: U(p,p,G), sumz(Gnoise), omega(G)
      double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p)
      integer          :: i, j, k, info, lwork
      integer          :: itmaxin, itmaxout, niterin, niterout
      double precision :: tolin, tolout, errin, errout, eps, rteps
      double precision :: const, logdet, loglik, lkprev, wrk(lwork)
      double precision :: trgt, trgtprev
      double precision :: log2pi
      parameter (log2pi = 1.837877066409345d0)
      double precision :: FLMAX
      parameter (FLMAX = 1.7976931348623157d308)
      external         :: ddot
      double precision :: ddot
*     double precision :: smalog
*     parameter (smalog = -708.d0)
*-----------------------------------------------------------------------
      rteps    = sqrt(eps)
      niterout = 0
      errout   = FLMAX
      lkprev   = FLMAX/2
      loglik   = FLMAX
      const    = (-dble(p)/2.d0)*log2pi
* WHILE loop for EM algorithm
 100  continue
      niterout = niterout + 1
      sumz = sum(z, dim = 1)
      if ( eqpro ) then
         if ( Vinv .gt.
0 ) then
            pro(Gnoise) = sumz(Gnoise) / dble(n)
            pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G)
            sumz = pro * dble(n)
         else
            pro  = 1 / dble(G)
            sumz = pro * dble(n)
         end if
      else
         pro = sumz / dble(n)
      end if
* re-initialise U
      call dcopy(p*p*G, 0.d0, 0, U, 1)
* compute weighted scattering matrix and means
      do k = 1,G
        do j = 1,p
          mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k)
        end do
        do i = 1,n
          temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k))
          call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1,
     *              U(:,:,k), p)
        end do
* U contains the weighted scattering matrix
* compute the eigenvalues of U to be stored in omega
        temp2 = U(:,:,k)
        call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info)
* now temp1 contains all the eigenvalues of U
* check if dsyev converged and positive definite
        if ( info .ne. 0 ) then
          return
        else if ( minval(temp1(:,1)) .lt. rteps ) then
          info = 0
          scale = FLMAX
          return
        end if
        omega(k) = temp1(p,1)
      end do
* omega contains the largest eigenvalue of each scattering matrix
* M step..........................................................
      niterin  = 0
      errin    = FLMAX
      trgt     = FLMAX
      trgtprev = FLMAX/2
* covariance matrix components estimation
* we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013
* with a modification in computing the orientation matrix in the MM 2 step
* shape (matrix A) and orientation (matrix D) initialised in R
*     shape = matrix(1, p,G)
*     O = diag(p)
* ### NOTE: we don't re-initialize shape and orientation at each
*           outer iteration of the EM algorithm
* WHILE loop for M step
 110  continue
* ### NOTE: O is transposed
      niterin = niterin + 1
      temp2 = 0.d0
      temp3 = 0.d0
* temp3 will contain matrix F
* Algorithm MM 1 ......................................
      do k = 1,G
        do j = 1,p
*         temp1(j,:) = O(:,j) / shape(j,k)
          temp1(j,:) = O(j,:) / shape(j,k)
        end do
* temp1 contains inv(A)t(D)
        call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p,
     *              0.d0, temp2,p )
* temp2 contains inv(A) %*% t(D) %*% W
        temp1 = temp2 - omega(k)*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(O) = t( R %*% t(P) ) = P %*% t(R)
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* Algorithm MM 2 ......................................
*     call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p,
*    *            0.d0, O,p )
      call transpose(O, p)
* O contains matrix D of Browne, McNicholas
* Algorithm MM 2
      temp1 = 0.d0
      temp3 = 0.d0
      do k = 1,G
        call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p,
     *              0.d0, temp1,p )
* temp1 contains W %*% D
        do j = 1,p
          temp2(:,j) = temp1(:,j) / shape(j,k)
        end do
* temp2 contains W %*% D %*% inv(A)
        temp1 = temp2 - maxval( 1/shape(:,k) )*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(O) = R %*% t(P)
*     call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p,
*    *            0.d0, O,p )
      O = 0.d0
* NOTE: we compute the TRANSPOSED of the matrix in the output in the paper
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
      call transpose(O, p)
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* compute shape (matrix A) and target function
      trgt = 0.d0
      do k = 1,G
        temp1 = 0.d0
        call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p,
     *              0.d0, temp1,p )
* temp1 contains t(D) %*% W
        do j = 1,p
          shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1)
        end do
        shape(:,k) = shape(:,k)/
     *               exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p))
* now shape contains matrix A of Celeux, Govaert pag. 785
* check positive values
        if ( minval(shape(:,k)) .lt. rteps ) then
          info = 0
          loglik = FLMAX
          return
        end if
        temp4(1) = 0.d0
        do j = 1,p
*         temp2(:,j) = O(:,j) * 1.d0/shape(j,k)
          temp2(:,j) = O(j,:) * 1.d0/shape(j,k)
          temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1)
        end do
        trgt = trgt + temp4(1)
      end do
* error
      errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt))
      trgtprev = trgt
* WHILE condition M step
      if ( errin .gt. tolin .and. niterin .lt. itmaxin ) goto 110
      scale = trgt / ( sum(sumz(1:G))*dble(p) )
* ................................................................
* E step..........................................................
      const = (-dble(p)/2.d0)*log2pi
      do k = 1,G
        logdet = 0.d0
        do j = 1,p
          logdet = logdet + ( log(shape(j,k)) + log(scale) )
        end do
* compute mahalanobis distance for each observation
* ##### NOTE: O is transposed
        do i = 1,n
          temp1(:,1) = ( x(i,:) - mu(:,k) )
          call dcopy(p, 0.d0, 0, temp2(:,1), 1)
          call dgemv('N', p, p, 1.d0,
     *               O, p, temp1(:,1), 1, 0.d0, temp2(:,1), 1)
          temp2(:,1) = temp2(:,1)/sqrt(scale*shape(:,k))
          temp3(1,1) = ddot(p, temp2(:,1), 1, temp2(:,1), 1)
* temp3 contains the mahalanobis distance
          z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0 + log(pro(k))
*         z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0
        end do
* z contains the log-density log(N(x|theta_k)) + log(p_k)
      end do
* noise component
      if ( Vinv .gt. 0.d0 ) then
        z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) )
      end if
* now column Gnoise of z contains log(Vinv) + log(p_0)
      loglik = 0.d0
      do i = 1,n
* Numerical Recipes pag.844
        temp3(1,1) = maxval(z(i,:))
        temp1(1,1) = temp3(1,1) + log( sum(exp(z(i,:) - temp3(1,1))) )
        loglik = loglik + temp1(1,1)
* ##### NOTE: do we need to check if (z - zmax) is too small?
        z(i,:) = exp( z(i,:) - temp1(1,1) )
* re-normalize probabilities
        temp3(1,1) = sum( z(i,:) )
        call dscal( Gnoise, 1.d0/temp3(1,1), z(i,:), 1 )
      end do
* ................................................................
      errout = abs(loglik - lkprev)/(1.d0 + abs(loglik))
      lkprev = loglik
* Chris F (June 2015): pro should not be computed in the E-step
*     sumz = sum(z, dim = 1)
*     if ( eqpro ) then
*       if ( Vinv .gt. 0 ) then
*         pro(Gnoise) = sumz(Gnoise) / dble(n)
*         pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G)
*         sumz = pro * dble(n)
*       else
*         pro = 1 / dble(G)
*         sumz = pro * dble(n)
*       end if
*     else
*       pro = sumz / dble(n)
*     end if
* check if empty components
      if ( minval(sumz) .lt. rteps ) then
        loglik = -FLMAX
        return
      end if
* WHILE condition EM
      if ( errout .gt. tolout .and. niterout .lt.
itmaxout ) goto 100
      return
      end

************************************************************************
****                          VVE model
************************************************************************

* ======================================================================
      subroutine msvve (x,z, n,p,G, mu,U,O,scale,shape,pro, lwork,info,
     *                  itmax,tol, niterin,errin, eps)
*
*     Maximization step for model VVE
* ======================================================================
      implicit none
      integer          :: n, p, G
      double precision :: x(n,p), z(n,G)
      double precision :: mu(p,G), U(p,p,G), pro(G), O(p,p)
      double precision :: scale(G), shape(p,G)
      double precision :: sumz(G), omega(G)
      integer          :: i, j, k, info, lwork
      double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p)
      double precision :: wrk(lwork), tol, errin, trgt, trgtprev, eps
      integer          :: itmax, niterin
      double precision :: FLMAX
      parameter (FLMAX = 1.7976931348623157d308)
      external         :: ddot
      double precision :: ddot
*-----------------------------------------------------------------------
* colsums of z
      sumz = sum(z, dim = 1)
* a priori probabilities
      pro = sumz / dble(n)
*     pro = sumz / sum(sumz)
* if there is noise sum(sumz) does not sum to n. See help(mstep)
* compute weighted scattering matrix and means
      do k = 1,G
        do j = 1,p
          mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k)
        end do
        do i = 1,n
          temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k))
          call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1,
     *              U(:,:,k), p)
        end do
* U contains the weighted scattering matrix
* compute the eigenvalues of U to be stored in omega
        temp2 = U(:,:,k)
        call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info)
* now temp1 contains all the eigenvalues of U
* check if dsyev converged and positive definite
        if ( info .ne. 0 ) then
          return
        else if ( minval(temp1(:,1)) .lt. sqrt(eps) ) then
          info = 0
          scale = FLMAX
          return
        end if
        omega(k) = temp1(p,1)
      end do
* omega contains the largest eigenvalue of each scattering matrix
      niterin  = 0
      errin    = FLMAX
      trgt     = FLMAX
      trgtprev = FLMAX/2
* covariance matrix components estimation
* we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013
* with a modification in computing the orientation matrix in the MM 2 step
* shape (matrix A) and orientation (matrix D) initialised in R
*     shape = matrix(1, p,G)
*     O = diag(p)
* WHILE loop using goto statement
 100  continue
* ### NOTE: O is transposed
      niterin = niterin + 1
      temp2 = 0.d0
      temp3 = 0.d0
* temp3 will contain matrix F
* Algorithm MM 1 ......................................
      do k = 1,G
        do j = 1,p
*         temp1(j,:) = O(:,j) / shape(j,k)
          temp1(j,:) = O(j,:) / shape(j,k)
        end do
* temp1 contains inv(A)t(D)
        call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p,
     *              0.d0, temp2,p )
* temp2 contains inv(A) %*% t(D) %*% W
        temp1 = temp2 - omega(k)*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(P %*% t(R)) = R %*% t(P)
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* Algorithm MM 2 ......................................
      call transpose(O, p)
* O contains matrix D of Browne, McNicholas
* Algorithm MM 2
      temp1 = 0.d0
      temp3 = 0.d0
      do k = 1,G
        call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p,
     *              0.d0, temp1,p )
* temp1 contains W %*% D
        do j = 1,p
          temp2(:,j) = temp1(:,j) / shape(j,k)
        end do
* temp2 contains W %*% D %*% inv(A)
        temp1 = temp2 - maxval( 1/shape(:,k) )*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(O) = R %*% t(P)
      O = 0.d0
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
      call transpose(O, p)
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* compute shape (matrix A) and target function
      trgt = 0.d0
      do k = 1,G
        temp1 = 0.d0
        call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p,
     *              0.d0, temp1,p )
* temp1 contains t(D) %*% W
        do j = 1,p
          shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1)
        end do
*       shape(:,k) = shape(:,k)/
*    *               exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p))
        shape(:,k) = shape(:,k)/sumz(k)
* now shape contains matrix A (scale*A) of Celeux, Govaert pag. 785
* compute scale parameter and shape matrix A
        scale(k) = exp( sum( log(shape(:,k)) ) )**(1/dble(p))
        shape(:,k) = shape(:,k)/scale(k)
* check positive values
        if ( minval(shape(:,k)) .lt. sqrt(eps) ) then
          info = 0
          shape = FLMAX
          return
        end if
        temp4(1) = 0.d0
        do j = 1,p
*         temp2(:,j) = O(:,j) * 1.d0/shape(j,k)
          temp2(:,j) = O(j,:) * 1.d0/shape(j,k)
          temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1)
        end do
        trgt = trgt + temp4(1)
      end do
* error
      errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt))
      trgtprev = trgt
* WHILE condition
      if ( errin .gt. tol .and. niterin .lt.
itmax ) goto 100
      return
      end

* ======================================================================
      subroutine esvve (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv,
     *                  loglik, eps)
*
*     Expectation step for model VVE
* ======================================================================
      implicit none
      integer          :: n, p, G, Gnoise
      double precision :: x(n,p), z(n,Gnoise)
      double precision :: mu(p,G), O(p,p), scale(G), shape(p,G)
      double precision :: Vinv, pro(Gnoise)
      double precision :: temp1(p), temp2(p), temp3, temp4(n)
      integer          :: i, k, j
      double precision :: const, logdet, loglik, eps
      double precision :: log2pi
      parameter (log2pi = 1.837877066409345d0)
      double precision :: FLMAX
      parameter (FLMAX = 1.7976931348623157d308)
      external         :: ddot
      double precision :: ddot
*     double precision :: smalog
*     parameter (smalog = -708.d0)
*-----------------------------------------------------------------------
* check very small eigenvalues (singular covariance)
      if ( minval(shape) .le. sqrt(eps) .or.
     *     minval(scale) .le. sqrt(eps) ) then
        loglik = FLMAX
        return
      end if
      const = (-dble(p)/2.d0)*log2pi
      do k = 1,G
        logdet = 0.d0
        do j = 1,p
          logdet = logdet + ( log(shape(j,k)) + log(scale(k)) )
        end do
* compute mahalanobis distance for each observation
* ##### NOTE: O is transposed
        do i = 1,n
          temp1 = ( x(i,:) - mu(:,k) )
          call dcopy(p, 0.d0, 0, temp2, 1)
          call dgemv('N', p, p, 1.d0,
     *               O, p, temp1, 1, 0.d0, temp2, 1)
          temp2 = temp2/sqrt(scale(k)*shape(:,k))
          temp3 = ddot(p, temp2, 1, temp2, 1)
          temp4(i) = temp3
* temp3 contains the mahalanobis distance
*         z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k))
          z(i,k) = const - logdet/2.d0 - temp3/2.d0
* help(cdens) --> The densities are not scaled by mixing proportions
        end do
* z contains the log-density log(N(x|theta_k))
      end do
      if ( pro(1) .lt. 0.d0 ) return
* cdens function
* noise component
      if ( Vinv .gt. 0.d0 ) then
        call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1)
      end if
* now column Gnoise of z contains log(Vinv)
      do i = 1,n
        z(i,:) = z(i,:) + log( pro )
* Numerical Recipes pag.844
        temp3 = maxval(z(i,:))
        temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) )
        loglik = loglik + temp1(1)
* ##### NOTE: do we need to check if (z - zmax) is too small?
        z(i,:) = exp( z(i,:) - temp1(1) )
* re-normalize probabilities
        temp3 = sum( z(i,:) )
        call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 )
      end do
      return
      end

* ======================================================================
      subroutine mevve ( x,z, n,p,G,Gnoise, mu,O,U,scale,shape,pro,Vinv,
     *                   loglik, eqpro,itmaxin,tolin,itmaxout,tolout,
     *                   eps, niterin,errin,niterout,errout,lwork,info)
*
*     Maximization-expectation algorithm for model VVE
* ======================================================================
      implicit none
      logical          :: eqpro
      integer          :: n,p,G,Gnoise
      double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv
      double precision :: mu(p,G), O(p,p), scale(G), shape(p,G)
      double precision :: U(p,p,G), sumz(Gnoise), omega(G)
      double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p)
      integer          :: i, j, k, info, lwork
      integer          :: itmaxin, itmaxout, niterin, niterout
      double precision :: tolin, tolout, errin, errout, eps, rteps
      double precision :: const, logdet, loglik, lkprev, wrk(lwork)
      double precision :: trgt, trgtprev
      double precision :: log2pi
      parameter (log2pi = 1.837877066409345d0)
      double precision :: FLMAX
      parameter (FLMAX = 1.7976931348623157d308)
      external         :: ddot
      double precision :: ddot
*     double precision :: smalog
*     parameter (smalog = -708.d0)
*-----------------------------------------------------------------------
      rteps    = sqrt(eps)
      niterout = 0
      errout   = FLMAX
      lkprev   = FLMAX/2
      loglik   = FLMAX
      const    = (-dble(p)/2.d0)*log2pi
* WHILE loop for EM algorithm
 100  continue
      niterout = niterout + 1
      sumz = sum(z, dim = 1)
      if ( eqpro ) then
         if ( Vinv .gt.
0 ) then
            pro(Gnoise) = sumz(Gnoise) / dble(n)
            pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G)
            sumz = pro * dble(n)
         else
            pro  = 1 / dble(G)
            sumz = pro * dble(n)
         end if
      else
         pro = sumz / dble(n)
      end if
* re-initialise U
      call dcopy(p*p*G, 0.d0, 0, U, 1)
* compute weighted scattering matrix and means
      do k = 1,G
        do j = 1,p
          mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k)
        end do
        do i = 1,n
          temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k))
          call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1,
     *              U(:,:,k), p)
        end do
* U contains the weighted scattering matrix
* compute the eigenvalues of U to be stored in omega
        temp2 = U(:,:,k)
        call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info)
* now temp1 contains all the eigenvalues of U
* check if dsyev converged and positive definite
        if ( info .ne. 0 ) then
          return
        else if ( minval(temp1(:,1)) .lt. rteps ) then
          info = 0
          scale = FLMAX
          return
        end if
        omega(k) = temp1(p,1)
      end do
* omega contains the largest eigenvalue of each scattering matrix
* M step..........................................................
      niterin  = 0
      errin    = FLMAX
      trgt     = FLMAX
      trgtprev = FLMAX/2
* covariance matrix components estimation
* we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013
* with a modification in computing the orientation matrix in the MM 2 step
* shape (matrix A) and orientation (matrix D) initialised in R
*     shape = matrix(1, p,G)
*     O = diag(p)
* ### NOTE: we don't re-initialize shape and orientation at each
*           outer iteration of the EM algorithm
* WHILE loop for M step
 110  continue
* ### NOTE: O is transposed
      niterin = niterin + 1
      temp2 = 0.d0
      temp3 = 0.d0
* temp3 will contain matrix F
* Algorithm MM 1 ......................................
      do k = 1,G
        do j = 1,p
*         temp1(j,:) = O(:,j) / shape(j,k)
          temp1(j,:) = O(j,:) / shape(j,k)
        end do
* temp1 contains inv(A)t(D)
        call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p,
     *              0.d0, temp2,p )
* temp2 contains inv(A) %*% t(D) %*% W
        temp1 = temp2 - omega(k)*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(P %*% t(R)) = R %*% t(P)
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
* O contains TRANSPOSED orientation (matrix D of Browne, McNicholas)
* .....................................................
* Algorithm MM 2 ......................................
      call transpose(O, p)
* O contains matrix D of Browne, McNicholas
* Algorithm MM 2
      temp1 = 0.d0
      temp3 = 0.d0
      do k = 1,G
        call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p,
     *              0.d0, temp1,p )
* temp1 contains W %*% D
        do j = 1,p
          temp2(:,j) = temp1(:,j) / shape(j,k)
        end do
* temp2 contains W %*% D %*% inv(A)
        temp1 = temp2 - maxval( 1/shape(:,k) )*temp1
        temp3 = temp3 + temp1
* temp3 contains the matrix F
      end do
* compute matrices P and R where svd(F) = P %*% B %*% t(R)
      call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p,
     *            wrk, lwork, info)
* now temp1 contains matrix P, temp2 contains matrix t(R)
* temp4 contains the singular values
* check if dgesvd converged
      if ( info .ne. 0 ) return
* NOTE: t(O) = R %*% t(P)
      O = 0.d0
      call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p,
     *            0.d0, O,p )
      call transpose(O, p)
* O contains TRANSPOSED matrix D of Browne, McNicholas
* .....................................................
* compute shape (matrix A) and target function
      trgt = 0.d0
      do k = 1,G
        temp1 = 0.d0
        call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p,
     *              0.d0, temp1,p )
* temp1 contains t(D) %*% W
        do j = 1,p
          shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1)
        end do
*       shape(:,k) = shape(:,k)/
*    *               exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p))
        shape(:,k) = shape(:,k)/sumz(k)
* now shape contains matrix A (scale*A) of Celeux, Govaert pag. 785
* compute scale parameter and shape matrix A
        scale(k) = exp( sum( log(shape(:,k)) ) )**(1/dble(p))
        shape(:,k) = shape(:,k)/scale(k)
* check positive values
        if ( minval(shape(:,k)) .lt. rteps .or.
     *       scale(k) .lt. rteps ) then
          info = 0
          loglik = FLMAX
          return
        end if
        temp4(1) = 0.d0
        do j = 1,p
*         temp2(:,j) = O(:,j) * 1.d0/shape(j,k)
          temp2(:,j) = O(j,:) * 1.d0/shape(j,k)
          temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1)
        end do
        trgt = trgt + temp4(1)
      end do
* error
      errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt))
      trgtprev = trgt
* WHILE condition M step
      if ( errin .gt. tolin .and. niterin .lt. itmaxin ) goto 110
*     do k = 1,G
*       scale(k) = exp( sum( log(shape(:,k)) ) )**(1/dble(p))
*       shape(:,k) = shape(:,k)/scale(k)
*     end do
* ................................................................
* E step..........................................................
      const = (-dble(p)/2.d0)*log2pi
      do k = 1,G
        logdet = 0.d0
        do j = 1,p
          logdet = logdet + ( log(shape(j,k)) + log(scale(k)) )
        end do
* compute mahalanobis distance for each observation
* ##### NOTE: O is transposed
        do i = 1,n
          temp1(:,1) = ( x(i,:) - mu(:,k) )
          call dcopy(p, 0.d0, 0, temp2(:,1), 1)
          call dgemv('N', p, p, 1.d0,
     *               O, p, temp1(:,1), 1, 0.d0, temp2(:,1), 1)
          temp2(:,1) = temp2(:,1)/sqrt(scale(k)*shape(:,k))
          temp3(1,1) = ddot(p, temp2(:,1), 1, temp2(:,1), 1)
* temp3 contains the mahalanobis distance
          z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0 + log(pro(k))
*         z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0
        end do
* z contains the log-density log(N(x|theta_k)) + log(p_k)
      end do
* noise component
      if ( Vinv .gt. 0.d0 ) then
        z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) )
      end if
* now column Gnoise of z contains log(Vinv) + log(p_0)
      loglik = 0.d0
      do i = 1,n
* Numerical Recipes pag.844
        temp3(1,1) = maxval(z(i,:))
        temp1(1,1) = temp3(1,1) + log( sum(exp(z(i,:) - temp3(1,1))) )
        loglik = loglik + temp1(1,1)
* ##### NOTE: do we need to check if (z - zmax) is too small?
        z(i,:) = exp( z(i,:) - temp1(1,1) )
* re-normalize probabilities
        temp3(1,1) = sum( z(i,:) )
        call dscal( Gnoise, 1.d0/temp3(1,1), z(i,:), 1 )
      end do
* ................................................................
      errout = abs(loglik - lkprev)/(1.d0 + abs(loglik))
      lkprev = loglik
* Chris F (June 2015): pro should not be computed in the E-step
*     sumz = sum(z, dim = 1)
*     if ( eqpro ) then
*       if ( Vinv .gt. 0 ) then
*         pro(Gnoise) = sumz(Gnoise) / dble(n)
*         pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G)
*         sumz = pro * dble(n)
*       else
*         pro = 1 / dble(G)
*         sumz = pro * dble(n)
*       end if
*     else
*       pro = sumz / dble(n)
*     end if
* check if empty components
      if ( minval(sumz) .lt. rteps ) then
        loglik = -FLMAX
        return
      end if
* WHILE condition EM
      if ( errout .gt. tolout .and. niterout .lt.
     *     itmaxout ) goto 100
      return
      end
mclust/src/init.c0000644000176200001440000004273013205037573013472 0ustar liggesusers
#include <R_ext/RS.h>
#include <stdlib.h> // for NULL
#include <R_ext/Rdynload.h>

/* Routines registration obtained with
   tools::package_native_routine_registration_skeleton()

   FIXME: Check these declarations against the C/Fortran source code.
*/

/* .Fortran calls */
extern void F77_NAME(d2norm)(int*, double*, int*, double*);
extern void F77_NAME(es1e)(void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(es1v)(void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(eseee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(eseei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(eseev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(eseii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(eseve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esevi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esevv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esvee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esvei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esvev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esvii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
extern void F77_NAME(esvve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvvi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hc1e)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hc1v)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hceee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hceii)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hcvii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hcvvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mcltrw)(void *, void *, void *, void *, void *); extern void F77_NAME(me1e)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(me1ep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(me1v)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(me1vp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeeep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void 
F77_NAME(meeei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeeip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeiip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meevi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meevip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meevv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void 
F77_NAME(mevei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meveip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meviip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvvp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mnxiip)(void *, void *, void *, void *, void *, void *, void *, 
void *, void *, void *); extern void F77_NAME(mnxxip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mnxxxp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1e)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1ep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1v)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1vp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseee)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseeep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseei)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseeip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseii)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseiip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msevi)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msevip)(void *, void *, void *, void *, void *, void *, void *, void 
*, void *, void *, void *, void *, void *); extern void F77_NAME(msevv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msveip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvii)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msviip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvi)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvvp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvn1d)(void *, void *, void *, void *, void *); extern void F77_NAME(mvn1p)(void *, void *, void *, void *, void *, void 
*, void *, void *, void *); extern void F77_NAME(mvnxii)(void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvnxxi)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvnxxx)(void *, void *, void *, void *, void *, void *); extern void F77_NAME(shapeo)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(uncholf)(void *, void *, void *, void *, void *);// extern void F77_NAME(covwf)(double*, double*, int*, int*, int*, double*, double*, double*); extern void F77_NAME(crossprodf)(void *, void *, void *, void *, void *, void *); static const R_FortranMethodDef FortranEntries[] = { {"d2norm", (DL_FUNC) &F77_NAME(d2norm), 4}, {"es1e", (DL_FUNC) &F77_NAME(es1e), 9}, {"es1v", (DL_FUNC) &F77_NAME(es1v), 9}, {"eseee", (DL_FUNC) &F77_NAME(eseee), 12}, {"eseei", (DL_FUNC) &F77_NAME(eseei), 11}, {"eseev", (DL_FUNC) &F77_NAME(eseev), 14}, {"eseii", (DL_FUNC) &F77_NAME(eseii), 10}, {"eseve", (DL_FUNC) &F77_NAME(eseve), 14}, {"esevi", (DL_FUNC) &F77_NAME(esevi), 11}, {"esevv", (DL_FUNC) &F77_NAME(esevv), 14}, {"esvee", (DL_FUNC) &F77_NAME(esvee), 14}, {"esvei", (DL_FUNC) &F77_NAME(esvei), 11}, {"esvev", (DL_FUNC) &F77_NAME(esvev), 14}, {"esvii", (DL_FUNC) &F77_NAME(esvii), 10}, {"esvve", (DL_FUNC) &F77_NAME(esvve), 14}, {"esvvi", (DL_FUNC) &F77_NAME(esvvi), 11}, {"esvvv", (DL_FUNC) &F77_NAME(esvvv), 12}, {"hc1e", (DL_FUNC) &F77_NAME(hc1e), 7}, {"hc1v", (DL_FUNC) &F77_NAME(hc1v), 8}, {"hceee", (DL_FUNC) &F77_NAME(hceee), 12}, {"hceii", (DL_FUNC) &F77_NAME(hceii), 9}, {"hcvii", (DL_FUNC) &F77_NAME(hcvii), 10}, {"hcvvv", (DL_FUNC) &F77_NAME(hcvvv), 14}, {"mcltrw", (DL_FUNC) &F77_NAME(mcltrw), 5}, {"me1e", (DL_FUNC) &F77_NAME(me1e), 12}, {"me1ep", (DL_FUNC) &F77_NAME(me1ep), 16}, {"me1v", (DL_FUNC) &F77_NAME(me1v), 12}, {"me1vp", (DL_FUNC) &F77_NAME(me1vp), 16}, {"meeee", (DL_FUNC) &F77_NAME(meeee), 14}, {"meeeep", (DL_FUNC) &F77_NAME(meeeep), 18}, {"meeei", (DL_FUNC) &F77_NAME(meeei), 14}, {"meeeip", (DL_FUNC) 
&F77_NAME(meeeip), 18}, {"meeev", (DL_FUNC) &F77_NAME(meeev), 18}, {"meeevp", (DL_FUNC) &F77_NAME(meeevp), 22}, {"meeii", (DL_FUNC) &F77_NAME(meeii), 13}, {"meeiip", (DL_FUNC) &F77_NAME(meeiip), 17}, {"meeve", (DL_FUNC) &F77_NAME(meeve), 26}, {"meevi", (DL_FUNC) &F77_NAME(meevi), 14}, {"meevip", (DL_FUNC) &F77_NAME(meevip), 18}, {"meevv", (DL_FUNC) &F77_NAME(meevv), 22}, {"mevee", (DL_FUNC) &F77_NAME(mevee), 26}, {"mevei", (DL_FUNC) &F77_NAME(mevei), 17}, {"meveip", (DL_FUNC) &F77_NAME(meveip), 21}, {"mevev", (DL_FUNC) &F77_NAME(mevev), 18}, {"mevevp", (DL_FUNC) &F77_NAME(mevevp), 22}, {"mevii", (DL_FUNC) &F77_NAME(mevii), 13}, {"meviip", (DL_FUNC) &F77_NAME(meviip), 17}, {"mevve", (DL_FUNC) &F77_NAME(mevve), 26}, {"mevvi", (DL_FUNC) &F77_NAME(mevvi), 14}, {"mevvip", (DL_FUNC) &F77_NAME(mevvip), 18}, {"mevvv", (DL_FUNC) &F77_NAME(mevvv), 15}, {"mevvvp", (DL_FUNC) &F77_NAME(mevvvp), 19}, {"mnxiip", (DL_FUNC) &F77_NAME(mnxiip), 10}, {"mnxxip", (DL_FUNC) &F77_NAME(mnxxip), 11}, {"mnxxxp", (DL_FUNC) &F77_NAME(mnxxxp), 11}, {"ms1e", (DL_FUNC) &F77_NAME(ms1e), 7}, {"ms1ep", (DL_FUNC) &F77_NAME(ms1ep), 11}, {"ms1v", (DL_FUNC) &F77_NAME(ms1v), 7}, {"ms1vp", (DL_FUNC) &F77_NAME(ms1vp), 11}, {"mseee", (DL_FUNC) &F77_NAME(mseee), 9}, {"mseeep", (DL_FUNC) &F77_NAME(mseeep), 13}, {"mseei", (DL_FUNC) &F77_NAME(mseei), 9}, {"mseeip", (DL_FUNC) &F77_NAME(mseeip), 13}, {"mseev", (DL_FUNC) &F77_NAME(mseev), 12}, {"mseevp", (DL_FUNC) &F77_NAME(mseevp), 16}, {"mseii", (DL_FUNC) &F77_NAME(mseii), 8}, {"mseiip", (DL_FUNC) &F77_NAME(mseiip), 12}, {"mseve", (DL_FUNC) &F77_NAME(mseve), 18}, {"msevi", (DL_FUNC) &F77_NAME(msevi), 9}, {"msevip", (DL_FUNC) &F77_NAME(msevip), 13}, {"msevv", (DL_FUNC) &F77_NAME(msevv), 14}, {"msvee", (DL_FUNC) &F77_NAME(msvee), 17}, {"msvei", (DL_FUNC) &F77_NAME(msvei), 14}, {"msveip", (DL_FUNC) &F77_NAME(msveip), 18}, {"msvev", (DL_FUNC) &F77_NAME(msvev), 14}, {"msvevp", (DL_FUNC) &F77_NAME(msvevp), 18}, {"msvii", (DL_FUNC) &F77_NAME(msvii), 8}, {"msviip", 
(DL_FUNC) &F77_NAME(msviip), 12}, {"msvve", (DL_FUNC) &F77_NAME(msvve), 18}, {"msvvi", (DL_FUNC) &F77_NAME(msvvi), 9}, {"msvvip", (DL_FUNC) &F77_NAME(msvvip), 13}, {"msvvv", (DL_FUNC) &F77_NAME(msvvv), 10}, {"msvvvp", (DL_FUNC) &F77_NAME(msvvvp), 14}, {"mvn1d", (DL_FUNC) &F77_NAME(mvn1d), 5}, {"mvn1p", (DL_FUNC) &F77_NAME(mvn1p), 9}, {"mvnxii", (DL_FUNC) &F77_NAME(mvnxii), 6}, {"mvnxxi", (DL_FUNC) &F77_NAME(mvnxxi), 7}, {"mvnxxx", (DL_FUNC) &F77_NAME(mvnxxx), 6}, {"shapeo", (DL_FUNC) &F77_NAME(shapeo), 7}, {"uncholf", (DL_FUNC) &F77_NAME(uncholf), 5}, {"covwf", (DL_FUNC) &F77_NAME(covwf), 8}, {"crossprodf", (DL_FUNC) &F77_NAME(crossprodf), 6}, // {NULL, NULL, 0} }; void R_init_mclust(DllInfo *dll) { R_registerRoutines(dll, NULL, NULL, FortranEntries, NULL); R_useDynamicSymbols(dll, FALSE); } mclust/src/mclust.f0000644000176200001440000144013313205037573014041 0ustar liggesusersC modified to avoid printing for calls from Fortran within R double precision function dgamma (x) c jan 1984 edition. w. fullerton, c3, los alamos scientific lab. c jan 1994 wpp@ips.id.ethz.ch, ehg@research.att.com declare xsml double precision x, gamcs(42), dxrel, pi, sinpiy, sq2pil, xmax, 1 xmin, y, d9lgmc, dcsevl, d1mach, dexp, dint, dlog, 2 dsin, dsqrt, xsml C external d1mach, d9lgmc, dcsevl, dexp, dint, dlog, dsin, dsqrt, C 1 initds external d1mach, d9lgmc, dcsevl c c series for gam on the interval 0. 
to 1.00000e+00 c with weighted error 5.79e-32 c log weighted error 31.24 c significant figures required 30.00 c decimal places required 32.05 c data gam cs( 1) / +.8571195590 9893314219 2006239994 2 d-2 / data gam cs( 2) / +.4415381324 8410067571 9131577165 2 d-2 / data gam cs( 3) / +.5685043681 5993633786 3266458878 9 d-1 / data gam cs( 4) / -.4219835396 4185605010 1250018662 4 d-2 / data gam cs( 5) / +.1326808181 2124602205 8400679635 2 d-2 / data gam cs( 6) / -.1893024529 7988804325 2394702388 6 d-3 / data gam cs( 7) / +.3606925327 4412452565 7808221722 5 d-4 / data gam cs( 8) / -.6056761904 4608642184 8554829036 5 d-5 / data gam cs( 9) / +.1055829546 3022833447 3182350909 3 d-5 / data gam cs( 10) / -.1811967365 5423840482 9185589116 6 d-6 / data gam cs( 11) / +.3117724964 7153222777 9025459316 9 d-7 / data gam cs( 12) / -.5354219639 0196871408 7408102434 7 d-8 / data gam cs( 13) / +.9193275519 8595889468 8778682594 0 d-9 / data gam cs( 14) / -.1577941280 2883397617 6742327395 3 d-9 / data gam cs( 15) / +.2707980622 9349545432 6654043308 9 d-10 / data gam cs( 16) / -.4646818653 8257301440 8166105893 3 d-11 / data gam cs( 17) / +.7973350192 0074196564 6076717535 9 d-12 / data gam cs( 18) / -.1368078209 8309160257 9949917230 9 d-12 / data gam cs( 19) / +.2347319486 5638006572 3347177168 8 d-13 / data gam cs( 20) / -.4027432614 9490669327 6657053469 9 d-14 / data gam cs( 21) / +.6910051747 3721009121 3833697525 7 d-15 / data gam cs( 22) / -.1185584500 2219929070 5238712619 2 d-15 / data gam cs( 23) / +.2034148542 4963739552 0102605193 2 d-16 / data gam cs( 24) / -.3490054341 7174058492 7401294910 8 d-17 / data gam cs( 25) / +.5987993856 4853055671 3505106602 6 d-18 / data gam cs( 26) / -.1027378057 8722280744 9006977843 1 d-18 / data gam cs( 27) / +.1762702816 0605298249 4275966074 8 d-19 / data gam cs( 28) / -.3024320653 7353062609 5877211204 2 d-20 / data gam cs( 29) / +.5188914660 2183978397 1783355050 6 d-21 / data gam cs( 30) / -.8902770842 4565766924 
4925160106 6 d-22 / data gam cs( 31) / +.1527474068 4933426022 7459689130 6 d-22 / data gam cs( 32) / -.2620731256 1873629002 5732833279 9 d-23 / data gam cs( 33) / +.4496464047 8305386703 3104657066 6 d-24 / data gam cs( 34) / -.7714712731 3368779117 0390152533 3 d-25 / data gam cs( 35) / +.1323635453 1260440364 8657271466 6 d-25 / data gam cs( 36) / -.2270999412 9429288167 0231381333 3 d-26 / data gam cs( 37) / +.3896418998 0039914493 2081663999 9 d-27 / data gam cs( 38) / -.6685198115 1259533277 9212799999 9 d-28 / data gam cs( 39) / +.1146998663 1400243843 4761386666 6 d-28 / data gam cs( 40) / -.1967938586 3451346772 9510399999 9 d-29 / data gam cs( 41) / +.3376448816 5853380903 3489066666 6 d-30 / data gam cs( 42) / -.5793070335 7821357846 2549333333 3 d-31 / c data pi / 3.1415926535 8979323846 2643383279 50 d0 / c sq2pil is 0.5*alog(2*pi) = alog(sqrt(2*pi)) data sq2pil / 0.9189385332 0467274178 0329736405 62 d0 / data ngam, xmin, xmax, xsml, dxrel / 0, 4*0.d0 / c if (ngam.ne.0) go to 10 ngam = initds (gamcs, 42, 0.1*sngl(d1mach(3)) ) c call d9gaml (xmin, xmax) xsml = exp (max (log(d1mach(1)), -log(d1mach(2)))+0.01d0) dxrel = sqrt (d1mach(4)) c 10 y = abs(x) if (y.gt.10.d0) go to 50 c c compute gamma(x) for -xbnd .le. x .le. xbnd. reduce interval and find c gamma(1+y) for 0.0 .le. y .lt. 1.0 first of all. c n = int(x) if (x.lt.0.d0) n = n - 1 y = x - dble(float(n)) n = n - 1 dgamma = 0.9375d0 + dcsevl (2.d0*y-1.d0, gamcs, ngam) if (n.eq.0) return c if (n.gt.0) go to 30 c c compute gamma(x) for x .lt. 1.0 c n = -n C if (x.eq.0.d0) call seteru (14hdgamma x is 0, 14, 4, 2) if (x.eq.0.d0) dgamma = d1mach(2) if (x.eq.0.d0) return C if (x.lt.0.0d0 .and. x+dble(float(n-2)).eq.0.d0) call seteru ( C 1 31hdgamma x is a negative integer, 31, 4, 2) if (x.lt.0.0d0 .and. x+dble(float(n-2)).eq.0.d0) 1 dgamma = -d1mach(2) if (x.lt.0.0d0 .and. x+dble(float(n-2)).eq.0.d0) return C if (x.lt.(-0.5d0) .and. 
dabs((x-dint(x-0.5d0))/x).lt.dxrel) call C 1 seteru (68hdgamma answer lt half precision because x too near n C 2egative integer, 68, 1, 1) C if (y.lt.xsml) call seteru ( C 1 54hdgamma x is so close to 0.0 that the result overflows, C 2 54, 5, 2) if (y.lt.xsml) dgamma = d1mach(2) if (y.lt.xsml) return c do 20 i=1,n dgamma = dgamma/(x+dble(float(i-1)) ) 20 continue return c c gamma(x) for x .ge. 2.0 and x .le. 10.0 c 30 do 40 i=1,n dgamma = (y+dble(float(i))) * dgamma 40 continue return c c gamma(x) for dabs(x) .gt. 10.0. recall y = dabs(x). c C50 if (x.gt.xmax) call seteru (32hdgamma x so big gamma overflows, C 1 32, 3, 2) 50 if (x.gt.xmax) dgamma = d1mach(2) if (x.gt.xmax) return c dgamma = 0.d0 C if (x.lt.xmin) call seteru (35hdgamma x so small gamma underflows C 1 , 35, 2, 0) if (x.lt.xmin) return c dgamma = exp ((y-0.5d0)*log(y) - y + sq2pil + d9lgmc(y) ) if (x.gt.0.d0) return c C if (dabs((x-dint(x-0.5d0))/x).lt.dxrel) call seteru ( C 1 61hdgamma answer lt half precision, x too near negative integer C 2 , 61, 1, 1) c sinpiy = sin (pi*y) C if (sinpiy.eq.0.d0) call seteru ( C 1 31hdgamma x is a negative integer, 31, 4, 2) if (sinpiy.eq.0.d0) dgamma = -d1mach(2) if (sinpiy.eq.0.d0) return c dgamma = -pi/(y*sinpiy*dgamma) c return end C modified to omit priniting for calls from Fortran within R subroutine d9gaml (xmin, xmax) c june 1977 edition. w. fullerton, c3, los alamos scientific lab. c c calculate the minimum and maximum legal bounds for x in gamma(x). c xmin and xmax are not the only bounds, but they are the only non- c trivial ones to calculate. c c output arguments -- c xmin dble prec minimum legal value of x in gamma(x). any smaller c value of x might result in underflow. c xmax dble prec maximum legal value of x in gamma(x). any larger c value of x might cause overflow. 
c double precision xmin, xmax, alnbig, alnsml, xln, xold, d1mach, 1 dlog C external d1mach, dlog external d1mach c alnsml = log(d1mach(1)) xmin = -alnsml do 10 i=1,10 xold = xmin xln = log(xmin) xmin = xmin - xmin*((xmin+0.5d0)*xln - xmin - 0.2258d0 + alnsml) 1 / (xmin*xln+0.5d0) if (abs(xmin-xold).lt.0.005d0) go to 20 10 continue C call seteru (27hd9gaml unable to find xmin, 27, 1, 2) xmin = d1mach(2) xmax = -d1mach(2) return c 20 xmin = -xmin + 0.01d0 c alnbig = log (d1mach(2)) xmax = alnbig do 30 i=1,10 xold = xmax xln = log(xmax) xmax = xmax - xmax*((xmax-0.5d0)*xln - xmax + 0.9189d0 - alnbig) 1 / (xmax*xln-0.5d0) if (abs(xmax-xold).lt.0.005d0) go to 40 30 continue C call seteru (27hd9gaml unable to find xmax, 27, 2, 2) xmin = d1mach(2) xmax = -d1mach(2) return c 40 xmax = xmax - 0.01d0 xmin = dmax1 (xmin, -xmax+1.d0) c return end double precision function dcsevl (x, a, n) double precision a(n), x, twox, b0, b1, b2 double precision d1mach external d1mach c C if (n.lt.1) call seteru (28hdcsevl number of terms le 0, 28, 2,2) if (n.lt.1) dcsevl = -d1mach(2) if (n.lt.1) return C if (n.gt.1000) call seteru (31hdcsevl number of terms gt 1000, C 1 31, 3, 2) if (n.gt.1000) dcsevl = d1mach(2) if (n.gt.1000) return C if (x.lt.(-1.1d0) .or. x.gt.1.1d0) call seteru ( C 1 25hdcsevl x outside (-1,+1), 25, 1, 1) if (x.lt.(-1.1d0) .or. x.gt.1.1d0) dcsevl = d1mach(2) if (x.lt.(-1.1d0) .or. 
x.gt.1.1d0) return C added by CF to avoid uninitialized warnings b2 = 0 c twox = 2.0d0*x b1 = 0.d0 b0 = 0.d0 do 10 i=1,n b2 = b1 b1 = b0 ni = n - i + 1 b0 = twox*b1 - b2 + a(ni) 10 continue c dcsevl = 0.5d0 * (b0-b2) c return end double precision function d9lgmc (x) double precision x, algmcs(15), xbig, xmax, dcsevl, d1mach external d1mach, dcsevl, initds c data algmcs( 1) / +.1666389480 4518632472 0572965082 2 d+0 / data algmcs( 2) / -.1384948176 0675638407 3298605913 5 d-4 / data algmcs( 3) / +.9810825646 9247294261 5717154748 7 d-8 / data algmcs( 4) / -.1809129475 5724941942 6330626671 9 d-10 / data algmcs( 5) / +.6221098041 8926052271 2601554341 6 d-13 / data algmcs( 6) / -.3399615005 4177219443 0333059966 6 d-15 / data algmcs( 7) / +.2683181998 4826987489 5753884666 6 d-17 / data algmcs( 8) / -.2868042435 3346432841 4462239999 9 d-19 / data algmcs( 9) / +.3962837061 0464348036 7930666666 6 d-21 / data algmcs( 10) / -.6831888753 9857668701 1199999999 9 d-23 / data algmcs( 11) / +.1429227355 9424981475 7333333333 3 d-24 / data algmcs( 12) / -.3547598158 1010705471 9999999999 9 d-26 / data algmcs( 13) / +.1025680058 0104709120 0000000000 0 d-27 / data algmcs( 14) / -.3401102254 3167487999 9999999999 9 d-29 / data algmcs( 15) / +.1276642195 6300629333 3333333333 3 d-30 / c data nalgm, xbig, xmax / 0, 2*0.d0 / c if (nalgm.ne.0) go to 10 nalgm = initds (algmcs, 15, sngl(d1mach(3)) ) xbig = 1.0d0/sqrt(d1mach(3)) xmax = exp (dmin1(log(d1mach(2)/12.d0), -log(12.d0*d1mach(1)))) c C10 if (x.lt.10.d0) 10 if (x.lt.10.d0) d9lgmc = d1mach(2) if (x.lt.10.d0) return if (x.ge.xmax) go to 20 c d9lgmc = 1.d0/(12.d0*x) if (x.lt.xbig) d9lgmc = dcsevl (2.0d0*(10.d0/x)**2-1.d0, algmcs, 1 nalgm) / x return c 20 d9lgmc = 0.d0 C call seteru (34hd9lgmc x so big d9lgmc underflows, 34, 2, 0) return c end double precision function dlngam (x) double precision x, dxrel, pi, sinpiy, sqpi2l, sq2pil, 1 y, xmax, d9lgmc, d1mach C 1 y, xmax, dgamma, d9lgmc, d1mach external d1mach, d9lgmc c data 
sq2pil / 0.9189385332 0467274178 0329736405 62 d0 / c sq2pil = alog (sqrt(2*pi)), sqpi2l = alog(sqrt(pi/2)) data sqpi2l / +.2257913526 4472743236 3097614947 441 d+0 / data pi / 3.1415926535 8979323846 2643383279 50 d0 / c data xmax, dxrel / 2*0.d0 / c C added by CF to avoid uninitialized warnings dlngam = 0.d0 if (xmax.ne.0.d0) go to 10 xmax = d1mach(2)/dlog(d1mach(2)) dxrel = dsqrt (d1mach(4)) c 10 y = abs (x) if (y.gt.10.d0) go to 20 c c dlog (dabs (dgamma(x)) ) for dabs(x) .le. 10.0 c dlngam = log (abs (dgamma(x)) ) return c c dlog ( dabs (dgamma(x)) ) for dabs(x) .gt. 10.0 c C20 if (y.gt.xmax) call seteru ( C 1 39hdlngam dabs(x) so big dlngam overflows, 39, 2, 2) 20 if (y.gt.xmax) dlngam = d1mach(2) if (y.gt.xmax) return c if (x.gt.0.d0) dlngam = sq2pil + (x-0.5d0)*log(x) - x + d9lgmc(y) if (x.gt.0.d0) return c sinpiy = abs (sin(pi*y)) C if (sinpiy.eq.0.d0) call seteru ( C 1 31hdlngam x is a negative integer, 31, 3, 2) if (sinpiy.eq.0.d0) dlngam = -d1mach(2) if (sinpiy.eq.0.d0) return c dlngam = sqpi2l + (x-0.5d0)*log(y) - x - log(sinpiy) - d9lgmc(y) c C if (dabs((x-dint(x-0.5d0))*dlngam/x).lt.dxrel) call seteru ( C 1 68hdlngam answer lt half precision because x too near negative C 2integer, 68, 1, 1) return c end function initds (dos, nos, eta) double precision dos(nos) integer i1mach external i1mach c C if (nos.lt.1) call seteru ( C 1 35hinitds number of coefficients lt 1, 35, 2, 2) if (nos.lt.1) initds = i1mach(9) c C added by CF to avoid uninitialized warnings i = 0 err = 0. 
do 10 ii=1,nos i = nos + 1 - ii err = err + abs(sngl(dos(i))) if (err.gt.eta) go to 20 10 continue c C20 if (i.eq.nos) call seteru (28hinitds eta may be too small, 28, C 1 1, 2) 20 continue initds = i c return end subroutine absrng( l, v, i, vmin, vmax) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE double precision v(*) integer i, j, k, l double precision temp, vmin, vmax c---------------------------------------------------------------------------- temp = abs(v(1)) vmin = temp vmax = temp if (l .eq. 1) return if (i .eq. 1) then do j = 2, l temp = abs(v(j)) vmin = min(vmin,temp) vmax = max(vmax,temp) end do else k = 1 + i do j = 2, l temp = abs(v(k)) vmin = min(vmin,temp) vmax = max(vmax,temp) k = k + i end do end if return end SUBROUTINE D2NORM ( N, X, INCX, VALUE ) * .. Scalar Arguments .. INTEGER INCX, N * .. Array Arguments .. DOUBLE PRECISION X( * ), VALUE * .. * * DNRM2 returns the euclidean norm of a vector via the function * name, so that * * DNRM2 := sqrt( x'*x ) * * THIS FUNCTION MODELLED AFTER DNRM2 BUT WRITTEN AS A SUBROUTINE * * -- This version written on 25-October-1982. * Modified on 14-October-1993 to inline the call to DLASSQ. * Sven Hammarling, Nag Ltd. * * * .. Parameters .. DOUBLE PRECISION ONE , ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. Local Scalars .. INTEGER IX DOUBLE PRECISION ABSXI, NORM, SCALE, SSQ * .. Intrinsic Functions .. INTRINSIC ABS, SQRT * .. * .. Executable Statements .. IF( N.LT.1 .OR. 
INCX.LT.1 )THEN NORM = ZERO ELSE IF( N.EQ.1 )THEN NORM = ABS( X( 1 ) ) ELSE SCALE = ZERO SSQ = ONE * The following loop is equivalent to this call to the LAPACK * auxiliary routine: * CALL DLASSQ( N, X, INCX, SCALE, SSQ ) * DO 10, IX = 1, 1 + ( N - 1 )*INCX, INCX IF( X( IX ).NE.ZERO )THEN ABSXI = ABS( X( IX ) ) IF( SCALE.LT.ABSXI )THEN SSQ = ONE + SSQ*( SCALE/ABSXI )**2 SCALE = ABSXI ELSE SSQ = SSQ + ( ABSXI/SCALE )**2 END IF END IF 10 CONTINUE NORM = SCALE * SQRT( SSQ ) END IF * VALUE = NORM RETURN * * End of D2NORM. * END subroutine mclrup( l, n, v, r, lr) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer l, n, lr double precision cs, sn c double precision v(n), r(lr,n) double precision v(*), r(lr,*) integer i, j, k, m if (l .eq. 1) return k = l - 1 if (k .le. n) then call dcopy( n, v, 1, r(k,1), lr) if (k .eq. 1) return if (n .gt. 1) then i = 1 m = n do j = 2, k call drotg( r(i,i), r(k,i), cs, sn) m = m - 1 call drot( m, r(i,j), lr, r(k,j), lr, cs, sn) i = j end do else call drotg( r(1,1), r(k,1), cs, sn) end if else if (n .gt. 
1) then i = 1 m = n do j = 2, n call drotg( r(i,i), v(i), cs, sn) m = m - 1 call drot( m, r(i,j), lr, v(j), 1, cs, sn) i = j end do end if call drotg( r(n,n), v(n), cs, sn) end if return end subroutine mcltrw( x, n, p, u, ss) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p double precision ss c double precision x(n,p), u(p) double precision x(n,*), u(*) double precision ddot external ddot integer i, j double precision fac double precision zero, one parameter (zero = 0.d0, one = 1.d0) c------------------------------------------------------------------------------ c form mean fac = one / sqrt(dble(n)) call dcopy( p, zero, 0, u, 1) do i = 1, n call daxpy( p, fac, x(i,1), n, u, 1) end do c subtract mean and form sum of squares ss = zero do j = 1, p call daxpy( n, (-fac), u(j), 0, x(1,j), 1) ss = ss + ddot(n, x(1,j), 1, x(1,j), 1) end do return end subroutine mclvol( x, n, p, u, v, w, * work, lwork, iwork, liwork, * info) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, lwork, liwork, info c integer iwork(liwork) integer iwork(*) c double precision x(n,p), u(p), v(p,p), w(p,p), work(lwork), double precision x(n,*), u(*), v(p,*), w(p,p), work(*) integer i, j double precision temp, dummy, cmin, cmax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) double precision FLMAX parameter (FLMAX = 1.7976931348623157D+308) c------------------------------------------------------------------------------ c form mean temp = one / dble(n) call dcopy( p, zero, 0, u, 1) do i = 1, n call daxpy( p, 
temp, x(i,1), n, u, 1) end do c subtract mean do j = 1, p call daxpy( n, (-one), u(j), 0, x(1,j), 1) end do c if (.false.) then c this gets the eigenvectors but x is overwritten c get right singular vectors c call dgesvd( 'N', 'A', n, p, x, n, u, c * dummy, 1, w, p, work, lwork, info) c if (info .lt. 0) return c if (info .eq. 0) then c lwork = int(work(1)) c do i = 1, p c v(i,i) = w(i,i) c if (i .gt. 1) then c do j = 1, (i-1) c v(i,j) = w(j,i) c v(j,i) = w(i,j) c end do c end if c end do c goto 100 c end if c end if c form crossproduct call dsyrk( 'U', 'T', p, n, one, x, n, zero, w, p) c get eigenvectors do j = 1, p do i = 1, j v(i,j) = w(i,j) end do end do call dsyevd( 'V', 'U', p, v, p, u, * work, lwork, iwork, liwork, info) if (info .lt. 0) return if (info .eq. 0) then lwork = int(work(1)) liwork = iwork(1) goto 100 end if c EPSMAX = d1mach(4) call dsyevx( 'V', 'A', 'U', p, w, p, dummy, dummy, i, i, * sqrt(EPSMAX), j, u, v, p, * work, lwork, iwork(p+1), iwork, info) if (info .ne. 0) return lwork = int(work(1)) liwork = -1 100 continue c FLMAX = d1mach(2) c form xv c vol = one do j = 1, p call dgemv( 'N', n, p, one, x, n, v(1,j), 1, zero, work, 1) cmax = -FLMAX cmin = FLMAX do i = 1, n temp = work(i) if (temp .gt. cmax) cmax = temp if (temp .lt. cmin) cmin = temp end do u(j) = cmax - cmin c vol = vol * (cmax - cmin) end do return end subroutine sgnrng( l, v, i, vmin, vmax) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE double precision v(*) integer i, j, k, l double precision temp, vmin, vmax c---------------------------------------------------------------------------- temp = v(1) vmin = temp vmax = temp if (l .eq. 1) return if (i .eq. 
1) then do j = 2, l temp = v(j) vmin = min(vmin,temp) vmax = max(vmax,temp) end do else k = 1 + i do j = 2, l temp = v(k) vmin = min(vmin,temp) vmax = max(vmax,temp) k = k + i end do end if return end subroutine shapeo( TRANSP, s, O, l, m, w, info) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical TRANSP integer l, m, info c double precision s(l), O(l,l,m), w(l,l) double precision s(*), O(l,l,*), w(l,*) integer j, k double precision temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) c------------------------------------------------------------------------------ if (TRANSP) then do j = 1, l temp = sqrt(s(j)) do k = 1, m call dscal( l, temp, O(j,1,k), l) end do end do do k = 1, m call dsyrk( 'U', 'T', l, l, one, O(1,1,k), l, zero, w, l) do j = 1, l call dcopy( j, w(1,j), 1, O(1,j,k), 1) end do do j = 2, l call dcopy( j-1, w(1,j), 1, O(j,1,k), l) end do end do info = 0 return end if if (.not. 
TRANSP) then do j = 1, l temp = sqrt(s(j)) do k = 1, m call dscal( l, temp, O(1,j,k), 1) end do end do do k = 1, m call dsyrk( 'U', 'N', l, l, one, O(1,1,k), l, zero, w, l) do j = 1, l call dcopy( j, w(1,j), 1, O(1,j,k), 1) end do do j = 2, l call dcopy( j-1, w(1,j), 1, O(j,1,k), l) end do end do info = 0 return end if info = -1 return end subroutine uncholf ( UPPER, T, l, n, info) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical UPPER integer l, n, info c double precision T(abs(n), abs(n)) double precision T( l , * ) integer i, j, k double precision ddot external ddot c------------------------------------------------------------------------------ if (UPPER) then do i = 2, n do j = 1, (i-1) T(i,j) = ddot( j, T(1,i), 1, T(1,j), 1) end do end do do k = 1, n T(k,k) = ddot( k, T(1,k), 1, T(1,k), 1) end do do k = 1, n-1 call dcopy( n-k, T(k+1,k), 1, T(k,k+1), l) end do info = 0 return end if if (.not. UPPER) then do i = 2, n do j = 1, (i-1) T(j,i) = ddot( j, T(i,1), l, T(j,1), l) end do end do do k = 1, n T(k,k) = ddot( k, T(k,1), l, T(k,1), l) end do do k = 2, n call dcopy( k-1, T(1,k), 1, T(k,1), l) end do return end if info = -1 return end subroutine wardsw( i, n, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer i, n double precision d(*) integer i1, n1, ii, nn, k double precision temp double precision FLMAX parameter (FLMAX = 1.7976931348623157D+308) *----------------------------------------------------------------------------- i1 = i - 1 ii = (i1*(i1-1))/2 + 1 n1 = n - 1 nn = (n1*(n1-1))/2 + 1 c if (i .gt. 
1) then call dswap( i1, d(nn), 1, d(ii), 1) c call dcopy( i1, FLMAX, 0, d(nn), 1) ii = ii + i1 + i1 nn = nn + i c end if if (n1 .eq. i) return k = i 100 continue temp = d(ii) d(ii) = d(nn) d(nn) = temp c d(nn) = FLMAX ii = ii + k nn = nn + 1 k = k + 1 if (k .lt. n1) goto 100 c d(nn) = FLMAX return end subroutine es1e ( x, mu, sigsq, pro, n, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision sigsq, hood, Vinv c double precision x(n), mu(G), pro(G[+1]), z(n,G[+1]) double precision x(*), mu(*), pro( * ), z(n, * ) integer i, k, nz double precision temp, const, muk, prok, tmin, tmax, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (sigsq .le. zero) then hood = FLMAX return end if const = pi2log + log(sigsq) do k = 1, G muk = mu(k) c prok = pro(k) do i = 1, n temp = x(i) - muk c z(i,k) = prok*exp(-(const+(temp*temp)/sigsq)/two) if (sigsq .lt. one .and. * abs(temp) .ge. sqrt(sigsq)*RTMAX) then hood = FLMAX return end if z(i,k) = -(const+(temp*temp)/sigsq)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. 
zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c temp = zero c do k = 1, nz c temp = temp + z(i,k) c end do c hood = hood + log(temp) c call dscal( nz, (one/temp), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hc1e ( x, n, ic, ng, ns, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, ic(n), ng, ns, nd c double precision x(n), d(ng*(ng-1)/2) double precision x(*), d(*) integer lg, ld, ll, lo, ls integer i, j, k, m integer ni, nj, nij, iopt, jopt, iold, jold integer ij, ici, icj, ii, ik, jk double precision ri, rj, rij, si, sj, sij double precision temp, dij, dopt, dold external wardsw double precision one parameter (one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) c------------------------------------------------------------------------------ iopt = 0 jopt = 0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd c call intpr( 'ic', -1, ic, n) c call intpr( 'no. of groups', -1, lg, 1) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. 
k) then k = k + 1 c call dswap( p, x(k,1), n, x(j,1), n) temp = x(k) x(k) = x(j) x(j) = temp ic(j) = ic(k) ic(k) = icj end if end do 3 continue c call intpr( 'ic', -1, ic, n) do j = 1, n i = ic(j) if (i .ne. j) then ic(j) = 0 ni = ic(i) nij = ni + 1 ic(i) = nij ri = dble(ni) rij = dble(nij) sj = sqrt(one/rij) si = sqrt(ri)*sj c update column sum in kth row c call dscal( p, si, x(i,1), n) c call daxpy( p, sj, x(j,1), n, x(i,1), n) x(i) = si*x(i) + sj*x(j) else ic(j) = 1 end if end do c call intpr( 'ic', -1, ic, n) dopt = FLMAX ij = 0 do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) c call dcopy( p, x(i,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) c dij = ddot(p, v, 1, v, 1) temp = sj*x(i) - si*x(j) dij = temp*temp ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij iopt = i jopt = j end if end do end do c if (.false.) then c i = 1 c ij = 1 c do j = 2, ng c call dblepr( 'dij', -1, d(ij), i) c ij = ij + i c i = j c end do c end if if (ns .eq. 1) then if (iopt .lt. jopt) then x(1) = dble(iopt) ic(1) = jopt else x(1) = dble(jopt) ic(1) = iopt end if d(1) = dopt return end if ls = 1 100 continue ni = ic(iopt) nj = ic(jopt) nij = ni + nj ic(iopt) = nij ic(jopt) = -iopt if (jopt .ne. lg) then call wardsw( jopt, lg, d) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if si = dble(ni) sj = dble(nj) sij = dble(nij) dold = dopt iold = iopt jold = jopt iopt = -1 jopt = -1 dopt = FLMAX lg = lg - 1 ld = ld - lg ii = (iold*(iold-1))/2 if (iold .gt. 1) then ik = ii - iold + 1 do j = 1, (iold - 1) nj = ic(j) rj = dble(nj) ik = ik + 1 jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij end do end if if (iold .lt. 
lg) then ik = ii + iold i = iold do j = (iold + 1), lg nj = ic(j) rj = dble(nj) jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij ik = ik + i i = j end do end if d(lo) = dold lo = lo - 1 d(lo) = dble(iold) lo = lo - 1 d(lo) = dble(jold) lo = lo - 1 c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 do i = 2, ld si = d(i) if (si .le. dopt) then ij = i dopt = si end if end do if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. sj) then x(1) = si d(ld) = sj else x(1) = sj d(ld) = si end if ld = ld - 1 lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. 
icj) then x(k) = dble(ici) d(ld) = dble(icj) else x(k) = dble(icj) d(ld) = dble(ici) end if ld = ld - 1 end do ld = nd lo = nd - 1 do k = 1, ns ic(k) = int(d(lo)) lo = lo - 1 ld = ld - 1 d(ld) = d(lo) lo = lo - 1 end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine me1e ( EQPRO, x, n, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq, pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq, pro( * ) integer nz, iter, k, i double precision hold, hood, err, prok, tmin, tmax, ViLog double precision const, sum, sumz, smu, temp, term, zsum double precision rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( nz, one/dble(nz), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sumz = zero sigsq = zero zsum = one do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(sum,zsum) if (sum .gt. 
rteps) then smu = smu / sum mu(k) = smu do i = 1, n temp = x(i) - smu temp = temp*temp sigsq = sigsq + z(i,k)*temp z(i,k) = temp end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if if (Vinv .le. zero) then sigsq = sigsq / dble(n) else sigsq = sigsq / sumz end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = pi2log + log(sigsq) do k = 1, G c temp = pro(k) do i = 1, n c z(i,k) = temp*exp(-(const+(z(i,k)/sigsq))/two) z(i,k) = -(const+(z(i,k)/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine me1ep ( EQPRO, x, n, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision pshrnk, pmu, pscale, pdof double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq, pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq, pro( * ) integer nz, iter, k, i double precision hold, hood, err, prok, tmin, tmax, ViLog double precision const, sum, sumz, smu, temp, term, zsum double precision pmupmu, cgam, cmu, rmu, rgam, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision dlngam external dlngam c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( nz, one/dble(nz), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX pmupmu = pmu*pmu iter = 0 100 continue iter = iter + 1 sigsq = zero zsum = one do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(zsum,sumz) if (sumz .gt. 
rteps) then smu = smu/sumz sum = zero do i = 1, n term = x(i) - smu term = term*term sum = sum + z(i,k)*term end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu sigsq = sigsq + sum + term*temp term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if sigsq = (pscale + sigsq)/(pdof + dble(n+G) + two) c if (Vinv .le. zero) then c sigsq = sigsq / dble(n) c else c sigsq = sigsq / sumz c end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = pi2log + log(sigsq) do k = 1, G c temp = pro(k) do i = 1, n term = x(i) - mu(k) z(i,k) = -(const+((term*term)/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter cmu = dble(G)*(pi2log-log(pshrnk))/two sum = zero do k = 1, G temp = pmu - mu(k) temp = temp*temp sum = sum - (pshrnk/sigsq)*temp end do term = log(sigsq) rmu = (sum - dble(G)*term)/two temp = pdof/two cgam = temp*log(pscale/two) - dlngam(temp) rgam = -(temp+one)*term - (pscale/sigsq)/two pdof = (cmu+cgam) + (rmu+rgam) return end subroutine ms1e ( x, z, n, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G c double precision x(n), z(n,G), mu(G), sigsq, pro(G) double precision x(*), z(n,*), mu(*), sigsq, pro(*) integer i, k double precision sum, smu, sumz, temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ sumz = zero sigsq = zero do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sigsq .gt. one .or. smu .le. sum*FLMAX) then smu = smu / sum mu(k) = smu if (sigsq .ne. FLMAX) then do i = 1, n temp = abs(x(i) - smu) sigsq = sigsq + z(i,k)*(temp*temp) end do end if else mu(k) = FLMAX sigsq = FLMAX end if end do c sumz .eq. n when no noise if (sigsq .ne. 
FLMAX) sigsq = sigsq / sumz return end subroutine ms1ep ( x, z, n, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision pshrnk, pmu, pscale, pdof c double precision x(n), z(n,G), mu(G), sigsq, pro(G) double precision x(*), z(n,*), mu(*), sigsq, pro(*) integer k, i double precision pmupmu double precision sum, sumz, smu, temp, term double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pmupmu = pmu*pmu sigsq = zero do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do pro(k) = sumz / dble(n) if (sumz .gt. one .or. smu .lt. sumz*FLMAX) then smu = smu/sumz sum = zero term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu if (sigsq .ne. FLMAX) then do i = 1, n term = abs(x(i) - smu) sum = sum + z(i,k)*(term*term) end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu sigsq = sigsq + sum + term*temp end if else mu(k) = FLMAX sigsq = FLMAX end if end do if (sigsq .ne. FLMAX) then temp = pdof + dble(n) + two if (pshrnk .gt. 
zero) temp = temp + dble(G) sigsq = (pscale + sigsq)/temp end if return end subroutine eseee ( CHOL, x, mu, Sigma, pro, n, p, G, Vinv, * w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c character CHOL logical CHOL c integer n, p, G integer n, p, G double precision hood, Vinv c double precision x(n,p), w(p), z(n,G[+1]) double precision x(n,*), w(*), z(n, * ) c double precision mu(p,G), Sigma(p,p), pro(G[+1]) double precision mu(p,*), Sigma(p,*), pro( * ) integer info, i, j, k, nz double precision rteps, detlog, prok, tmin, tmax double precision umin, umax, const, temp, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ c if (CHOL .eq. 'N') then if (.not. CHOL) then c Cholesky factorization call dpotrf( 'U', p, Sigma, p, info) if (info .ne. 0) then w(1) = dble(info) hood = FLMAX return end if end if call absrng( p, Sigma, (p+1), umin, umax) c rc = umin/(one+umax) if (umax .le. one .and. umax .ge. umin*RTMAX) then w(1) = zero hood = FLMAX return end if if (umax .ge. one .and. umin .le. 
umax*RTMIN) then w(1) = zero hood = FLMAX return end if detlog = zero do j = 1, p detlog = detlog + log(abs(Sigma(j,j))) end do const = dble(p)*pi2log/two + detlog do k = 1, G c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, Sigma, p, w, 1) temp = ddot( p, w, 1, w, 1)/two c z(i,k) = prok*exp(-(const+temp)) z(i,k) = -(const+temp) end do end do w(1) = zero if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then w(1) = zero hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do w(1) = zero return end double precision function detmc2( n, u) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer k, n double precision q double precision u(n,*) double precision zero, two parameter (zero = 0.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) detmc2 = zero do k = 1, n q = u(k,k) if (q .eq. 
zero) then detmc2 = -FLMAX return end if detmc2 = detmc2 + log(abs(q)) end do detmc2 = two*detmc2 return end subroutine meeee ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, U, pro, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) integer nz, p1, iter, i, j, k, j1 double precision piterm, sclfac, sumz, sum, zsum double precision cs, sn, umin, umax, rc, detlog, rteps double precision const, hold, hood, err, temp, term double precision prok, tmin, tmax, ViLog double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if piterm = dble(p)*pi2log/two p1 = p + 1 eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX c zero out the lower triangle i = 1 do j = 2, p call dcopy( p-i, zero, 0, U(j,i), 1) i = j end do iter = 0 100 continue iter = iter + 1 do j = 1, p call dcopy( j, zero, 0, U(1,j), 1) end do sumz = zero zsum = one do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. 
EQPRO) pro(k) = sum / dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX hood = eps maxi = iter return end if if (Vinv .le. zero) then sclfac = one/sqrt(dble(n)) else sclfac = one/sqrt(sumz) end if do j = 1, p call dscal( j, sclfac, U(1,j), 1) end do c condition number call absrng( p, U, p1, umin, umax) rc = umin/(one+umax) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (rc .le. rteps) then tol = err eps = FLMAX hood = eps maxi = iter return end if detlog = zero do j = 1, p detlog = detlog + log(abs(U(j,j))) end do const = piterm + detlog do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, U, p, w, 1) sum = ddot( p, w, 1, w, 1)/two c z(i,k) = temp * exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meeeep( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, U, pro, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) integer nz, p1, iter, i, j, k, j1 double precision piterm, sclfac, sumz, sum, zsum double precision cs, sn, umin, umax, rc, detlog, rteps double precision const, hold, hood, err, temp, term double precision prok, tmin, tmax, ViLog double precision cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision twolog parameter (twolog = 0.6931471805599453d0) double precision pilog parameter (pilog = 1.144729885849400d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if piterm = dble(p)*pi2log/two p1 = p + 1 sclfac = one/sqrt(dble(n)) eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 c copy pscale to U do j = 1, p call dcopy( p, pscale(1,j), 1, U(1,j), 1) end do sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) const = sum + pshrnk temp = (sum*pshrnk)/const call dscal( p, sqrt(temp), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) call dscal( p, sum/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = pdof + dble(p) + one if (pshrnk .gt. zero) term = term + dble(G) if (Vinv .le. zero) then sclfac = one/sqrt(term+dble(n)) else sclfac = one/sqrt(term+dble(sumz)) end if do j = 1, p call dscal( j, sclfac, U(1,j), 1) end do if (Vinv .gt. 
zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if c condition number call absrng( p, U, p1, umin, umax) rc = umin/(one+umax) if (rc .le. rteps) then tol = err eps = FLMAX maxi = iter return end if detlog = zero do j = 1, p detlog = detlog + log(abs(U(j,j))) end do const = piterm + detlog do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, U, p, w, 1) sum = ddot( p, w, 1, w, 1)/two c z(i,k) = temp * exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter if (pshrnk .gt. 
zero) then cmu = dble(p)*(log(pshrnk) - pi2log)/two rmu = zero do k = 1, G call daxpy( p, (-one), mu(1,k), 1, pmu, 1) call dtrsv('U','T','N',p,U,p,pmu,1) rmu = rmu + ddot( p, pmu, 1, pmu, 1) end do sum = zero term = zero temp = zero do j = 1, p call dcopy( p, pscale(j,1), p, pmu, 1) c call dtrsv('U','T','N', p, U, p, pmu, 1) i = p-j+1 c call dtrsv('U','T','N', i, U(j,j),i,pmu(j),1) call dtrsv('U','T','N', i, U(j,j),p,pmu(j),1) sum = sum + ddot(i, pmu(j), 1, pmu(j), 1) temp = temp + log(abs(pscale(j,j))) term = term + dlngam((pdof+one-dble(j))/two) end do rmu = -(detlog+pshrnk*rmu/two) const = -dble(p)*(pdof*twolog+(dble(p)-one)*pilog/two) cgam = (const/two-pdof*temp) - term rgam = -((pdof+dble(p)+one)*detlog + sum/two) pdof = (dble(G)*cmu+rmu) + (cgam+rgam) else pdof = FLMAX end if return end subroutine mseee ( x, z, n, p, G, w, mu, U, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) c------------------------------------------------------------------------------ c c x double (input) (n,p) matrix of observations. c z double (input) (n,G) conditional probabilities. c n integer (input) number of observations. c p integer (input) dimension of the data. c G integer (input) number of Gaussian clusters in the mixture. c w double (scratch) (p) c mu double (output) (p,G) mean for each group. c U double (output) (p,p) upper triangular Cholesky factor of the c common covariance matrix for the groups: transpose(U) * U = Sigma. c pro double (output) (G) mixing proportions (ignore result if equal). 
      integer            i, j, k, j1

      double precision   sum, sumz, zsum, temp, cs, sn

      double precision   zero, one
      parameter         (zero = 0.d0, one = 1.d0)

      double precision   FLMAX
      parameter         (FLMAX = 1.7976931348623157d308)

c------------------------------------------------------------------------------

      do j = 1, p
        call dcopy( p, zero, 0, U(1,j), 1)
      end do

      sumz = zero
      zsum = one
      do k = 1, G
        sum = zero
        call dcopy( p, zero, 0, mu(1,k), 1)
        do i = 1, n
          temp = z(i,k)
          sum  = sum + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
        end do
        sumz   = sumz + sum
        pro(k) = sum / dble(n)
        if (sum .gt. one .or. one .lt. sum*FLMAX) then
          zsum = min(zsum,sum)
          call dscal( p, (one/sum), mu(1,k), 1)
          do i = 1, n
            call dcopy( p, x(i,1), n, w, 1)
            call daxpy( p, (-one), mu(1,k), 1, w, 1)
            call dscal( p, sqrt(z(i,k)), w, 1)
            j = 1
            do j1 = 2, p
              call drotg( U(j,j), w(j), cs, sn)
              call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn)
              j = j1
            end do
            call drotg( U(p,p), w(p), cs, sn)
          end do
        else
          zsum = zero
          call dcopy( p, FLMAX, 0, mu(1,k), 1)
        end if
      end do

      if (zsum .eq. zero) return

c     sumz .eq.
n when no noise

      do j = 1, p
        call dscal( j, one/sqrt(sumz), U(1,j), 1)
      end do

      return
      end

      subroutine mseeep( x, z, n, p, G,
     *                   pshrnk, pmu, pscale, pdof,
     *                   w, mu, U, pro)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer            n, p, G

c     double precision   x(n,p), z(n,G), w(p)
      double precision   x(n,*), z(n,*), w(*)

c     double precision   pshrnk, pmu(p), pscale(p,p), pdof
      double precision   pshrnk, pmu(*), pscale(p,*), pdof

c     double precision   mu(p,G), U(p,p), pro(G)
      double precision   mu(p,*), U(p,*), pro(*)

      integer            i, j, k, j1

      double precision   sclfac, const, temp
      double precision   sum, sumz, zsum, cs, sn

      double precision   zero, one
      parameter         (zero = 0.d0, one = 1.d0)

      double precision   FLMAX
      parameter         (FLMAX = 1.7976931348623157d308)

c------------------------------------------------------------------------------

      if (pshrnk .le. zero) pshrnk = zero

      do j = 1, p
        call dcopy( p, pscale(1,j), 1, U(1,j), 1)
      end do

      sumz = zero
      zsum = one
      do k = 1, G
        sum = zero
        call dcopy( p, zero, 0, mu(1,k), 1)
        do i = 1, n
          temp = z(i,k)
          sum  = sum + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
        end do
        sumz   = sumz + sum
        pro(k) = sum / dble(n)
        if (sum .gt. one .or. one .lt.
sum*FLMAX) then zsum = min(zsum,sum) call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) const = sum + pshrnk temp = (sum*pshrnk)/const call dscal( p, sqrt(temp), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) call dscal( p, sum/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else zsum = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .eq. zero) return temp = pdof+dble(n+p+1) if (pshrnk .gt. zero) temp = temp + dble(G) sclfac = one/sqrt(temp) do j = 1, p call dscal( j, sclfac, U(1,j), 1) end do return end subroutine eseei ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision scale, hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p), pro(G[+1]) double precision mu(p,*), shape(*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) 
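c Every E-step and me* routine in this file normalizes the responsibilities
c with the same log-sum-exp device: z(i,k) first holds log(pro(k)) plus the
c log-density, the per-row maximum tmax is subtracted before exponentiating,
c terms below SMALOG are flushed to zero, and log(sum)+tmax is added to hood.
c A minimal pure-Python sketch of one row of that computation (the function
c name is mine, not part of the package):

```python
import math

SMALOG = -708.0  # same underflow cutoff as the Fortran parameter above

def logsumexp_estep_row(logpz):
    """Given log(pro_k) + log f_k(x_i) for one observation, return the
    normalized posterior probabilities z_ik and the observation's
    contribution log(sum_k exp(.)) to the log-likelihood, computed
    stably by shifting with the row maximum before exponentiating."""
    tmax = max(logpz)
    zrow = [math.exp(t - tmax) if t - tmax >= SMALOG else 0.0
            for t in logpz]
    s = sum(zrow)
    return [v / s for v in zrow], math.log(s) + tmax
```

c Shifting by tmax keeps the largest exponent at exp(0) = 1, so the sum
c never overflows even when every log term is far below log(DBL_MIN).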
c------------------------------------------------------------------------------ if (scale .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if temp = sqrt(scale) do j = 1, p shape(j) = temp*sqrt(shape(j)) end do const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (abs(temp) .ge. shape(j)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine meeei ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p), pro(G[+1]) double precision mu(p,*), shape(*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, zsum double precision const, hold, hood, err, smin, smax double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, zero, 0, shape, 1) sumz = zero zsum = one do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum/dble(n) zsum = min(zsum,sum) if (sum .gt. 
rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j) = shape(j) + sum end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) then scale = zero tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (Vinv .le. zero) then scale = temp/dble(n) else scale = temp/sumz end if if (temp .le. eps) then tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps .or. scale .le. eps) then tol = err eps = FLMAX maxi = iter return end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meeeip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p), pro(G[+1]) double precision mu(p,*), shape(*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, zsum double precision const, hold, hood, err, smin, smax double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, pscale, 0, shape, 1) sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum if (.not. 
EQPRO) pro(k) = sum/dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + (temp*temp) end do shape(j) = shape(j) + sum temp = pmu(j) - mu(j,k) shape(j) = shape(j) + const*(temp*temp) end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) then scale = zero tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if term = pdof + one if (pshrnk .gt. zero) term = term + one if (Vinv .le. zero) then scale = temp/(term + dble(n)) else scale = temp/(term + sumz) end if if (temp .le. eps) then tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps .or. scale .le. eps) then tol = err eps = FLMAX maxi = iter return end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine mseei ( x, z, n, p, G, mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p), pro(G) double precision mu(p,*), scale, shape(*), pro(*) integer i, j, k double precision sum, sumz, temp, smin, smax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ sumz = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .gt. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do call dcopy( p, zero, 0, shape, 1) do j = 1, p sum = zero do i = 1, n do k = 1, G if (mu(1,k) .eq. FLMAX) then scale = FLMAX return end if temp = z(i,k)*(x(i,j) - mu(j,k)) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do end do shape(j) = shape(j) + sum end do call sgnrng(p, shape, 1, smin, smax) if (smin .eq. 
zero) then scale = zero return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (sumz .lt. one .and. temp .ge. sumz*FLMAX) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if scale = temp/sumz if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/temp, shape, 1) return end subroutine mseeip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p), pro(G[+1]) double precision mu(p,*), scale, shape(*), pro( * ) integer i, j, k double precision sum, sumz, temp, term double precision const, smin, smax double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero call dcopy( p, pscale, 0, shape, 1) sumz = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .gt. one .or. one .lt. 
sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do shape(j) = shape(j) + sum temp = pmu(j) - mu(j,k) shape(j) = shape(j) + const*(temp*temp) end do else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do call sgnrng(p, shape, 1, smin, smax) if (smin .eq. zero) then scale = zero return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .gt. SMALOG) then smin = exp(temp) else smin = zero end if term = pdof + sumz + two if (pshrnk .gt. zero) term = term + dble(G) scale = smin/term if (smin .lt. one .and. one .ge. smin*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/smin, shape, 1) return end subroutine eseev ( x, mu, scale, shape, O, pro, n, p, G, * Vinv, v, w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p, G integer n, p, G double precision scale, Vinv, hood c double precision x(n,p), v(p), w(p), z(n,G[+1]) double precision x(n,*), v(*), w(*), z(n, * ) c double precision mu(p,G), shape(p), O(p,p,G), pro(G[+1]) double precision mu(p,*), shape(*), O(p,p,*), pro( * ) integer i, j, k, nz double precision const, temp, tmin, tmax double precision smin, smax, prok, eps, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot 
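c eseev above evaluates the EEV log-density from the decomposition
c Sigma_k = scale * O_k' diag(shape) O_k with prod(shape) = 1, so that
c log|Sigma_k| = p*log(scale).  A pure-Python sketch of the exponent stored
c in z(i,k) (my function name; O is taken row-wise as the rotation applied
c to x - mu, matching the dgemv call in the Fortran):

```python
import math

def eev_logdens(xi, mu, scale, shape, O):
    """Log-density exponent as in eseev: rotate x - mu into the component's
    principal axes, then -(p*log(2*pi) + p*log(scale) + mahalanobis)/2,
    where mahalanobis sums v_j^2 / (scale * shape_j)."""
    p = len(xi)
    pi2log = math.log(2.0 * math.pi)
    w = [xi[j] - mu[j] for j in range(p)]
    # v = O %*% (x - mu)
    v = [sum(O[a][b] * w[b] for b in range(p)) for a in range(p)]
    maha = sum(v[j] ** 2 / (scale * shape[j]) for j in range(p))
    return -0.5 * (p * (pi2log + math.log(scale)) + maha)
```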
c------------------------------------------------------------------------------ if (scale .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if temp = sqrt(scale) do j = 1, p shape(j) = temp*sqrt(shape(j)) end do const = dble(p)*(pi2log + log(scale)) do k = 1, G c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dgemv( 'N', p, p, one, O(1,1,k), p, * w, 1, zero, v, 1) do j = 1, p if (shape(j) .lt. one .and. * abs(v(j)) .ge. shape(j)*FLMAX) then hood = FLMAX return end if v(j) = v(j)/shape(j) end do temp = ddot( p, v, 1, v, 1) c z(i,k) = prok*exp(-(const+temp)/two) z(i,k) = -(const+temp)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine meeev ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * lwork, mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi, lwork double precision Vinv, eps, tol, scale double precision x(n,*), z(n, * ), w( * ), s(*) double precision mu(p,*), shape(*), O(p,p,*), pro( * ) integer nz, p1, iter, i, j, k, l, j1, info double precision dnp, dummy, temp, term, rteps double precision sumz, sum, smin, smax, cs, sn double precision const, rc, hood, hold, err double precision prok, tmin, tmax, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if p1 = p + 1 dnp = dble(n*p) eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, zero, 0, shape, 1) sumz = zero zsum = one l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum zsum = min(zsum,sum) if (.not. EQPRO) pro(k) = sum / dble(n) if (sum .ge. 
rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, s, * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = s(j) shape(j) = shape(j) + temp*temp end do end if end if end do if (l .ne. 0 .or. zsum .lt. rteps) then lwork = l c w(1) = FLMAX tol = err if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then lwork = 0 c w(1) = smin tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (Vinv .le. zero) then scale = temp/dble(n) else scale = temp/sumz end if if (temp .le. eps) then lwork = 0 c w(1) = temp tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then lwork = 0 c w(1) = -smin tol = err eps = FLMAX maxi = iter return end if if (scale .le. eps) then c w(1) = -scale lwork = 0 tol = err eps = FLMAX maxi = iter return end if temp = sqrt(scale) do j = 1, p w(j) = temp*sqrt(shape(j)) end do call absrng( p, w, 1, smin, smax) rc = smin / (one + smax) if (smin .le. 
rteps) then c w(1) = -smin lwork = 0 tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log + log(scale))/two do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, s, 1) do j = 1, p s(j) = s(j) / w(j) end do sum = ddot( p, s, 1, s, 1)/two c z(i,k) = temp*exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 lwork = 0 c w(1) = rc tol = err eps = hood maxi = iter return end subroutine meeevp( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * lwork, mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi, lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]), w(lwork), s(p) double precision x(n,*), z(n, * ), w( * ), s(*) c double precision mu(p,G), shape(p), O(p,p,G), pro(G[+1]) double precision mu(p,*), shape(*), O(p,p,*), pro( * ) integer nz, p1, iter, i, j, k, l, j1, info double precision dnp, dummy, temp, term, rteps double precision sumz, sum, smin, smax, cs, sn double precision const, rc, hood, hold, err double precision prok, tmin, tmax, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if p1 = p + 1 dnp = dble(n*p) eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, zero, 0, shape, 1) zsum = one sumz = zero l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sum+pshrnk const = (sum*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, s, * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = s(j) shape(j) = shape(j) + temp*temp end do end if end if end do if (l .ne. 0 .or. zsum .le. rteps) then lwork = l c w(1) = FLMAX tol = err if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if maxi = iter return end if if (Vinv .gt. 
zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then c w(1) = smin tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if term = pdof + dble(p) + one if (pshrnk .gt. zero) term = term + one if (Vinv .le. zero) then scale = temp/(term + dble(n)) else scale = temp/(term + sumz) end if if (temp .le. eps) then c w(1) = temp tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then c w(1) = -smin tol = err eps = FLMAX maxi = iter return end if if (scale .le. eps) then c w(1) = -scale tol = err eps = FLMAX maxi = iter return end if temp = sqrt(scale) do j = 1, p w(j) = temp*sqrt(shape(j)) end do call sgnrng( p, w, 1, smin, smax) rc = smin / (one + smax) if (smin .le. rteps) then c w(1) = -smin tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log + log(scale))/two do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, s, 1) do j = 1, p s(j) = s(j) / w(j) end do sum = ddot( p, s, 1, s, 1)/two c z(i,k) = temp*exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 lwork = 0 c w(1) = rc tol = err eps = hood maxi = iter return end subroutine mseev ( x, z, n, p, G, w, lwork, * mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, lwork double precision scale c double precision x(n,p), z(n,G), w(lwork) double precision x(n,*), z(n,*), w( * ) c double precision shape(p), O(p,p,G), mu(p,G), pro(G) double precision shape(*), O(p,p,*), mu(p,*), pro(*) integer i, j, k, j1, l, info double precision dummy, sum, sumz, temp double precision cs, sn, smin, smax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision BIGLOG parameter (BIGLOG = 709.d0) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call dcopy( p, zero, 0, shape, 1) l = 0 sumz = zero scale = zero do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sum .ge. one .or. one .lt. 
sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else if (scale .ne. FLMAX) then do j = 1, p temp = z(j,k) shape(j) = shape(j) + temp*temp end do end if else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (scale .eq. FLMAX .or. l .ne. 0) then lwork = l if (l .ne. 0) then scale = FLMAX else scale = -FLMAX end if call dcopy( p, FLMAX, 0, shape, 1) return end if lwork = 0 call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then scale = FLMAX return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .ge. sumz*FLMAX) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if scale = temp/sumz if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/temp, shape, 1) return end subroutine mseevp( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * w, lwork, mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision scale c double precision x(n,p), z(n,G), w(lwork) double precision x(n,*), z(n,*), w( * ) c double precision mu(p,G), shape(p), O(p,p,G), pro(G) double precision mu(p,*), shape(*), O(p,p,*), pro(*) integer p1, i, j, k, l, j1, info double precision dummy, temp, term, const double precision sumz, sum, smin, smax, cs, sn double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision BIGLOG parameter (BIGLOG = 709.d0) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero p1 = p + 1 call dcopy( p, zero, 0, shape, 1) l = 0 sumz = zero scale = zero do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sum .ge. one .or. one .lt.
sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sum+pshrnk const = (sum*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else if (scale .ne. FLMAX) then do j = 1, p temp = z(j,k) shape(j) = shape(j) + temp*temp end do end if else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (scale .eq. FLMAX .or. l .ne. 0) then lwork = l if (l .ne. 0) then scale = FLMAX else scale = -FLMAX end if call dcopy( p, FLMAX, 0, shape, 1) return end if lwork = 0 call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then scale = FLMAX return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if term = pdof + dble(p) + one if (pshrnk .gt. zero) term = term + one scale = temp/(term + sumz) if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/temp, shape, 1) return end subroutine eseii ( x, mu, sigsq, pro, n, p, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision sigsq, hood, Vinv c double precision x(n,p), mu(p,G), pro(G[+1]), z(n,G[+1]) double precision x(n,*), mu(p,*), pro( * ), z(n, * ) integer i, j, k, nz double precision sum, temp, const, prok, tmin, tmax double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (sigsq .le. zero) then hood = FLMAX return end if const = dble(p)*(pi2log+log(sigsq)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum/sigsq)/two) if (sigsq .lt. one .and. sum .ge. sigsq*FLMAX) then hood = FLMAX return end if z(i,k) = -(const+sum/sigsq)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hceii ( x, n, p, ic, ng, ns, v, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, ic(n), ng, ns, nd c double precision x(n,p), v(p), d(ng*(ng-1)/2) double precision x(n,*), v(*), d(*) integer lg, ld, ll, lo, ls integer i, j, k, m integer ni, nj, nij, iopt, jopt, iold, jold integer ij, ici, icj, ii, ik, jk double precision ri, rj, rij, si, sj, sij double precision dij, dopt, dold external wardsw double precision one parameter (one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot external ddot c------------------------------------------------------------------------------ iopt = 0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd c call intpr( 'ic', -1, ic, n) c call intpr( 'no. of groups', -1, lg, 1) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c call intpr( 'ic', -1, ic, n) do j = 1, n i = ic(j) if (i .ne. 
j) then ic(j) = 0 ni = ic(i) nij = ni + 1 ic(i) = nij ri = dble(ni) rij = dble(nij) sj = sqrt(one/rij) si = sqrt(ri)*sj c update column sum in kth row call dscal( p, si, x(i,1), n) call daxpy( p, sj, x(j,1), n, x(i,1), n) else ic(j) = 1 end if end do c call intpr( 'ic', -1, ic, n) dopt = FLMAX ij = 0 do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) call dcopy( p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) dij = ddot(p, v, 1, v, 1) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij iopt = i jopt = j end if end do end do c if (.false.) then c i = 1 c ij = 1 c do j = 2, ng c call dblepr( 'dij', -1, d(ij), i) c ij = ij + i c i = j c end do c end if if (ns .eq. 1) then if (iopt .lt. jopt) then x(1,1) = iopt x(1,2) = jopt else x(1,1) = jopt x(1,2) = iopt end if d(1) = dopt return end if ls = 1 100 continue ni = ic(iopt) nj = ic(jopt) nij = ni + nj ic(iopt) = nij ic(jopt) = -iopt if (jopt .ne. lg) then call wardsw( jopt, lg, d) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if si = dble(ni) sj = dble(nj) sij = dble(nij) dold = dopt iold = iopt jold = jopt iopt = -1 jopt = -1 dopt = FLMAX lg = lg - 1 ld = ld - lg ii = (iold*(iold-1))/2 if (iold .gt. 1) then ik = ii - iold + 1 do j = 1, (iold - 1) nj = ic(j) rj = dble(nj) ik = ik + 1 jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij end do end if if (iold .lt. lg) then ik = ii + iold i = iold do j = (iold + 1), lg nj = ic(j) rj = dble(nj) jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij ik = ik + i i = j end do end if d(lo) = dold lo = lo - 1 d(lo) = dble(iold) lo = lo - 1 d(lo) = dble(jold) lo = lo - 1 c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 do i = 2, ld si = d(i) if (si .le. dopt) then ij = i dopt = si end if end do if (ij .gt. 
1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. sj) then x(1,1) = si x(1,2) = sj else x(1,1) = sj x(1,2) = si end if lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. icj) then x(k,1) = dble(ici) x(k,2) = dble(icj) else x(k,1) = dble(icj) x(k,2) = dble(ici) end if end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine meeii ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol, sigsq c double precision x(n,p), z(n,G[+1]), mu(p,G), pro(G[+1]) double precision x(n,*), z(n, * ), mu(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, prok, tmax, tmin, rteps double precision const, hold, hood, err, dnp, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return dnp = dble(n*p) if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sigsq = zero sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum/dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsq = sigsq + z(i,k)*sum z(i,k) = sum end do else sigsq = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if if (Vinv .le. zero) then sigsq = sigsq / dnp else sigsq = sigsq / (dble(p)*sumz) end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log+log(sigsq)) do k = 1, G c temp = pro(k) do i = 1, n c z(i,k) = temp*exp(-(const+(z(i,k)/sigsq))/two) z(i,k) = -(const+(z(i,k)/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meeiip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol, sigsq c double precision x(n,p), z(n,G[+1]), mu(p,G), pro(G[+1]) double precision x(n,*), z(n, * ), mu(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumk, sumz, temp, term, tmax, tmin double precision const, hold, hood, err, dnp, ViLog, prok double precision pmupmu, cmu, cgam, rmu, rgam, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMLOG parameter (SMLOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (maxi .le. 0) return dnp = dble(n*p) if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 pmupmu = ddot( p, pmu, 1, pmu, 1) 100 continue iter = iter + 1 sigsq = zero sumz = zero zsum = one do k = 1, G sumk = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumk = sumk + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sumk if (.not. EQPRO) pro(k) = sumk/dble(n) zsum = min(zsum,sumk) if (sumk .gt. rteps) then call dscal( p, (one/sumk), mu(1,k), 1) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do sigsq = sigsq + z(i,k)*sum end do temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) const = sumk+pshrnk sigsq = sigsq + ((pshrnk*sumk)/const)*temp call dscal( p, (sumk/const), mu(1,k), 1) call daxpy(p, (pshrnk/const), pmu, 1, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .le. zero) then sigsq = sigsq / (pdof + dble((n+G)*p) + two) else sigsq = sigsq / (pdof + (sumz+dble(G))*dble(p) + two) do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log+log(sigsq)) do i = 1, n do k = 1, G sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do z(i,k) = -(const+(sum/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. 
zero) then temp = z(i,k) - tmax if (temp .ge. SMLOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter cmu = dble(p)*(log(pshrnk)-pi2log)/two const = pdof/two cgam = const*log(pscale/two)-dlngam(const) rmu = zero do k = 1, G temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) rmu = rmu + (pshrnk*temp)/sigsq end do term = log(sigsq) rmu = -(rmu + dble(p)*term)/two rgam = -(const+one)*term - (pscale/sigsq)/two pdof = (dble(G)*cmu+cgam) + (rmu+rgam) return end subroutine mseii ( x, z, n, p, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), mu(p,G), sigsq, pro(G) double precision x(n,*), z(n,*), mu(p,*), sigsq, pro(*) integer i, j, k double precision sum, sumz, temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ sumz = zero sigsq = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) if (sigsq .ne. FLMAX) then do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. 
RTMIN) * sigsq = sigsq + z(i,k)*sum end do end if else sigsq = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do c sumz .eq. n when no noise if (sigsq .ne. FLMAX) sigsq = sigsq / (sumz*dble(p)) return end subroutine mseiip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G), mu(p,G), sigsq, pro(G) double precision x(n,*), z(n,*), mu(p,*), sigsq, pro(*) integer i, j, k double precision sum, sumz, zsum, pmupmu double precision const, temp, dnp double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero dnp = dble(n*p) pmupmu = ddot( p, pmu, 1, pmu, 1) sumz = zero sigsq = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .gt. one .or. one .le. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) const = sum+pshrnk call dscal( p, (sum/const), mu(1,k), 1) call daxpy(p, (pshrnk/const), pmu, 1, mu(1,k), 1) if (sigsq .ne. FLMAX) then sigsq = sigsq + ((pshrnk*sum)/const)*temp do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. 
RTMIN) * sigsq = sigsq + z(i,k)*sum end do end if else sigsq = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (sigsq .eq. FLMAX) return temp = pdof + sumz*dble(p) + two if (pshrnk .gt. zero) temp = temp + dble(G*p) sigsq = sigsq / temp return end subroutine esevi ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision scale, hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p,G), pro(G[+1]) double precision mu(p,*), shape(p,*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (scale .le. zero) then hood = FLMAX return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .eq. zero) then hood = FLMAX return end if end do temp = sqrt(scale) do k = 1, G do j = 1, p shape(j,k) = temp*sqrt(shape(j,k)) end do end do const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (shape(j,k) .lt. one .and. * abs(temp) .ge. shape(j,k)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j,k) if (abs(temp) .ge. RTMAX) then hood = FLMAX return end if if (abs(temp) .gt. 
RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine meevi ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p,G), pro(G[+1]) double precision mu(p,*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, epsmin double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 
1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sumz = zero zsum = one do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum /dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) if (sqrt(z(i,k))*abs(temp) .gt. RTMIN) * sum = sum + z(i,k)*(temp*temp) end do shape(j,k) = shape(j,k) + sum end do else call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if scale = zero epsmin = FLMAX do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .gt. zero) then sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if scale = scale + temp epsmin = min(temp,epsmin) if (temp .lt. eps) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do term = zero if (Vinv .gt. 
zero) then scale = scale /sumz do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else scale = scale /dble(n) end if if (scale .le. eps) then tol = epsmin eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. eps) then tol = smin eps = FLMAX maxi = iter return end if end do const = dble(p)*(pi2log + log(scale)) do k = 1, G do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meevip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p,G), pro(G[+1]) double precision mu(p,*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, epsmin, zsum double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sumz = zero zsum = one do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. 
EQPRO) pro(k) = sum /dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p do i = 1, n temp = x(i,j) - mu(j,k) if (abs(temp)*sqrt(z(i,k)) .gt. RTMIN) * shape(j,k) = shape(j,k) + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if scale = zero epsmin = FLMAX do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .gt. zero) then sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if scale = scale + temp epsmin = min(temp,epsmin) if (temp .lt. eps) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do term = pdof + one if (Vinv .le. zero) then term = term + dble(n) else term = term + sumz end if if (pshrnk .gt. zero) term = term + one scale = scale/term if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (scale .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. 
eps) then tol = err eps = FLMAX maxi = iter return end if end do const = dble(p)*(pi2log + log(scale)) do k = 1, G do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine msevi ( x, z, n, p, G, mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p,G), pro(G) double precision mu(p,*), scale, shape(p,*), pro(*) integer i, j, k double precision smin, smax double precision sum, sumz, temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ scale = zero sumz = zero do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp 
call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (scale .eq. FLMAX) return c pro(k) now contains n_k do j = 1, p do k = 1, G sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum end do end do scale = zero do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (scale .ne. FLMAX) scale = scale + temp if (temp .lt. one .and. one .ge. temp*FLMAX) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if call dscal( p, one/temp , shape(1,k), 1) 100 continue end do if (sumz .lt. one .and. one .ge. 
sumz*FLMAX) then scale = FLMAX return end if scale = scale/sumz return end subroutine msevip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p,G), pro(G) double precision mu(p,*), scale, shape(p,*), pro(*) integer i, j, k double precision sum, sumz, temp, term double precision smin, smax, const double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero sumz = zero scale = zero do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum /dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p do i = 1, n temp = x(i,j) - mu(j,k) if (abs(temp)*sqrt(z(i,k)) .gt. RTMIN) * shape(j,k) = shape(j,k) + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (scale .eq. 
FLMAX) return scale = zero do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero endif if (scale .ne. FLMAX) scale = scale + temp if (temp .le. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if call dscal( p, one/temp, shape(1,k), 1) 100 continue end do term = pdof + sumz + two if (pshrnk .gt. zero) term = term + dble(G) scale = scale/term return end subroutine es1v ( x, mu, sigsq, pro, n, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision hood, Vinv c double precision x(n), mu(G), sigsq(G), pro(G[+1]), z(n,G[+1]) double precision x(*), mu(*), sigsq(*), pro( * ), z(n, * ) integer i, k, nz double precision temp, const, tmin, tmax, sum double precision muk, sigsqk, prok, sigmin double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. zero) then hood = FLMAX return end if do k = 1, G c prok = pro(k) muk = mu(k) sigsqk = sigsq(k) const = pi2log + log(sigsqk) do i = 1, n temp = x(i) - muk c z(i,k) = prok*exp(-(const+(temp*temp)/sigsqk)/two) if (sigsqk .lt. one .and. * abs(temp) .ge. 
sqrt(sigsqk)*RTMAX) then hood = FLMAX return end if z(i,k) = -(const+(temp*temp)/sigsqk)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c temp = zero c do k = 1, nz c temp = temp + z(i,k) c end do c hood = hood + log(temp) c call dscal( nz, (one/temp), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hc1v ( x, n, ic, ng, ns, ALPHA, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, ic(n), ng, ns, nd integer n, ic(*), ng, ns, nd c double precision x(n), ALPHA, d(ng*(ng-1)/2) double precision x(*), ALPHA, d(*) integer lg, ld, ll, lo, ls, i, j, k, m integer ni, nj, nij, nopt, niop, njop integer ij, ici, icj, iopt, jopt, iold double precision ALFLOG double precision qi, qj, qij, ri, rj, rij, si, sj double precision tracei, tracej, trcij, trop double precision termi, termj, trmij, tmop double precision temp, dij, dopt, siop, sjop double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision sqrthf parameter (sqrthf = .70710678118654757274d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMAX parameter (EPSMAX = 
2.2204460492503131d-16) double precision ddot external ddot c------------------------------------------------------------------------------ c call dblepr( 'x', -1, x, n) c call intpr( 'n', -1, n, 1) c call intpr( 'ic', -1, ic, n) c call intpr( 'ng', -1, ng, 1) c call intpr( 'ns', -1, ns, 1) c call dblepr( 'alpha', -1, alpha, 1) c call intpr( 'nd', -1, nd, 1) iopt = 0 jopt = 0 niop = 0 njop = 0 nopt = 0 siop = 0 sjop = 0 tmop = 0.d0 trop = 0.d0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd if (ng .eq. 1) return ALPHA = max(ALPHA,EPSMAX) ALFLOG = log(ALPHA) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 c call dswap( p, x(k,1), n, x(j,1), n) temp = x(k) x(k) = x(j) x(j) = temp ic(j) = ic(k) ic(k) = icj end if end do 3 continue c set up pointers do j = 1, n i = ic(j) if (i .ne. j) then c update sum of squares k = ic(i) if (k .eq. 1) then ic(i) = j ic(j) = 2 c call dscal( p, sqrthf, x(i,1), n) c call dscal( p, sqrthf, x(j,1), n) c call dcopy( p, x(j,1), n, v, 1) c call daxpy( p, (-one), x(i,1), n, v, 1) c call daxpy( p, one, x(j,1), n, x(i,1), n) c x(j,1) = ddot( p, v, 1, v, 1) temp = sqrthf*(x(j) - x(i)) x(i) = sqrthf*(x(j) + x(i)) x(j) = temp*temp else ic(j) = 0 ni = ic(k) ic(k) = ni + 1 ri = dble(ni) rij = dble(ni+1) qj = one/rij qi = ri*qj si = sqrt(qi) sj = sqrt(qj) c call dcopy( p, x(j,1), n, v, 1) c call dscal( p, si, v, 1) c call daxpy( p, (-sj), x(i,1), n, v, 1) c x(k,1) = x(k,1) + ddot(p, v, 1, v, 1) c call dscal( p, si, x(i,1), n) c call daxpy( p, sj, x(j,1), n, x(i,1), n) temp = si*x(j) - sj*x(i) x(k) = x(k) + temp*temp x(i) = si*x(i) + sj*x(j) end if else ic(j) = 1 end if end do c store terms also so as not to recompute them do k = 1, ng i = ic(k) if (i .ne. 
1) then ni = ic(i) ri = dble(ni) d(nd-k+1) = ri*log((x(i)+ALPHA)/ri) end if end do c call intpr( 'ic', -1, ic, n) c compute change in likelihood and determine minimum dopt = FLMAX ij = 0 do j = 2, ng nj = ic(j) if (nj .eq. 1) then tracej = zero termj = ALFLOG rj = one else tracej = x(nj) nj = ic(nj) rj = dble(nj) termj = d(nd-j+1) end if do i = 1, (j-1) ni = ic(i) if (ni .eq. 1) then tracei = zero termi = ALFLOG ri = one else tracei = x(ni) ni = ic(ni) ri = dble(ni) termi = d(nd-i+1) end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) c call dcopy(p, x(i,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) temp = sj*x(i) - si*x(j) c trcij = (tracei + tracej) + ddot(p,v,1,v,1) trcij = (tracei + tracej) + temp*temp trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j end if end do end do c call dblepr( 'dij', -1, d, (ng*(ng-1))/2) if (ns .eq. 1) then if (iopt .lt. jopt) then x(1) = dble(iopt) ic(1) = jopt else x(1) = dble(jopt) ic(1) = iopt end if d(1) = dopt return end if if (niop .ne. 1) ic(ic(iopt)) = 0 if (njop .ne. 1) ic(ic(jopt)) = 0 ls = 1 100 continue c if (.false.) then c ij = 1 c jj = 1 c do j = 2, n c nj = ic(j) c if (nj .ne. 0 .and. abs(nj) .le. n) then c call dblepr( 'dij', -1, d(ij), jj) c ij = ij + jj c jj = jj + 1 c end if c end do c end if c call dscal( p, siop, x(iopt,1), n) c call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) x(iopt) = siop*x(iopt)+sjop*x(jopt) if (jopt .ne. 
lg) then call wardsw( jopt, lg, d) c call dcopy( p, x(lg,1), n, x(jopt,1), n) x(jopt) = x(lg) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if ic(iopt) = lg c ic(lg) = nopt c x(lg,1) = trop x(lg) = trop c x(lg,2) = tmop d(lo) = dopt lo = lo - 1 ic(lg) = lo d(lo) = tmop lo = lo - 1 d(lo) = dble(nopt) lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) lo = lo - 1 lg = lg - 1 ld = ld - lg iold = iopt iopt = -1 jopt = -1 dopt = FLMAX ni = nopt ri = dble(ni) tracei = trop termi = tmop ij = ((iold-1)*(iold-2))/2 if (iold .gt. 1) then do j = 1, (iold - 1) nj = ic(j) if (nj .ne. 1) then c tracej = x(nj,1) tracej = x(nj) k = ic(nj) termj = d(k) nj = int(d(k-1)) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) c call dcopy( p, x(iold,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) temp = sj*x(iold)-si*x(j) c trcij = (tracei + tracej) + ddot(p,v,1,v,1) trcij = (tracei + tracej) + temp*temp trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = j jopt = iold nopt = nij niop = ni njop = nj sjop = si siop = sj end if end do end if if (iold .lt. lg) then i = iold ij = ij + i do j = (iold + 1), lg nj = ic(j) if (nj .ne. 1) then c tracej = x(nj,1) tracej = x(nj) k = ic(nj) termj = d(k) nj = int(d(k-1)) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one /rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) c call dcopy( p, x(iold,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) temp = sj*x(iold) - si*x(j) c trcij = (tracei + tracej) + ddot(p,v,1,v,1) trcij = (tracei + tracej) + temp*temp trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) d(ij) = dij if (dij .le. 
dopt) then dopt = dij trop = trcij tmop = trmij iopt = iold jopt = j nopt = nij niop = ni njop = nj siop = si sjop = sj end if ij = ij + i i = j end do end if c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 dopt = d(1) do i = 2, ld qi = d(i) if (qi .le. dopt) then ij = i dopt = qi end if end do if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if i = ic(iopt) j = ic(jopt) if (iopt .ne. iold .and. jopt .ne. iold) then if (i .ne. 1) then tracei = x(i) ici = ic(i) termi = d(ici) niop = int(d(ici-1)) ri = dble(niop) else tracei = zero termi = ALFLOG niop = 1 ri = one end if if (j .ne. 1) then c tracej = x(j,1) tracej = x(j) icj = ic(j) termj = d(icj) njop = int(d(icj-1)) rj = dble(njop) else tracej = zero termj = ALFLOG njop = 1 rj = one end if nopt = niop + njop rij = dble(nopt) qij = one/rij qi = ri*qij qj = rj*qij siop = sqrt(qi) sjop = sqrt(qj) c call dcopy( p, x(iopt,1), n, v, 1) c call dscal( p, sjop, v, 1) c call daxpy( p, (-siop), x(jopt,1), n, v, 1) temp = sjop*x(iopt)-siop*x(jopt) c trop = (tracei + tracej) + ddot(p,v,1,v,1) trop = (tracei + tracej) + temp*temp tmop = rij*log((trop+ALPHA)/rij) end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = tmop lo = lo - 1 d(lo) = dble(nopt) lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 3 ld = nd - 1 si = d(lo) lo = lo - 1 sj = d(lo) lo = lo - 1 ic(int(sj)) = ng if (si .lt. sj) then x(1) = si d(ld) = sj else x(1) = sj d(ld) = si end if ld = ld - 1 lg = ng + 1 do k = 2, ns d(ld) = d(lo) ld = ld - 1 lo = lo - 3 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) lo = lo - 1 icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. 
icj) then x(k) = dble(ici) d(ld) = dble(icj) else x(k) = dble(icj) d(ld) = dble(ici) end if ld = ld - 1 end do ld = nd lo = nd - 1 do k = 1, ns ic(k) = int(d(lo)) lo = lo - 1 ld = ld - 1 d(ld) = d(lo) lo = lo - 1 end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine me1v ( EQPRO, x, n, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq(G), pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq(*), pro( * ) integer nz, iter, k, i double precision hold, hood, err, sum, smu, zsum double precision const, temp, term, sigmin, sigsqk double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(sum,zsum) if (sum .gt. 
rteps) then smu = smu / sum mu(k) = smu sigsqk = zero do i = 1, n temp = x(i) - smu temp = temp*temp sigsqk = sigsqk + z(i,k)*temp z(i,k) = temp end do sigsq(k) = sigsqk / sum end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if sigmin = FLMAX do k = 1, G sigmin = min(sigmin,sigsq(k)) end do if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G sigsqk = sigsq(k) const = pi2log + log(sigsqk) do i = 1, n c z(i,k) = temp*exp(-(const+(z(i,k)/sigsqk))/two) z(i,k) = -(const+(z(i,k)/sigsqk))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine me1vp ( EQPRO, x, n, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision pshrnk, pmu, pscale, pdof double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq(G), pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq(*), pro( * ) integer nz, iter, k, i double precision hold, hood, err, pmupmu double precision sumz, sum, smu, zsum, rteps double precision const, temp, term, sigmin, sigsqk double precision prok, tmin, tmax, ViLog double precision cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision three parameter (three = 3.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision dlngam external dlngam c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX pmupmu = pmu*pmu iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(zsum,sumz) if (sumz .gt. rteps) then smu = smu/sumz sum = zero do i = 1, n term = abs(x(i) - smu) if (term .ge. eps .or. 
sqrt(z(i,k))*term .gt. RTMIN) * sum = sum + z(i,k)*(term*term) end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu sigsq(k) = (pscale + sum + term*temp)/(pdof+sumz+three) term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if sigmin = FLMAX do k = 1, G sigmin = min(sigmin,sigsq(k)) end do if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G sigsqk = sigsq(k) const = pi2log + log(sigsqk) do i = 1, n term = abs(x(i) - mu(k)) if (term .gt. RTMIN) then z(i,k) = -(const+((term*term)/sigsqk))/two else z(i,k) = -const/two end if end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter cmu = dble(G)*(pi2log-log(pshrnk))/two const = pdof/two cgam = dble(G)*(const*log(pscale/two) - dlngam(const)) rmu = zero rgam = zero do k = 1, G temp = pmu - mu(k) temp = temp*temp term = log(sigsq(k)) rmu = rmu + (term + (pshrnk/sigsq(k))*temp) rgam = rgam + ((pdof+3.d0)*term + pscale/sigsq(k)) end do rmu = -rmu /two rgam = -rgam/two pdof = (cmu+cgam) + (rmu+rgam) return end subroutine ms1v ( x, z, n, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G c double precision x(n), z(n,G), mu(G), sigsq(G), pro(G) double precision x(*), z(n,*), mu(*), sigsq(*), pro(*) integer i, k double precision sum, smu, temp, sigsqk double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do pro(k) = sum / dble(n) if (sum .gt. one .or. smu .le. 
sum*FLMAX) then smu = smu / sum mu(k) = smu sigsqk = zero do i = 1, n temp = abs(x(i) - smu) sigsqk = sigsqk + z(i,k)*(temp*temp) end do sigsq(k) = sigsqk / sum else mu(k) = FLMAX sigsq(k) = FLMAX end if end do return end subroutine ms1vp ( x, z, n, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision pshrnk, pmu, pscale, pdof c double precision x(n), z(n,G), mu(G), sigsq(G), pro(G) double precision x(*), z(n,*), mu(*), sigsq(*), pro(*) integer k, i double precision pmupmu double precision sumz, sum, smu double precision temp, term double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pmupmu = pmu*pmu do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do pro(k) = sumz / dble(n) if (sumz .gt. one .or. smu .le. sumz*FLMAX) then smu = smu/sumz term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu sum = zero do i = 1, n term = abs(x(i) - smu) sum = sum + z(i,k)*(term*term) end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu if (pshrnk .gt. 
zero) then sigsq(k) = (pscale + sum + term*temp)/(pdof+sumz+3.d0) else sigsq(k) = (pscale + sum + term*temp)/(pdof+sumz+two) end if else mu(k) = FLMAX sigsq(k) = FLMAX end if end do return end subroutine esvei ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), scale(G), shape(p), pro(G[+1]) double precision mu(p,*), scale(*), shape(*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, scale, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if do j = 1, p shape(j) = sqrt(shape(j)) end do do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (shape(j) .lt. one .and. * abs(temp) .ge. shape(j)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j) if (abs(temp) .ge. RTMAX) then hood = FLMAX return end if if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum/scalek)/two) if (scalek .lt. one .and. * sum .ge. 
scalek*FLMAX) then hood = FLMAX return end if z(i,k) = -(const+sum/scalek)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine mevei ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2) double precision Vinv, eps, tol(2) c double precision x(n,p), z(n,G[+1]), scl(G), shp(p), w(p,G) double precision x(n,*), z(n, * ), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G[+1]) double precision mu(p,*), scale(*), shape(*), pro( * ) integer nz, i, j, k integer iter, maxi1, maxi2, inner, inmax double precision tol1, tol2, sum, temp, term, tmin, tmax double precision prok, scalek, smin, smax, const, zsum double precision hold, hood, err, errin, dnp, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, 
two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ maxi1 = maxi(1) maxi2 = max(maxi(2),0) if (maxi1 .le. 0) return dnp = dble(n*p) inmax = 0 if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX c start with shape and scale equal to 1 call dcopy(p, one, 0, shape, 1) call dcopy(G, one, 0, scale, 1) iter = 0 100 continue inner = 0 zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) sum = sum + z(i,k)*(temp*temp) end do w(j,k) = sum end do end if end do call dscal( G, dble(p), pro, 1) if (zsum .le. rteps) then eps = -FLMAX tol(1) = zsum tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (maxi2 .le. 0) goto 120 110 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) do k = 1, G sum = zero do j = 1, p sum = sum + w(j,k)/shape(j) end do scale(k) = sum/pro(k) end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. 
     *    eps) then
        eps     = FLMAX
        tol(1)  = err
        tol(2)  = errin
        maxi(1) = iter
        maxi(2) = max(inner,inmax)
        return
      end if

c     shape estimate
      call dcopy( p, shape, 1, shp, 1)
      do j = 1, p
        sum = zero
        do k = 1, G
          sum = sum + w(j,k)/scale(k)
        end do
        shape(j) = sum
      end do

      call sgnrng(p, shape, 1, smin, smax)

      if (smin .le. eps) then
        eps     = FLMAX
        tol(1)  = err
        tol(2)  = errin
        maxi(1) = iter
        maxi(2) = max(inner,inmax)
        return
      end if

      sum = zero
      do j = 1, p
        sum = sum + log(shape(j))
      end do
      temp = sum/dble(p)

      if (temp .gt. BIGLOG) then
        eps     = FLMAX
        tol(1)  = err
        tol(2)  = errin
        maxi(1) = iter
        maxi(2) = max(inner,inmax)
        return
      end if

      if (temp .gt. SMALOG) then
        temp = exp(temp)
      else
        temp = zero
      end if

      if (temp .le. eps) then
        eps     = temp
        tol(1)  = err
        tol(2)  = errin
        maxi(1) = iter
        maxi(2) = max(inner,inmax)
        return
      end if

      call dscal( p, one/temp, shape, 1)

      errin = zero
      do k = 1, G
        errin = max(errin, abs(scl(k)-scale(k))/(one + scale(k)))
      end do
      do j = 1, p
        errin = max(errin, abs(shp(j)-shape(j))/(one + shape(j)))
      end do

      if (errin .gt. tol2 .and. inner .le. maxi2) goto 110

 120  continue

      iter  = iter + 1
      inmax = max(inner, inmax)

      if (.not. EQPRO) call dscal( G, one/dnp, pro, 1)

      term = zero
      if (Vinv .gt. zero) then
        do i = 1, n
          term = term + z(i,nz)
        end do
        temp    = term / dble(n)
        pro(nz) = temp
        call dcopy( n, ViLog, 0, z(1,nz), 1)
        if (EQPRO) then
          temp = (one - pro(nz))/dble(G)
          call dcopy( G, temp, 0, pro, 1)
        end if
      else
        if (EQPRO) call dscal( G, one/dble(G), pro, 1)
      end if

      call sgnrng( G, scale, 1, smin, smax)

      if (smin .le. eps) then
        eps     = FLMAX
        tol(1)  = err
        tol(2)  = errin
        maxi(1) = iter
        maxi(2) = inmax
        return
      end if

      call sgnrng( p, shape, 1, smin, smax)

      if (smin .le.
     *    eps) then
        eps     = FLMAX
        tol(1)  = err
        tol(2)  = errin
        maxi(1) = iter
        maxi(2) = inmax
        return
      end if

      do k = 1, G
c       prok   = pro(k)
        scalek = scale(k)
        const  = dble(p)*(pi2log+log(scalek))
        do i = 1, n
          sum = zero
          do j = 1, p
            temp = x(i,j) - mu(j,k)
            sum  = sum + (temp*temp)/shape(j)
          end do
c         z(i,k) = prok*exp(-(const+sum/scalek)/two)
          z(i,k) = -(const+sum/scalek)/two
        end do
      end do

      hood = zero
      do i = 1, n
        tmin =  FLMAX
        tmax = -FLMAX
        do k = 1, nz
          prok = pro(k)
          if (prok .eq. zero) then
            z(i,k) = zero
          else
            temp   = log(prok) + z(i,k)
            tmin   = min(tmin,temp)
            tmax   = max(tmax,temp)
            z(i,k) = temp
          end if
        end do
        sum = zero
        do k = 1, nz
          if (pro(k) .ne. zero) then
            temp = z(i,k) - tmax
            if (temp .ge. SMALOG) then
              z(i,k) = exp(temp)
              sum    = sum + z(i,k)
            else
              z(i,k) = zero
            end if
          end if
        end do
        hood = hood + (log(sum)+tmax)
        call dscal( nz, (one/sum), z(i,1), n)
      end do

      err  = abs(hold-hood)/(one+abs(hood))
      hold = hood

      if (err .gt. tol1 .and. iter .lt. maxi1) goto 100

      tol(1)  = err
      tol(2)  = errin
      eps     = hood
      maxi(1) = iter
      maxi(2) = inmax

      return
      end

      subroutine meveip( EQPRO, x, n, p, G, Vinv,
     *                   pshrnk, pmu, pscale, pdof,
     *                   z, maxi, tol, eps,
     *                   mu, scale, shape, pro, scl, shp, w)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      logical EQPRO

      integer n, p, G, maxi(2)

      double precision Vinv, eps, tol(2)

c     double precision pshrnk, pmu(p), pscale, pdof
      double precision pshrnk, pmu(*), pscale, pdof

c     double precision x(n,p), z(n,G[+1]), scl(G), shp(p), w(p,G)
      double precision x(n,*), z(n, * ), scl(*), shp(*), w(p,*)

c     double precision mu(p,G), scale(G), shape(p), pro(G[+1])
      double precision mu(p,*), scale(*), shape(*), pro( * )

      integer nz, i, j, k
      integer iter, maxi1, maxi2, inner, inmax

      double precision tol1, tol2, sum, temp, term, tmin, tmax
      double precision prok, scalek, smin, smax, const, sumz
      double precision hold, hood, err,
errin, dnp, ViLog, zsum double precision rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero maxi1 = maxi(1) maxi2 = max(maxi(2),0) if (maxi1 .le. 0) return dnp = dble(n*p) inmax = 0 if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX c start with shape and scale equal to 1 call dcopy(p, one, 0, shape, 1) call dcopy(G, one, 0, scale, 1) iter = 0 100 continue inner = 0 zsum = one do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz zsum = min(zsum,sumz) if (sumz .gt. rteps) then term = pshrnk + sumz const = (pshrnk*sumz)/term call dscal( p, (one/sumz), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) sum = sum + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) w(j,k) = pscale + sum + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) end if end do call dscal( G, dble(p), pro, 1) if (zsum .le. rteps) then eps = -FLMAX tol(1) = zsum tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (maxi2 .le. 0) goto 120 110 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) temp = pdof + two if (pshrnk .gt. 
zero) temp = temp + one do k = 1, G sum = zero do j = 1, p sum = sum + w(j,k)/shape(j) end do scale(k) = sum/(pro(k)+temp) end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G sum = sum + w(j,k)/scale(k) end do shape(j) = sum end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. eps) then eps = temp tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if call dscal( p, one/temp, shape, 1) errin = zero do k = 1, G errin = max(errin, abs(scl(k)-scale(k))/(one + scale(k))) end do do j = 1, p errin = max(errin, abs(shp(j)-shape(j))/(one + shape(j))) end do if (errin .gt. tol2 .and. inner .le. maxi2) goto 110 120 continue iter = iter + 1 inmax = max(inner, inmax) if (.not. EQPRO) call dscal( G, one/dnp, pro, 1) term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. 
eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+sum/scalek)/two) z(i,k) = -(const+sum/scalek)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. maxi1) goto 100 tol(1) = err tol(2) = errin eps = hood maxi(1) = iter maxi(2) = inmax return end subroutine msvei ( x, z, n, p, G, maxi, tol, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi double precision tol c double precision x(n,p), z(n,G), scl(G), shp(p), w(p,G) double precision x(n,*), z(n,*), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G) double precision mu(p,*), scale(*), shape(*), pro(*) integer i, j, k, inner double precision sum, temp, smin, smax, err double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ tol = max(tol,zero) err = FLMAX c start 
with the equal volume and shape estimate do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum if (sum .gt. one .or. one .lt. sum*FLMAX) then err = min(err,sum) call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) temp = temp*temp temp = z(i,k)*temp sum = sum + temp end do w(j,k) = sum end do else err = -FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (err .lt. zero) then call dscal( G, one/dble(n), pro, 1) call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) tol = FLMAX maxi = 0 return end if call dcopy( p, one, 0, shape, 1) call dcopy( G, one, 0, scale, 1) call dscal( G, dble(p), pro, 1) inner = 0 err = FLMAX 100 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) goto 200 inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) do k = 1, G sum = zero do j = 1, p if (shape(j) .gt. one .or. * w(j,k) .lt. shape(j)*FLMAX) then sum = sum + w(j,k)/shape(j) else scale(k) = FLMAX goto 110 end if end do scale(k) = sum/pro(k) 110 continue end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. zero .or. smax .eq. FLMAX) goto 200 c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G if (scale(k) .gt. one .or. w(j,k) .lt. scale(k)*FLMAX) then sum = sum + w(j,k)/scale(k) else shape(j) = FLMAX goto 120 end if end do shape(j) = sum 120 continue end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero .or. smax .eq. FLMAX) goto 200 sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do k = 1, G err = max(err, abs(scl(k) - scale(k))/(one + scale(k))) end do do j = 1, p err = max(err, abs(shp(j) - shape(j))/(one + shape(j))) end do if (err .gt. tol .and. inner .le. maxi) goto 100 200 continue call dscal( G, one/dble(n*p), pro, 1) tol = err maxi = inner return end subroutine msveip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * maxi, tol, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi double precision tol c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G), scl(G), shp(p), w(p,G) double precision x(n,*), z(n,*), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G) double precision mu(p,*), scale(*), shape(*), pro(*) integer i, j, k, inner double precision sum, temp, term, err double precision smin, smax, const, sumz double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ tol = max(tol,zero) err = FLMAX c start with shape and scale equal to 1 do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz if (sumz .gt. one .or. one .lt. 
sumz*FLMAX) then err = min(err,sumz) term = pshrnk + sumz const = (pshrnk*sumz)/term call dscal( p, (one/sumz), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) sum = sum + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) w(j,k) = pscale + sum + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else err = -FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (err .lt. zero) then call dscal( G, one/dble(n), pro, 1) call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) tol = FLMAX maxi = 0 return end if call dcopy(p, one, 0, shape, 1) call dcopy(G, one, 0, scale, 1) call dscal( G, dble(p), pro, 1) if (maxi .le. 0) return inner = 0 err = FLMAX 100 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) goto 200 inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) do k = 1, G sum = zero do j = 1, p if (shape(j) .ge. one .or. * w(j,k) .le. shape(j)*FLMAX) then sum = sum + w(j,k)/shape(j) else scale(k) = FLMAX goto 110 end if end do temp = pdof + pro(k) + two if (pshrnk .gt. zero) temp = temp + one scale(k) = sum/temp 110 continue end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. zero .or. smax .ge. FLMAX) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G if (scale(k) .gt. w(j,k) .or. * w(j,k) .lt. scale(k)*FLMAX) then sum = sum + w(j,k)/scale(k) else shape(j) = FLMAX goto 120 end if end do shape(j) = sum 120 continue end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero .or. smax .ge. FLMAX) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. 
SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do k = 1, G err = max(err, abs(scl(k)-scale(k))/(one + scale(k))) end do do j = 1, p err = max(err, abs(shp(j)-shape(j))/(one + shape(j))) end do if (err .gt. tol .and. inner .le. maxi) goto 100 200 continue call dscal( G, one/dble(n*p), pro, 1) tol = err maxi = inner return end subroutine esvev ( x, mu, scale, shape, O, pro, n, p, G, * Vinv, v, w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p, G integer n, p, G double precision Vinv, hood c double precision x(n,p), z(n,G[+1]), mu(p,G), pro(G[+1]) double precision x(n,*), z(n, * ), mu(p,*), pro( * ) c double precision v(p), w(p) double precision v(*), w(*) c double precision scale(G), shape(p), O(p,p,G) double precision scale(*), shape(*), O(p,p,*) integer i, j, k, nz double precision const, temp, tmin, tmax double precision smin, smax, scalek, prok, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ call sgnrng( G, scale, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. 
zero) then hood = FLMAX return end if do j = 1, p shape(j) = sqrt(shape(j)) end do do k = 1, G scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dgemv( 'N', p, p, one, O(1,1,k), p, * w, 1, zero, v, 1) do j = 1, p if (shape(j) .lt. one .and. * abs(v(j)) .ge. shape(j)*FLMAX) then hood = FLMAX return end if v(j) = v(j)/shape(j) end do temp = ddot( p, v, 1, v, 1) if (scalek .lt. one .and. temp .ge. scalek*FLMAX) then hood = FLMAX return end if temp = temp/scalek c z(i,k) = prok*exp(-(const+temp)/two) z(i,k) = -(const+temp)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine mevev ( EQPRO, x, n, p, G, Vinv, z, * maxi, tol, eps, lwork, * mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2), lwork double precision Vinv, eps, tol(2) double precision x(n,*), z(n, * ), w( * ), s(*) double precision mu(p,*), pro( * ) double precision scale(*), shape(*), O(p,p,*) integer maxi1, maxi2, p1, inmax, iter integer nz, i, j, k, l, j1, info, inner double precision tol1, tol2, dnp, term, rteps, ViLog double precision errin, smin, smax, sumz, tmin, tmax double precision cs, sn, dummy, hold, hood, err, zsum double precision const, temp, sum, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ maxi1 = maxi(1) maxi2 = maxi(2) if (maxi1 .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) rteps = sqrt(eps) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) p1 = p + 1 dnp = dble(n*p) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX inmax = 0 iter = 0 100 continue sumz = zero zsum = one l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum zsum = min(zsum,sum) pro(k) = sum if (sum .ge. 
rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = z(j,k) z(j,k) = temp*temp end do end if end if end do iter = iter + 1 if (l .ne. 0 .or. zsum .lt. rteps) then if (Vinv .ge. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = l c w(1) = FLMAX tol(1) = err tol(2) = errin if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if maxi(1) = -1 maxi(2) = -1 return end if if (iter .eq. 1) then call dcopy( p, zero, 0, shape, 1) do j = 1, p sum = zero do k = 1, G sum = sum + z(j,k) end do shape(j) = sum end do call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .ge. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) then call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (Vinv .le. 
zero) then call dcopy (G, temp/dble(n), 0, scale, 1) else call dcopy (G, temp/sumz, 0, scale, 1) end if if (temp .le. eps) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = temp c w(2) = zero tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if call dscal( p, one/temp, shape, 1) end if c inner iteration to estimate scale and shape c pro now contains n*pro inner = 0 errin = zero if (maxi2 .le. 0) goto 120 110 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p sum = sum + z(j,k)/w(j) end do temp = sum/(pro(k)*dble(p)) scale(k) = temp if (temp .le. eps) then lwork = 0 c w(1) = temp tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = max(inner,inmax) return end if do j = 1, p shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = max(inner,inmax) return end if c normalize the shape matrix sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if if (temp .gt. 
SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. eps) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = temp tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = max(inner,inmax) end if call dscal( p, one/temp, shape, 1) errin = zero do j = 1, p errin = max(abs(w(j)-shape(j))/(one+shape(j)), errin) end do do k = 1, G errin = max(abs(scale(k)-w(p+k))/(one+scale(k)), errin) end do if (errin .ge. tol2 .and. inner .lt. maxi2) goto 110 120 continue inmax = max(inner,inmax) smin = smin/temp smax = smax/temp if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then lwork = 0 c w(1) = -smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = inmax return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then lwork = 0 c w(1) = -smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = inmax return end if do j = 1, p s(j) = sqrt(shape(j)) end do call sgnrng( p, s, 1, smin, smax) if (smin .le. 
rteps) then lwork = 0 c w(1) = -smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = inmax return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, w, 1) do j = 1, p w(j) = w(j) / s(j) end do sum = ddot(p,w,1,w,1)/scalek c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. 
maxi1) goto 100 c smin = sqrt(smin) c smax = sqrt(smax) c rcmin = FLMAX c do k = 1, G c temp = sqrt(scale(k)) c rcmin = min(rcmin,(temp*smin)/(one+temp*smax)) c end do lwork = 0 c w(1) = rcmin tol(1) = err tol(2) = errin eps = hood maxi(1) = iter maxi(2) = inmax return end subroutine mevevp( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, lwork, * mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2), lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision Vinv, eps, tol(2) c double precision x(n,p), z(n,G[+1]), w(lwork), s(p) double precision x(n,*), z(n, * ), w( * ), s(*) c double precision mu(p,G), pro(G[+1]) double precision mu(p,*), pro( * ) c double precision scale(G), shape(p), O(p,p,G) double precision scale(*), shape(*), O(p,p,*) integer maxi1, maxi2, p1, inmax, iter integer nz, i, j, k, l, j1, info, inner double precision tol1, tol2, dnp, term, rteps, ViLog double precision errin, smin, smax, sumz, tmin, tmax double precision cs, sn, dummy, hold, hood, err, zsum double precision const, temp, sum, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero maxi1 = maxi(1) maxi2 = maxi(2) if (maxi1 .le. 0) return if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) rteps = sqrt(eps) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) p1 = p + 1 dnp = dble(n*p) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX inmax = 0 inner = 0 iter = 0 100 continue zsum = one l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) zsum = min(zsum,sumz) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sumz+pshrnk const = (sumz*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = z(j,k) z(j,k) = temp*temp end do end if end if end do iter = iter + 1 if (l .ne. 0 .or. zsum .le. rteps) then lwork = l call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if goto 200 end if if (iter .eq. 1) then call dcopy( p, zero, 0, shape, 1) do j = 1, p sum = zero do k = 1, G sum = sum + z(j,k) end do shape(j) = sum end do call sgnrng( p, shape, 1, smin, smax) if (smin .le. 
zero) then eps = FLMAX goto 200 return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX goto 200 return end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if do k = 1, G scale(k) = temp / (pro(k)*dble(n)) end do if (temp .le. eps) then eps = FLMAX goto 200 return end if call dscal( p, one/temp, shape, 1) end if inner = 0 errin = zero if (maxi2 .le. 0) goto 120 110 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call sgnrng( p+G, w, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p if (w(j) .le. z(j,k) .and. z(j,k) .lt. w(j)*rteps) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = sum + z(j,k)/w(j) end do temp = sum/(pro(k)*dble(n*p)) scale(k) = temp do j = 1, p if (temp .le. z(j,k) .and. z(j,k) .lt. temp*rteps) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then eps = FLMAX goto 200 return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX goto 200 return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. eps) then eps = FLMAX goto 200 end if call dscal( p, one/temp, shape, 1) errin = zero do j = 1, p errin = max(abs(w(j)-shape(j))/(one+shape(j)), errin) end do do k = 1, G errin = max(abs(scale(k)-w(p+k))/(one+scale(k)), errin) end do if (errin .ge. tol2 .and. inner .lt. maxi2) goto 110 120 continue inmax = max(inner,inmax) smin = smin/temp smax = smax/temp term = zero if (Vinv .gt. 
zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX goto 200 return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX goto 200 return end if do j = 1, p s(j) = sqrt(shape(j)) end do call sgnrng( p, s, 1, smin, smax) if (smin .le. rteps) then eps = FLMAX goto 200 return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, w, 1) do j = 1, p w(j) = w(j) / s(j) end do sum = ddot(p,w,1,w,1)/scalek c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. 
maxi1) goto 100 c smin = sqrt(smin) c smax = sqrt(smax) c rcmin = FLMAX c do k = 1, G c temp = sqrt(scale(k)) c rcmin = min(rcmin,(temp*smin)/(one+temp*smax)) c end do c w(1) = rcmin lwork = 0 eps = hood 200 continue tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end subroutine msvev ( x, z, n, p, G, w, lwork, maxi, tol, * mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi, lwork double precision tol c double precision x(n,p), z(n,G), w(max(4*p,5*p-4,p+G)) double precision x(n,*), z(n,*), w(*) c double precision scale(G), shape(p), O(p,p,G), mu(p,G), pro(G) double precision scale(*), shape(*), O(p,p,*), mu(p,*), pro(*) integer p1, i, j, k, j1, inner, info double precision dnp, err, dummy double precision temp, sum, smin, smax, cs, sn double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision BIGLOG parameter (BIGLOG = 709.d0) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ tol = max(tol,zero) p1 = p + 1 err = FLMAX inner = 0 call dcopy( p, zero, 0, shape, 1) do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) if (lwork .gt. 
0) then do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then inner = info else do j = 1, p temp = z(j,k) temp = temp*temp shape(j) = shape(j) + temp z(j,k) = temp end do end if end if else err = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do c inner iteration estimates scale and shape c pro now contains n*pro if (inner .ne. 0 .or. err .eq. zero) then lwork = inner call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if lwork = 0 call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if do k = 1, G scale(k) = temp / (pro(k)*dble(n)) end do if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) c iteration to estimate scale and shape c pro now contains n*pro if (maxi .le. 0) goto 200 100 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call absrng( p, w, 1, smin, smax) if (smin .le. one .and. one .ge. smin*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p sum = sum + z(j,k)/w(j) end do temp = (sum/pro(k))/dble(p) scale(k) = temp if (temp .lt. one .and. one .ge. 
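The msvev loop above accumulates, for each component k, an upper-triangular factor of the weighted, centered data via Givens rotations (drotg/drot), then calls dgesvd and squares the singular values to obtain the eigenvalues of the weighted scatter matrix. A minimal NumPy sketch of that same computation (the helper name is illustrative, not mclust's API):

```python
import numpy as np

def weighted_sv_squared(x, z_k, mu_k):
    """Squared singular values of sqrt(z_ik) * (x_i - mu_k).

    These equal the eigenvalues of the weighted scatter matrix
    r.T @ r of component k, which is what the Fortran code obtains
    by Givens-rotation accumulation followed by dgesvd.
    """
    r = np.sqrt(z_k)[:, None] * (x - mu_k)   # rows scaled by sqrt(z_ik)
    s = np.linalg.svd(r, compute_uv=False)   # singular values, descending
    return s ** 2                            # eigenvalues of r.T @ r
```

The triangular accumulation in the Fortran version avoids forming the n-by-p matrix explicitly; the result is the same set of eigenvalues.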
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if do j = 1, p shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if c normalize the shape matrix sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do j = 1, p err = max(abs(w(j)-shape(j))/(one+shape(j)), err) end do do k = 1, G err = max(abs(scale(k)-w(p+k))/(one+scale(k)), err) end do if (err .ge. tol .and. inner .lt. maxi) goto 100 200 continue call dscal( G, one/dble(n), pro, 1) tol = err maxi = inner return end subroutine msvevp( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * w, lwork, maxi, tol, * mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi, lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision tol c double precision x(n,p), z(n,G), w(lwork) double precision x(n,*), z(n,*), w( * ) c double precision mu(p,G), pro(G) double precision mu(p,*), pro(*) c double precision scale(G), shape(p), O(p,p,G) double precision scale(*), shape(*), O(p,p,*) integer p1, i, j, k, l, j1, inner, info double precision sum, term, temp, err, smin, smax double precision sumz, cs, sn, dummy, const double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG 
parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (pshrnk .le. zero) pshrnk = zero tol = max(tol,zero) p1 = p + 1 err = FLMAX inner = 0 l = 0 call dcopy( p, zero, 0, shape, 1) do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .lt. sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sumz+pshrnk const = (sumz*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = z(j,k) temp = temp*temp shape(j) = shape(j) + temp z(j,k) = temp end do end if else err = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (l .ne. 0 .or. err .eq. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. 
BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if do k = 1, G scale(k) = temp / (pro(k)*dble(n)) end do if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dscal( p, one/temp, shape, 1) if (maxi .le. 0) goto 200 100 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call sgnrng( p+G, w, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p if (w(j) .le. z(j,k) .and. z(j,k) .ge. w(j)*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = sum + z(j,k)/w(j) end do temp = sum/(pro(k)*dble(n*p)) scale(k) = temp do j = 1, p if (temp .le. z(j,k) .and. z(j,k) .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do j = 1, p err = max(abs(w(j)-shape(j))/(one+shape(j)), err) end do do k = 1, G err = max(abs(scale(k)-w(p+k))/(one+scale(k)), err) end do if (err .ge. tol .and. inner .lt. 
maxi) goto 100 200 continue lwork = l tol = err maxi = inner return end subroutine esvii ( x, mu, sigsq, pro, n, p, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), sigsq(G), pro(G[+1]) double precision mu(p,*), sigsq(*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision prok, sigsqk, sigmin double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. zero) then hood = FLMAX return end if do k = 1, G c prok = pro(k) sigsqk = sigsq(k) const = dble(p)*(pi2log+log(sigsq(k))) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (abs(temp) .ge. RTMAX) then hood = FLMAX return end if if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum/sigsqk)/two) if (sigsqk .lt. one .and. sum .ge. sigsqk*FLMAX) then hood = FLMAX return end if z(i,k) = -(const+sum/sigsqk)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. 
zero) then
         nz = nz + 1
c        call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1)
         call dcopy( n, log(Vinv), 0, z(1,nz), 1)
      end if

c     hood = zero
c     do i = 1, n
c       sum = zero
c       do k = 1, nz
c         sum = sum + z(i,k)
c       end do
c       hood = hood + log(sum)
c       call dscal( nz, (one/sum), z(i,1), n)
c     end do

      hood = zero
      do i = 1, n
        tmin =  FLMAX
        tmax = -FLMAX
        do k = 1, nz
          prok = pro(k)
          if (prok .eq. zero) then
            z(i,k) = zero
          else
            temp   = log(prok) + z(i,k)
            tmin   = min(tmin,temp)
            tmax   = max(tmax,temp)
            z(i,k) = temp
          end if
        end do
        sum = zero
        do k = 1, nz
          if (pro(k) .ne. zero) then
            temp = z(i,k) - tmax
            if (temp .ge. SMALOG) then
              z(i,k) = exp(temp)
              sum    = sum + z(i,k)
            else
              z(i,k) = zero
            end if
          end if
        end do
        hood = hood + (log(sum)+tmax)
        if (sum .lt. one .and. one .ge. sum*FLMAX) then
          hood = FLMAX
          return
        end if
        call dscal( nz, (one/sum), z(i,1), n)
      end do

      return
      end

      subroutine hcvii ( x, n, p, ic, ng, ns, ALPHA, v, nd, d)

c This function is part of the MCLUST software described at
c     http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c     http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer            n, p, ic(n), ng, ns, nd

c     double precision   x(n,p), v(p),
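The E-step block above normalizes the responsibilities on the log scale: it shifts each row by its maximum log term before exponentiating, so that densities far below the floating-point range still produce valid posteriors and a finite log-likelihood. A minimal NumPy sketch of the same log-sum-exp device (names are illustrative, not mclust's API):

```python
import numpy as np

def responsibilities(log_dens, pro):
    """Stable E-step: z[i,k] proportional to pro[k] * exp(log_dens[i,k]).

    log_dens : (n, G) per-component log densities
    pro      : (G,) mixing proportions
    Returns the (n, G) responsibility matrix and the log-likelihood,
    using the same trick as the Fortran loop: subtract the row maximum
    ('tmax' above) before exponentiating.
    """
    logz = np.log(pro) + log_dens           # log(pro_k) + log phi_k(x_i)
    tmax = logz.max(axis=1, keepdims=True)  # row-wise maximum
    z = np.exp(logz - tmax)                 # safe: all arguments <= 0
    rowsum = z.sum(axis=1, keepdims=True)
    loglik = float(np.sum(np.log(rowsum) + tmax))
    return z / rowsum, loglik
```

A naive `exp` of log densities near -1000 would underflow to zero for every component; the shifted version keeps the ratios exact.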
d(*), ALPHA double precision x(n,*), v(*), d(*), ALPHA integer lg, ld, ll, lo, ls, i, j, k, m integer ni, nj, nij, nopt, niop, njop integer ij, ici, icj, iopt, jopt, iold double precision ALFLOG double precision qi, qj, qij, ri, rj, rij, si, sj double precision tracei, tracej, trcij, trop double precision termi, termj, trmij, tmop double precision dij, dopt, siop, sjop double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision sqrthf parameter (sqrthf = .70710678118654757274d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) double precision ddot external ddot c------------------------------------------------------------------------------ iopt = 0 niop = 0 njop = 0 nopt = 0 tmop = 0.d0 trop = 0.d0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd if (ng .eq. 1) return ALPHA = max(ALPHA,EPSMAX) ALFLOG = log(ALPHA) c call intpr( 'ic', -1, ic, n) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c set up pointers do j = 1, n i = ic(j) if (i .ne. j) then c update sum of squares k = ic(i) if (k .eq. 
1) then ic(i) = j ic(j) = 2 call dscal( p, sqrthf, x(i,1), n) call dscal( p, sqrthf, x(j,1), n) call dcopy( p, x(j,1), n, v, 1) call daxpy( p, (-one), x(i,1), n, v, 1) call daxpy( p, one, x(j,1), n, x(i,1), n) c call dcopy( p, FLMAX, 0, x(j,1), n) c x(j,1) = ddot( p, v, 1, v, 1) / two x(j,1) = ddot( p, v, 1, v, 1) else ic(j) = 0 ni = ic(k) ic(k) = ni + 1 ri = dble(ni) rij = dble(ni+1) qj = one/rij qi = ri*qj si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(i,1), n, v, 1) c x(k,1) = qi*x(k,1) + qj*ddot(p, v, 1, v, 1) x(k,1) = x(k,1) + ddot(p, v, 1, v, 1) call dscal( p, si, x(i,1), n) call daxpy( p, sj, x(j,1), n, x(i,1), n) c call dcopy( p, FLMAX, 0, x(j,1), n) end if else ic(j) = 1 end if end do c store terms also so as not to recompute them do k = 1, ng i = ic(k) if (i .ne. 1) then ni = ic(i) ri = dble(ni) c x(i,2) = ri*log(x(i,1)+ALPHA) x(i,2) = ri*log((x(i,1)+ALPHA)/ri) end if end do c call intpr( 'ic', -1, ic, n) c call dblepr( 'trace', -1, x(1,1), n) c call dblepr( 'term', -1, x(1,2), n) c compute change in likelihood and determine minimum dopt = FLMAX ij = 0 do j = 2, ng nj = ic(j) if (nj .eq. 1) then tracej = zero termj = ALFLOG rj = one else tracej = x(nj,1) termj = x(nj,2) nj = ic(nj) rj = dble(nj) end if do i = 1, (j-1) ni = ic(i) if (ni .eq. 1) then tracei = zero termi = ALFLOG ri = one else tracei = x(ni,1) termi = x(ni,2) ni = ic(ni) ri = dble(ni) end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) c trcij = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trcij = (tracei + tracej) + ddot(p,v,1,v,1) c trmij = rij*log(trcij+ALPHA) trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. 
dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j end if end do end do c call dblepr( 'dij', -1, d, (l*(l-1))/2) if (ns .eq. 1) then if (iopt .lt. jopt) then x(1,1) = dble(iopt) x(1,2) = dble(jopt) else x(1,1) = dble(jopt) x(1,2) = dble(iopt) end if d(1) = dopt return end if if (niop .ne. 1) ic(ic(iopt)) = 0 if (njop .ne. 1) ic(ic(jopt)) = 0 ls = 1 100 continue c if (.false.) then c ij = 1 c jj = 1 c do j = 2, n c nj = ic(j) c if (nj .ne. 0 .and. abs(nj) .le. n) then c call dblepr( 'dij', -1, d(ij), jj) c ij = ij + jj c jj = jj + 1 c end if c end do c end if call dscal( p, siop, x(iopt,1), n) call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) if (jopt .ne. lg) then call wardsw( jopt, lg, d) call dcopy( p, x(lg,1), n, x(jopt,1), n) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if ic(iopt) = lg ic(lg) = nopt x(lg,1) = trop x(lg,2) = tmop d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) lo = lo - 1 lg = lg - 1 ld = ld - lg iold = iopt iopt = -1 jopt = -1 dopt = FLMAX ni = nopt ri = dble(ni) tracei = trop termi = tmop ij = ((iold-1)*(iold-2))/2 if (iold .gt. 1) then do j = 1, (iold - 1) nj = ic(j) if (nj .ne. 1) then tracej = x(nj,1) termj = x(nj,2) nj = ic(nj) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(iold,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) c trcij = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trcij = (tracei + tracej) + ddot(p,v,1,v,1) c trmij = rij*log(trcij+ALPHA) trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = j jopt = iold nopt = nij niop = ni njop = nj sjop = si siop = sj end if end do end if if (iold .lt. lg) then i = iold ij = ij + i do j = (iold + 1), lg nj = ic(j) if (nj .ne. 
1) then tracej = x(nj,1) termj = x(nj,2) nj = ic(nj) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one /rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(iold,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) c trcij = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trcij = (tracei + tracej) + ddot(p,v,1,v,1) c trmij = rij*log(trcij+ALPHA) trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = iold jopt = j nopt = nij niop = ni njop = nj siop = si sjop = sj end if ij = ij + i i = j end do end if c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 dopt = d(1) do i = 2, ld qi = d(i) if (qi .le. dopt) then ij = i dopt = qi end if end do if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if i = ic(iopt) j = ic(jopt) if (iopt .ne. iold .and. jopt .ne. iold) then if (i .ne. 1) then tracei = x(i,1) termi = x(i,2) niop = ic(i) ri = dble(niop) else tracei = zero termi = ALFLOG niop = 1 ri = one end if if (j .ne. 1) then tracej = x(j,1) termj = x(j,2) njop = ic(j) rj = dble(njop) else tracej = zero termj = ALFLOG njop = 1 rj = one end if nopt = niop + njop rij = dble(nopt) qij = one/rij qi = ri*qij qj = rj*qij siop = sqrt(qi) sjop = sqrt(qj) call dcopy( p, x(iopt,1), n, v, 1) call dscal( p, sjop, v, 1) call daxpy( p, (-siop), x(jopt,1), n, v, 1) c trop = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trop = (tracei + tracej) + ddot(p,v,1,v,1) c tmop = rij*log(trop+ALPHA) tmop = rij*log((trop+ALPHA)/rij) end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. 
sj) then x(1,1) = si x(1,2) = sj else x(1,1) = sj x(1,2) = si end if lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. icj) then x(k,1) = dble(ici) x(k,2) = dble(icj) else x(k,1) = dble(icj) x(k,2) = dble(ici) end if end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine mevii ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), sigsq(G), pro(G[+1]) double precision mu(p,*), sigsq(*), pro( * ) integer nz, iter, i, j, k double precision sumz, sum, temp, const, term, zsum double precision sigmin, sigsqk, hold, hood, err double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. 
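The hcvii routine that ends above performs model-based hierarchical agglomeration for the spherical, unequal-volume (VII) model: at each stage it merges the pair of groups that least increases the criterion sum over k of n_k*log((tr W_k + ALPHA)/n_k), updating the merged trace by a rank-one term rather than recomputing it. A hedged NumPy sketch of the merge cost (the helper name is mine, not mclust's):

```python
import numpy as np

def vii_merge_cost(ni, tri, mi, nj, trj, mj, alpha=2.220446049250313e-16):
    """Change in the VII clustering criterion when merging two groups.

    ni, nj   : group sizes
    tri, trj : traces of the within-group sum-of-squares matrices
    mi, mj   : group means (1-D arrays)
    Mirrors the Fortran update: the merged trace is
      tr_ij = tr_i + tr_j + (ni*nj/(ni+nj)) * ||m_i - m_j||^2
    and each group contributes n*log((tr + alpha)/n) to the criterion.
    """
    nij = ni + nj
    d = np.asarray(mi, float) - np.asarray(mj, float)
    trij = tri + trj + (ni * nj / nij) * d.dot(d)
    term = lambda n, tr: n * np.log((tr + alpha) / n)
    return term(nij, trij) - (term(ni, tri) + term(nj, trj))
```

Pairs with the smallest cost are merged first, which is the `dij .le. dopt` search in the loops above; the scaled-sum storage of x rows lets the Fortran code compute the rank-one term with a single dot product.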
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(sumz,zsum) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = zero do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum z(i,k) = sum end do sigsq(k) = (sigsqk/sumz)/dble(p) else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G c temp = pro(k) sigsqk = sigsq(k) const = dble(p)*(pi2log+log(sigsqk)) do i = 1, n c z(i,k) = temp*exp(-(const+z(i,k)/sigsqk)/two) z(i,k) = -(const+z(i,k)/sigsqk)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do const = zero - tmax sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) + const if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)-const) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meviip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), sigsq(G), pro(G[+1]) double precision mu(p,*), sigsq(*), pro( * ) integer nz, iter, i, j, k double precision sumz, sum, temp, const, term, zsum double precision sigmin, sigsqk, hold, hood, err double precision prok, tmin, tmax, ViLog, rteps double precision pmupmu, cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. 
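Each EM routine in this file stops when the relative change in log-likelihood falls below tol, computed as err = |hold - hood| / (1 + |hood|); the `+ 1` in the denominator keeps the test meaningful when the log-likelihood is near zero. A minimal sketch of that stopping rule (function name is illustrative):

```python
def em_converged(loglik_old, loglik_new, tol=1.0e-5):
    """Relative-change stopping rule used throughout the Fortran EM loops:
    err = |l_old - l_new| / (1 + |l_new|), converged when err <= tol."""
    err = abs(loglik_old - loglik_new) / (1.0 + abs(loglik_new))
    return err, err <= tol
```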
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 pmupmu = ddot(p,pmu,1,pmu,1) 100 continue iter = iter + 1 zsum = one do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(sumz,zsum) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = pscale do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum end do temp = pmupmu + ddot(p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot(p,mu(1,k),1,pmu,1) const = sumz+pshrnk sigsqk = sigsqk + ((sumz*pshrnk)/const) * temp c sigsq(k) = sigsqk/(pdof+(sumz+one)*dble(p)+two) temp = pdof+sumz*dble(p)+two if (pshrnk .gt. zero) temp = temp + dble(p) sigsq(k) = sigsqk/temp call dscal( p, sumz/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G sigsqk = sigsq(k) const = dble(p)*(pi2log+log(sigsqk)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do z(i,k) = -(const+sum/sigsqk)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. 
zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do const = zero - tmax sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) + const if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)-const) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter if (pshrnk .gt. zero) then cmu = dble(p)*(log(pshrnk)-pi2log)/two const = pdof/two cgam = const*log(pscale/two)-dlngam(const) rmu = zero rgam = zero do k = 1, G term = log(sigsq(k)) temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) rmu = rmu + (pshrnk*temp)/sigsq(k) rgam = rgam + ((pdof+3.d0)*term - (pscale/sigsq(k))) end do rmu = -rmu /two rgam = -rgam/two pdof = (dble(G)*cmu+rmu) + (dble(G)*cgam+rgam) else pdof = FLMAX end if return end subroutine msvii ( x, z, n, p, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), mu(p,G), sigsq(G), pro(G) double precision x(n,*), z(n,*), mu(p,*), sigsq(*), pro(*) integer i, j, k double precision sum, sumz, temp, sigsqk double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .le. 
sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = zero do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum end do temp = sumz*dble(p) if (temp .ge. one .or. sigsqk .le. temp*FLMAX) then sigsq(k) = sigsqk/temp else sigsq(k) = FLMAX end if else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do return end subroutine msviip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), sigsq(G), pro(G) double precision mu(p,*), sigsq(*), pro(*) integer i, j, k double precision sumz, sum, temp double precision sigsqk, const, pmupmu double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pmupmu = ddot(p,pmu,1,pmu,1) do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .lt. sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = pscale do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. 
RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum end do temp = pmupmu + ddot(p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot(p,mu(1,k),1,pmu,1) const = sumz+pshrnk sigsqk = sigsqk + ((sumz*pshrnk)/const) * temp temp = pdof+sumz*dble(p)+two if (pshrnk .gt. zero) temp = temp + dble(p) sigsq(k) = sigsqk/temp call dscal( p, sumz/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do return end subroutine esvvi ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), scale(G), shape(p,G), pro(G[+1]) double precision mu(p,*), scale(*), shape(p,*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, scale, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. 
zero) then hood = FLMAX return end if temp = sqrt(scale(k)) do j = 1, p shape(j,k) = temp*sqrt(shape(j,k)) end do end do do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (shape(j,k) .lt. one .and. * abs(temp) .ge. shape(j,k)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j,k) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine mevvi ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol double precision x(n,*), z(n, * ) double precision mu(p,*), scale(*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, temp, term, scalek, epsmin double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if tol = max(tol,zero) eps = max(eps,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) call dcopy( p, zero, 0, mu(1,k), 1) sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sum c pro(k) now contains n_k zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum end do end if end do if (zsum .le. 
rteps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = -FLMAX maxi = iter return end if epsmin = FLMAX do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .le. zero) then scale(k) = zero else temp = zero do j = 1, p temp = temp + log(shape(j,k)) end do temp = temp/dble(p) if (temp .gt. BIGLOG) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if scale(k) = temp/pro(k) epsmin = min(temp,epsmin) if (temp .le. eps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do if (.not. EQPRO) then call dscal( G, one/dble(n), pro, 1) else if (Vinv .le. zero) then call dscal( G, one/dble(G), pro, 1) end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (epsmin .le. eps) then tol = err eps = -FLMAX maxi = iter return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if end do do k = 1, G scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scalek))/two) z(i,k) = -(const+(sum/scalek))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine mevvip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), scale(G), shape(p,G), pro(G[+1]) double precision mu(p,*), scale(*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sumz, sum, temp, term, scalek, epsmin double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) call dcopy( p, zero, 0, mu(1,k), 1) sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sumz zsum = min(zsum,sumz) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) term = pshrnk+sumz const = (pshrnk*sumz)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) end if end do if (zsum .le. rteps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = -FLMAX maxi = iter return end if c pro(k) now contains n_k epsmin = FLMAX term = pdof+two if (pshrnk .gt. zero) term = term + one do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .eq. zero) then scale(k) = zero else sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if c pro(k) contains n_k scale(k) = temp/(pro(k)+term) epsmin = min(temp,epsmin) if (temp .le. eps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do if (.not. EQPRO) then call dscal( G, one/dble(n), pro, 1) else if (Vinv .le. zero) then call dcopy( G, one/dble(G), 0, pro, 1) end if term = zero if (Vinv .gt. 
zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (epsmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if end do do k = 1, G scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scalek))/two) z(i,k) = -(const+(sum/scalek))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100
      tol  = err
      eps  = hood
      maxi = iter
      return
      end

      subroutine msvvi ( x, z, n, p, G, mu, scale, shape, pro)

c This function is part of the MCLUST software described at
c http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer n, p, G

c     double precision x(n,p), z(n,G)
      double precision x(n,*), z(n,*)

c     double precision mu(p,G), scale(G), shape(p,G), pro(G)
      double precision mu(p,*), scale(*), shape(p,*), pro(*)

      integer i, j, k

      double precision sum, temp, smin, smax

      double precision zero, one
      parameter (zero = 0.d0, one = 1.d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision SMALOG, BIGLOG
      parameter (SMALOG = -708.d0, BIGLOG = 709.d0)

c------------------------------------------------------------------------------

      do k = 1, G
        call dcopy( p, zero, 0, shape(1,k), 1)
        sum = zero
        call dcopy( p, zero, 0, mu(1,k), 1)
        do i = 1, n
          temp = z(i,k)
          sum  = sum + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
          z(i,k) = sqrt(temp)
        end do
        pro(k) = sum
        if (sum .ge. one .or. one .lt. sum*FLMAX) then
          call dscal( p, (one/sum), mu(1,k), 1)
        else
          call dcopy( p, FLMAX, 0, mu(1,k), 1)
        end if
      end do

c pro(k) now contains n_k

      do k = 1, G
        if (mu(1,k) .ne. FLMAX) then
          do j = 1, p
            sum = zero
            do i = 1, n
              temp = z(i,k)*(x(i,j) - mu(j,k))
              sum  = sum + temp*temp
            end do
            shape(j,k) = shape(j,k) + sum
          end do
        else
          call dcopy( p, FLMAX, 0, shape(1,k), 1)
        end if
      end do

      do k = 1, G
        call sgnrng(p, shape(1,k), 1, smin, smax)
        if (smin .le. zero) then
          scale(k) = zero
          call dcopy( p, FLMAX, 0, shape(1,k), 1)
          goto 100
        end if
        if (smax .eq. FLMAX) then
          scale(k) = FLMAX
          call dcopy( p, FLMAX, 0, shape(1,k), 1)
          goto 100
        end if
        sum = zero
        do j = 1, p
          sum = sum + log(shape(j,k))
        end do
        temp = sum/dble(p)
        if (temp .gt. BIGLOG) then
          scale(k) = FLMAX
          call dcopy( p, FLMAX, 0, shape(1,k), 1)
          goto 100
        end if
        if (temp .lt.
SMALOG) then temp = zero scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if temp = exp(temp) if (pro(k) .lt. one .and. temp .ge. pro(k)*FLMAX) then scale(k) = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if scale(k) = temp/pro(k) if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if call dscal( p, one/temp, shape(1,k), 1) 100 continue end do call dscal( G, one/dble(n), pro, 1) return end subroutine msvvip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale(G), shape(p,G), pro(G) double precision mu(p,*), scale(*), shape(p,*), pro(*) integer i, j, k double precision sumz, sum, temp, term double precision smin, smax, const double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sumz if (sumz .ge. one .or. one .le. 
sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) term = pshrnk+sumz const = (pshrnk*sumz)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do c pro(k) now contains n_k do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) else if (smax .eq. FLMAX) then scale(k) = FLMAX else sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale(k) = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) else if (temp .lt. SMALOG) then temp = zero scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) else temp = exp(temp) c pro(k) contains n_k term = pro(k) + pdof + two if (pshrnk .gt. zero) term = term + one scale(k) = temp/term if (temp .ge. one .or. one .le. 
temp*FLMAX) then call dscal( p, one/temp, shape(1,k), 1) else call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end if end if end do call dscal( G, one/dble(n), pro, 1) return end subroutine esvvv ( CHOL, x, mu, Sigma, pro, n, p, G, Vinv, * w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c character CHOL logical CHOL c integer n, p, G integer n, p, G double precision hood, Vinv c double precision x(n,p), w(p), z(n,G[+1]) double precision x(n,*), w(*), z(n, * ) c double precision mu(p,G), Sigma(p,p,G), pro(G[+1]) double precision mu(p,*), Sigma(p,p,*), pro( * ) integer nz, p1, info, i, j, k double precision const, detlog, temp, prok, tmin, tmax double precision umin, umax, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ p1 = p + 1 c if (CHOL .eq. 'N') then if (.not. CHOL) then do k = 1, G call dpotrf( 'U', p, Sigma(1,1,k), p, info) w(1) = dble(info) if (info .ne. 0) then hood = FLMAX return end if end do end if do k = 1, G call absrng( p, Sigma(1,1,k), p1, umin, umax) if (umax .le. one .and. umax .ge. umin*RTMAX) then w(1) = zero hood = FLMAX return end if if (umax .ge. one .and. umin .le. 
umax*RTMIN) then w(1) = zero hood = FLMAX return end if end do do k = 1, G detlog = zero do j = 1, p detlog = detlog + log(abs(Sigma(j,j,k))) end do const = dble(p)*pi2log/two + detlog c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, Sigma(1,1,k), p, w, 1) temp = ddot( p, w, 1, w, 1)/two c z(i,k) = prok*exp(-(const+temp)) z(i,k) = -(const+temp) end do end do w(1) = zero if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then w(1) = zero hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do w(1) = zero return end subroutine hcvvv ( x, n, p, ic, ng, ns, ALPHA, BETA, * v, u, s, r, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, ic(n), ng, ns, nd double precision ALPHA, BETA c double precision x(n,p+1), v(p), u(p,p), s(p,p) c double precision r(p,p), d(ng*(ng-1)/2) double precision x(n,*), v(*), u(p,*), s(p,*) double precision r(p,*), d(*) integer psq, pm1, pp1 integer i, j, k, l, m, ij, iold integer lg, ld, ll, lo, ls integer ici, icj, ni, nj, nij integer nopt, niop, njop, iopt, jopt double precision trcij, trmij, trop, tmop double precision traci, tracj, termi, termj double precision qi, qj, qij, si, sj, sij, ri, rj, rij double precision dij, dopt, siop, sjop double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision rthalf parameter (rthalf = .7071067811865476d0) double precision ddot, vvvtij external ddot, vvvtij double precision BETA0, ALPHA0, ABLOG common /VVVMCL/ BETA0, ALPHA0, ABLOG save /VVVMCL/ double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) c------------------------------------------------------------------------------ iopt = 0 niop = 0 nopt = 0 tmop = 0.d0 trop = 0.d0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd psq = p*p pm1 = p-1 pp1 = p+1 if (ng .eq. 1) return ALPHA = max(ALPHA,EPSMAX) BETA0 = BETA ALPHA0 = ALPHA ABLOG = log(BETA*ALPHA) c call intpr( 'ic', -1, ic, n) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. ng) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. 
k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c set up pointers if (ng .eq. n) goto 4 do j = n, ng+1, -1 icj = ic(j) i = ic(icj) ic(icj) = j if (i .ne. icj) then ic(j) = i else ic(j) = j end if end do 4 continue c call intpr( 'ic', -1, ic, n) c initialize by simulating merges do k = 1, ng j = ic(k) if (j .ne. k) then c non-singleton call dcopy( psq, zero, 0, r, 1) trcij = zero l = 1 10 continue m = l + 1 qj = one/dble(m) qi = dble(l)*qj si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(k,1), n, v, 1) trcij = trcij + ddot( p, v, 1, v, 1) call dscal( p, si, x(k,1), n) call daxpy( p, sj, x(j,1), n, x(k,1), n) call mclrup( m, p, v, r, p) l = m i = ic(j) if (i .eq. j) goto 20 j = i goto 10 20 continue c d(ll+k) = trcij c copy triangular factor into the rows of x j = k m = p do i = 1, min(l-1,p) j = ic(j) call dcopy( m, r(i,i), p, x(j,i), n) m = m - 1 end do ij = j if (l .ge. p) then do m = p, l icj = ic(j) ic(j) = -k j = icj end do end if ic(ij) = n+l x(k, pp1) = zero if (l .ge. 2) then x( k, pp1) = trcij trmij = vvvtij( l, p, r, sj, trcij) x(ic(k),pp1) = trmij end if else ic(k) = 1 c d(ll+k) = zero end if end do c call intpr( 'ic', -1, ic, n) c call dblepr( '', -1, x(1,pp1), n) c call dblepr( 'trac', -1, d(ll+1), ng) c call dblepr( 'term', -1, term, n) c compute change in likelihood and determine minimum dopt = FLMAX ij = 0 do j = 2, ng icj = ic(j) nj = 1 if (icj .eq. 1) then tracj = zero termj = ABLOG do i = 1, (j-1) ni = 1 ici = ic(i) if (ici .eq. 
1) then nij = 2 rij = two si = rthalf sj = rthalf sij = rthalf call dcopy( p, x(i,1), n, v, 1) call daxpy( p, (-one), x(j,1), n, v, 1) call dscal( p, rthalf, v, 1) c trcij = half*ddot( p, v, 1, v, 1) trcij = ddot( p, v, 1, v, 1) call dcopy( p, v, 1, u, p) c trmij = rij*log(BETA*trcij+ALPHA) trmij = two*log(BETA*(trcij+ALPHA)/two) termi = ABLOG else m = p l = ici 110 continue call dcopy( m, x(l,ni), n, u(ni,ni), p) ni = ni + 1 m = m - 1 l = ic(l) if (l .le. n) goto 110 ni = l - n c traci = d(ll+i) c traci = trac(i) c termi = vvvtrm(i,ni,n,p,ic,x,traci) c termi = term(i) traci = x( i , pp1) termi = x(ic(i), pp1) ri = dble(ni) nij = ni + 1 rij = dble(nij) qij = one/rij qi = ri*qij si = sqrt(qi) sj = sqrt(qij) sij = sj call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = traci + ddot(p,v,1,v,1) call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) end if dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j m = p do k = 1, min(nij-1,p) call dcopy( m, u(k,k), p, r(k,k), p) m = m - 1 end do end if end do else m = p l = icj 120 continue call dcopy( m, x(l,nj), n, s(nj,nj), p) nj = nj + 1 m = m - 1 l = ic(l) if (l .le. n) goto 120 nj = l - n c tracj = d(ll+j) c termj = vvvtrm(j,nj,n,p,ic,x,tracj) tracj = x( j , pp1) termj = x( ic(j), pp1) rj = dble(nj) do i = 1, (j-1) m = p do k = 1, min(nj-1,p) call dcopy( m, s(k,k), p, u(k,k), p) m = m - 1 end do ni = 1 ici = ic(i) if (ici .eq. 
1) then nij = nj + 1 rij = dble(nij) qij = one/rij qi = qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = tracj + ddot(p,v,1,v,1) termi = ABLOG else m = p l = ici k = nj + 1 130 continue call dcopy( m, x(l,ni), n, v, 1) call mclrup( k, m, v, u(ni,ni), p) ni = ni + 1 m = m - 1 l = ic(l) if (l .le. n) goto 130 ni = l - n c traci = d(ll+i) c termi = vvvtrm(i,ni,n,p,ic,x,traci) traci = x( i , pp1) termi = x(ic(i), pp1) ri = dble(ni) nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = ( traci + tracj) + ddot(p,v,1,v,1) end if call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j m = p do k = 1, min(nij-1,p) call dcopy( m, u(k,k), p, r(k,k), p) m = m - 1 end do end if end do end if end do c if (.false.) then c i = 1 c ij = 1 c do j = 2, ng c call dblepr( 'dij', -1, d(ij), i) c ij = ij + i c i = j c end do c end if if (ns .eq. 1) then if (iopt .lt. jopt) then x(1,1) = iopt x(1,2) = jopt else x(1,1) = jopt x(1,2) = iopt end if d(1) = dopt return end if ls = 1 200 continue call dcopy( p, x(iopt,1), n, v, 1) call dscal( p, siop, v, 1) call daxpy( p, sjop, x(jopt,1), n, v, 1) if (jopt .ne. lg) then call wardsw( jopt, lg, d) call dcopy( p, x(lg,1), n, x(jopt,1), n) m = ic(jopt) icj = ic(lg) if (icj .ne. 1) x( jopt, pp1) = x( lg, pp1) ic(jopt) = icj ic(lg) = m end if if (niop .eq. 1) then ic(iopt) = lg else l = ic(iopt) do k = 1, min(niop-1,p) m = l l = ic(l) end do if (l .lt. n) call intpr("l .lt. 
n", 8, l, 1) ic(m) = lg end if l = ic(iopt) do k = 1, min(nopt-1,p) call dcopy( p, r(1,1), p, x(l,1), n) m = l l = ic(l) end do ic(m) = nopt + n c call intpr('ic', 2, ic, n) c term(iopt) = tmop c trac(iopt) = trop x(iopt, pp1) = zero if (nopt .ge. 2) then x(iopt,pp1) = trop x(ic(iopt),pp1) = tmop endif call dcopy( p, v, 1, x(iopt,1), n) d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) lo = lo - 1 lg = lg - 1 ld = ld - lg iold = iopt dopt = FLMAX ni = nopt ri = dble(ni) termi = tmop traci = trop ij = ((iold-1)*(iold-2))/2 if (iold .gt. 1) then do j = 1, (iold-1) call dcopy(psq, zero, 0, u, 1) m = p do k = 1, min(ni-1,p) call dcopy(m, r(k,k), p, u(k,k), p) m = m - 1 end do nj = 1 icj = ic(j) if (icj .eq. 1) then nij = ni + 1 rij = dble(nij) qij = one/rij qi = ri*qij si = sqrt(qi) sj = sqrt(qij) sij = sj call dcopy(p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(iold,1), n, v, 1) trcij = traci + ddot(p,v,1,v,1) tracj = zero termj = ABLOG else m = p l = icj k = ni + 1 310 continue call dcopy( m, x(l,nj), n, v, 1) call mclrup( k, m, v, u(nj,nj), p) k = k + 1 nj = nj + 1 m = m - 1 l = ic(l) if (l .le. n) goto 310 nj = l - n c call vvvget(j,nj,n,p,ic,x,tracj,termj) tracj = x( j ,pp1) termj = x(ic(j),pp1) rj = dble(nj) nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(iold,1), n, v, 1) trcij = ( traci + tracj) + ddot(p,v,1,v,1) end if call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = nj njop = ni siop = sj sjop = si iopt = j jopt = iold m = p do k = 1, min(nij-1,p) call dcopy(m, u(k,k), p, s(k,k), p) m = m - 1 end do end if end do end if if (iold .lt. 
lg) then
        i  = iold
        ij = ij + i
        do j = (iold+1), lg
          call dcopy(psq, zero, 0, u, 1)
          m = p
          do k = 1, min(ni-1,p)
            call dcopy(m, r(k,k), p, u(k,k), p)
            m = m - 1
          end do
          nj  = 1
          icj = ic(j)
          if (icj .eq. 1) then
            nij = ni + 1
            rij = dble(nij)
            qij = one/rij
            qi  = ri*qij
            si  = sqrt(qi)
            sj  = sqrt(qij)
            sij = sj
            call dcopy(p, x(j,1), n, v, 1)
            call dscal( p, si, v, 1)
            call daxpy( p, (-sj), x(iold,1), n, v, 1)
            trcij = traci + ddot(p,v,1,v,1)
            termj = ABLOG
          else
            m = p
            l = icj
            k = ni + 1
 410        continue
            call dcopy( m, x(l,nj), n, v, 1)
            call mclrup( k, m, v, u(nj,nj), p)
            k = k + 1
            nj = nj + 1
            m  = m - 1
            l  = ic(l)
            if (l .le. n) goto 410
            nj = l - n
c           call vvvget(j,nj,n,p,ic,x,tracj,termj)
            tracj = x( j ,pp1)
            termj = x(ic(j),pp1)
            rj  = dble(nj)
            nij = ni + nj
            rij = dble(nij)
            qij = one/rij
            qi  = ri*qij
            qj  = rj*qij
            si  = sqrt(qi)
            sj  = sqrt(qj)
            sij = sqrt(qij)
            call dcopy(p, x(j,1), n, v, 1)
            call dscal( p, si, v, 1)
            call daxpy( p, (-sj), x(iold,1), n, v, 1)
            trcij = ( traci + tracj) + ddot(p,v,1,v,1)
          end if
          call mclrup( nij, p, v, u, p)
          trmij = vvvtij( nij, p, u, sij, trcij)
          dij   = trmij - (termi + termj)
          d(ij) = dij
          ij = ij + i
          i  = j
          if (dij .le. dopt) then
            dopt = dij
            trop = trcij
            tmop = trmij
            nopt = nij
            niop = ni
            njop = nj
            siop = si
            sjop = sj
            iopt = iold
            jopt = j
            m = p
            do k = 1, min(nij-1,p)
              call dcopy(m, u(k,k), p, s(k,k), p)
              m = m - 1
            end do
          end if
        end do
      end if

c     update d and find max
      jopt = 2
      iopt = 1
      dopt = d(1)
      if (lg .eq. 2) goto 900
      ij = 1
      do i = 2, ld
        qi = d(i)
        if (qi .le. dopt) then
          ij   = i
          dopt = qi
        end if
      end do

c     call dblepr("d", 1, d, nd)
c     call dblepr("d", 1, d, ld)

      if (ij .gt. 1) then
        do i = 2, ij
          iopt = iopt + 1
          if (iopt .ge. jopt) then
            jopt = jopt + 1
            iopt = 1
          end if
        end do
      end if

      do k = 1, p
        call dcopy( p, zero, 0, r(1,k), 1)
      end do

      if (iopt .ne. iold .and. jopt .ne. iold) then
        i = iopt
        j = jopt
        nj  = 1
        icj = ic(j)
        ni  = 1
        ici = ic(i)
        if (icj .eq. 1) then
          termj = ABLOG
          if (ici .eq. 1) then
            nij = 2
            rij = two
            si  = rthalf
            sj  = rthalf
            call dcopy(p, x(i,1), n, v, 1)
            call daxpy( p, (-one), x(j,1), n, v, 1)
            call dscal( p, rthalf, v, 1)
            trcij = ddot( p, v, 1, v, 1)
            call dcopy( p, v, 1, r, p)
            termi = ABLOG
          else
            m = p
            l = ici
 610        continue
            call dcopy( m, x(l,ni), n, r(ni,ni), p)
            ni = ni + 1
            m  = m - 1
            l  = ic(l)
            if (l .le. n) goto 610
            ni = l - n
c           call vvvget(i,ni,n,p,ic,x,traci,termi)
            traci = x( i , pp1)
            termi = x(ic(i), pp1)
            ri  = dble(ni)
            nij = ni + 1
            rij = dble(nij)
            qij = one/rij
            qi  = ri*qij
            si  = sqrt(qi)
            sj  = sqrt(qij)
            call dcopy(p, x(i,1), n, v, 1)
            call dscal( p, sj, v, 1)
            call daxpy( p, (-si), x(j,1), n, v, 1)
            trcij = traci + ddot( p, v, 1, v, 1)
            call mclrup( nij, p, v, r, p)
          end if
        else
          m = p
          l = icj
 620      continue
          call dcopy( m, x(l,nj), n, r(nj,nj), p)
          nj = nj + 1
          m  = m - 1
          l  = ic(l)
          if (l .le. n) goto 620
          nj = l - n
c         call vvvget(j,nj,n,p,ic,x,tracj,termj)
          tracj = x( j , pp1)
          termj = x(ic(j), pp1)
          rj = dble(nj)
          if (ici .eq. 1) then
            nij = nj + 1
            rij = dble(nij)
            qij = one/rij
            qj  = rj*qij
            si  = sqrt(qij)
            sj  = sqrt(qj)
            call dcopy(p, x(i,1), n, v, 1)
            call dscal( p, sj, v, 1)
            call daxpy( p, (-si), x(j,1), n, v, 1)
            trcij = tracj + ddot( p, v, 1, v, 1)
            termi = ABLOG
          else
            m = p
            l = ici
            k = nj + 1
 630        continue
            call dcopy( m, x(l,ni), n, v, 1)
            call mclrup( k, m, v, r(ni,ni), p)
            ni = ni + 1
            m  = m - 1
            l  = ic(l)
            if (l .le. n) goto 630
            ni = l - n
c           call vvvget(i,ni,n,p,ic,x,traci,termi)
            traci = x( i , pp1)
            termi = x(ic(i), pp1)
            ri  = dble(ni)
            nij = ni + nj
            rij = dble(nij)
            qij = one/rij
            qi  = ri*qij
            qj  = rj*qij
            si  = sqrt(qi)
            sj  = sqrt(qj)
            call dcopy(p, x(i,1), n, v, 1)
            call dscal( p, sj, v, 1)
            call daxpy( p, (-si), x(j,1), n, v, 1)
            trcij = ( traci + tracj) + ddot( p,v,1,v,1)
          end if
          call mclrup( nij, p, v, r, p)
        end if
        trop = trcij
        tmop = dopt + (termi + termj)
        nopt = nij
        niop = ni
        njop = nj
        siop = si
        sjop = sj
      else
        m = p
        do k = 1, min(nopt-1,p)
          call dcopy(m, s(k,k), p, r(k,k), p)
          m = m - 1
        end do
        l = ic(iopt)
        if (l .ne. 1) then
 710      continue
          if (l .le. n) then
            l = ic(l)
            goto 710
          end if
          niop = l-n
        else
          niop = 1
        end if
        l = ic(jopt)
        if (l .ne. 1) then
 720      continue
          if (l .le. n) then
            l = ic(l)
            goto 720
          end if
          njop = l-n
        else
          njop = 1
        end if
        nopt = niop + njop
      end if

      ls = ls + 1
      if (ls .eq. ns) goto 900

      goto 200

 900  continue

      d(lo) = dopt
      lo = lo - 1
      d(lo) = dble(iopt)
      lo = lo - 1
      d(lo) = dble(jopt)

      do i = 1, ng
        ic(i) = i
      end do

      lo = nd - 1
      ld = lo
      si = d(lo)
      lo = lo - 1
      sj = d(lo)
      ic(int(sj)) = ng
      if (si .lt. sj) then
        x(1,1) = si
        x(1,2) = sj
      else
        x(1,1) = sj
        x(1,2) = si
      end if

      lg = ng + 1
      do k = 2, ns
        lo = lo - 1
        d(ld) = d(lo)
        ld = ld - 1
        lo = lo - 1
        i   = int(d(lo))
        ici = ic(i)
        lo = lo - 1
        j   = int(d(lo))
        icj = ic(j)
        if (ici .gt. icj) ic(i) = icj
        ic(j) = ic(lg-k)
        if (ici .lt. icj) then
          x(k,1) = dble(ici)
          x(k,2) = dble(icj)
        else
          x(k,1) = dble(icj)
          x(k,2) = dble(ici)
        end if
      end do

      ld = nd
      lo = 1
      do k = 1, ns
        si = d(lo)
        d(lo) = d(ld)
        d(ld) = si
        ld = ld - 1
        lo = lo + 1
      end do

      return
      end

      double precision function vvvtij( l, p, r, s, trac)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer l, p

      double precision r(p,*), s, trac

      double precision detlog

      double precision zero, one
      parameter (zero = 0.d0, one = 1.d0)

      double precision det2mc
      external det2mc

      double precision BETA, ALPHA, ABLOG
      common /VVVMCL/ BETA, ALPHA, ABLOG
      save /VVVMCL/

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      if (l .le. p) then
        vvvtij = log(BETA*(trac+ALPHA)/dble(l))
      else
        if (trac .eq. zero) then
          vvvtij = log((ALPHA*BETA)/dble(l))
        else
          detlog = det2mc( p, r, s)
          if (detlog .eq. -FLMAX) then
            vvvtij = log(BETA*(trac+ALPHA)/dble(l))
          else
            if (detlog .le. zero) then
              vvvtij = log(exp(detlog)+BETA*(trac+ALPHA)/dble(l))
            else
              vvvtij = log(one+exp(-detlog)*(BETA*(trac+ALPHA)/dble(l)))
     *                 + detlog
            end if
          end if
        end if
      end if

      vvvtij = dble(l)*vvvtij

      return
      end

      double precision function det2mc( n, u, s)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer k, n

      double precision q, s
      double precision u(n,*)

      double precision zero, two
      parameter (zero = 0.d0, two = 2.d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      det2mc = zero

      do k = 1, n
        q = u(k,k)*s
        if (abs(q) .le. zero) then
          det2mc = -FLMAX
          return
        end if
        det2mc = det2mc + log(abs(q))
      end do

      det2mc = two*det2mc

      return
      end

      subroutine mevvv ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps,
     *                   mu, U, pro, w, S)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      logical EQPRO

      integer n, p, G, maxi

      double precision Vinv, eps, tol

c     double precision x(n,p), z(n,G), w(p)
      double precision x(n,*), z(n,*), w(*)

c     double precision mu(p,G), U(p,p,G), pro(G), S(p,p)
      double precision mu(p,*), U(p,p,*), pro(*), S(p,*)

      integer nz, p1, iter, i, j, k, l, j1

      double precision piterm, hold, rcmin, rteps
      double precision temp, term, cs, sn, umin, umax
      double precision sumz, sum, detlog, const, hood, err
      double precision prok, tmin, tmax, ViLog, zsum

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision SMALOG
      parameter (SMALOG = -708.d0)

      double precision ddot
      external ddot
c------------------------------------------------------------------------------

      if (maxi .le. 0) return

      if (Vinv .gt. zero) then
        nz = G + 1
        ViLog = log(Vinv)
      else
        nz = G
        if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1)
      end if

      piterm = dble(p)*pi2log/two

      p1 = p + 1

      eps   = max(eps,zero)
      rteps = sqrt(eps)

      tol = max(tol,zero)

c     FLMAX = d1mach(2)
      hold = FLMAX/two
      hood = FLMAX
      err  = FLMAX

c     zero out the lower triangle
      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,k)
          end do
        end do
        i = 1
        do j = 2, p
          call dcopy( p-i, zero, 0, S(j,i), 1)
          i = j
        end do
        do j = 1, p
          do l = 1, p
            U(l,j,k) = S(l,j)
          end do
        end do
      end do

      iter = 0

 100  continue

      iter = iter + 1

      zsum = one

      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,k)
          end do
        end do
        do j = 1, p
          call dcopy( j, zero, 0, S(1,j), 1)
        end do
        call dcopy( p, zero, 0, mu(1,k), 1)
        sumz = zero
        do i = 1, n
          temp = z(i,k)
          sumz = sumz + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
        end do
        if (.not. EQPRO) pro(k) = sumz / dble(n)
        zsum = min(sumz,zsum)
        if (sumz .gt. rteps) then
          call dscal( p, (one/sumz), mu(1,k), 1)
          do i = 1, n
            call dcopy( p, x(i,1), n, w, 1)
            call daxpy( p, (-one), mu(1,k), 1, w, 1)
            call dscal( p, sqrt(z(i,k)), w, 1)
            j = 1
            do j1 = 2, p
              call drotg( S(j,j), w(j), cs, sn)
              call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn)
              j = j1
            end do
            call drotg( S(p,p), w(p), cs, sn)
          end do
          do j = 1, p
            call dscal( j, one/sqrt(sumz), S(1,j), 1)
          end do
        else
          call dcopy( p, FLMAX, 0, z(1,k), 1)
        end if
        do j = 1, p
          do l = 1, p
            U(l,j,k) = S(l,j)
          end do
        end do
      end do

      if (zsum .le. rteps) then
        tol  = zsum
        eps  = -FLMAX
        maxi = iter
        return
      end if

      if (Vinv .gt. zero) then
        term = zero
        do i = 1, n
          term = term + z(i,nz)
        end do
        temp = term / dble(n)
        pro(nz) = temp
        call dcopy( n, ViLog, 0, z(1,nz), 1)
        if (EQPRO) then
          temp = (one - pro(nz))/dble(G)
          call dcopy( G, temp, 0, pro, 1)
        end if
      end if

      rcmin = FLMAX
      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,K)
          end do
        end do
        call absrng( p, S, p1, umin, umax)
        rcmin = min(umin/(one+umax),rcmin)
      end do
      if (rcmin .le. rteps) then
        tol  = rcmin
        eps  = FLMAX
        maxi = iter
        return
      end if

      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,K)
          end do
        end do
c       temp = pro(k)
        detlog = zero
        do j = 1, p
          detlog = detlog + log(abs(S(j,j)))
        end do
        const = piterm+detlog
        do i = 1, n
          call dcopy( p, x(i,1), n, w, 1)
          call daxpy( p, (-one), mu(1,k), 1, w, 1)
          call dtrsv( 'U', 'T', 'N', p, S, p, w, 1)
          sum = ddot( p, w, 1, w, 1)/two
c         z(i,k) = temp*exp(-(const+sum))
          z(i,k) = -(const+sum)
        end do
      end do

      hood = zero
      do i = 1, n
        tmin =  FLMAX
        tmax = -FLMAX
        do k = 1, nz
          prok = pro(k)
          if (prok .eq. zero) then
            z(i,k) = zero
          else
            temp = log(prok) + z(i,k)
            tmin = min(tmin,temp)
            tmax = max(tmax,temp)
            z(i,k) = temp
          end if
        end do
        sum = zero
        do k = 1, nz
          if (pro(k) .ne. zero) then
            temp = z(i,k) - tmax
            if (temp .ge. SMALOG) then
              z(i,k) = exp(temp)
              sum = sum + z(i,k)
            else
              z(i,k) = zero
            end if
          end if
        end do
        hood = hood + (log(sum)+tmax)
        call dscal( nz, (one/sum), z(i,1), n)
      end do
      err  = abs(hold-hood)/(one+abs(hood))
      hold = hood

      if (err .gt. tol .and. iter .lt. maxi) goto 100

c     w(1) = rcmin
      tol  = err
      eps  = hood
      maxi = iter

      return
      end

      subroutine mevvvp( EQPRO, x, n, p, G, Vinv,
     *                   pshrnk, pmu, pscale, pdof,
     *                   z, maxi, tol, eps, mu, U, pro, w, S)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      logical EQPRO

      integer n, p, G, maxi

c     double precision pshrnk, pmu(p), pscale(p,p), pdof
      double precision pshrnk, pmu(*), pscale(p,*), pdof

      double precision Vinv, eps, tol

c     double precision x(n,p), z(n,G), w(p)
      double precision x(n,*), z(n,*), w(*)

c     double precision mu(p,G), U(p,p,G), pro(G), S(p,p)
      double precision mu(p,*), U(p,p,*), pro(*), S(p,*)

      integer nz, p1, iter, i, j, k, l, j1

      double precision piterm, hold, rcmin, rteps
      double precision temp, term, cs, sn, umin, umax
      double precision sum, sumz, detlog, const, hood, err
      double precision prok, tmin, tmax, ViLog
      double precision cmu, cgam, rmu, rgam, zsum

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision twolog
      parameter (twolog = 0.6931471805599453d0)

      double precision pilog
      parameter (pilog = 1.144729885849400d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision SMALOG
      parameter (SMALOG = -708.d0)

      double precision ddot, dlngam
      external ddot, dlngam

c------------------------------------------------------------------------------

      if (pshrnk .lt. zero) pshrnk = zero

      if (maxi .le. 0) return

      if (Vinv .gt. zero) then
        nz = G + 1
        ViLog = log(Vinv)
      else
        nz = G
        if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1)
      end if

      piterm = dble(p)*pi2log/two

      p1 = p + 1

      eps   = max(eps,zero)
      rteps = sqrt(eps)

      tol = max(tol,zero)

c     FLMAX = d1mach(2)
      hold = FLMAX/two
      hood = FLMAX
      err  = FLMAX

      iter = 0

 100  continue

      iter = iter + 1

      zsum = one

      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,k)
          end do
        end do
        do j = 1, p
          call dcopy( p, pscale(1,j), 1, S(1,j), 1)
        end do
        sumz = zero
        call dcopy( p, zero, 0, mu(1,k), 1)
        do i = 1, n
          temp = z(i,k)
          sumz = sumz + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
        end do
        if (.not. EQPRO) pro(k) = sumz / dble(n)
        zsum = min(sumz,zsum)
        if (sumz .gt. rteps) then
          call dscal( p, (one/sumz), mu(1,k), 1)
          do i = 1, n
            call dcopy( p, x(i,1), n, w, 1)
            call daxpy( p, (-one), mu(1,k), 1, w, 1)
            call dscal( p, sqrt(z(i,k)), w, 1)
            j = 1
            do j1 = 2, p
              call drotg( S(j,j), w(j), cs, sn)
              call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn)
              j = j1
            end do
            call drotg( S(p,p), w(p), cs, sn)
          end do
          call dcopy( p, pmu, 1, w, 1)
          call daxpy( p, (-one), mu(1,k), 1, w, 1)
          const = sumz+pshrnk
          temp  = (sumz*pshrnk)/const
          call dscal( p, sqrt(temp), w, 1)
          j = 1
          do j1 = 2, p
            call drotg( S(j,j), w(j), cs, sn)
            call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn)
            j = j1
          end do
          call drotg( S(p,p), w(p), cs, sn)
          do j = 1, p
            temp = pdof+sumz+dble(p)+two
            call dscal( j, one/sqrt(temp), S(1,j), 1)
          end do
          call dscal( p, sumz/const, mu(1,k), 1)
          call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1)
        else
          call dcopy( p, FLMAX, 0, z(1,k), 1)
        end if
        do j = 1, p
          do l = 1, p
            U(l,j,k) = S(l,j)
          end do
        end do
      end do

      if (zsum .le. rteps) then
        tol  = zsum
        eps  = -FLMAX
        maxi = iter
        return
      end if

      term = zero
      if (Vinv .gt. zero) then
        do i = 1, n
          term = term + z(i,nz)
        end do
        temp = term / dble(n)
        pro(nz) = temp
        call dcopy( n, ViLog, 0, z(1,nz), 1)
        if (EQPRO) then
          temp = (one - pro(nz))/dble(G)
          call dcopy( G, temp, 0, pro, 1)
        end if
      end if

      rcmin = FLMAX
      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,K)
          end do
        end do
        call absrng( p, S, p1, umin, umax)
        rcmin = min(umin/(one+umax),rcmin)
      end do
      if (rcmin .le. rteps) then
        tol  = rcmin
        eps  = FLMAX
        maxi = iter
        return
      end if

      rmu  = zero
      rgam = zero
      do k = 1, G
c       temp = pro(k)
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,k)
          end do
        end do
        detlog = zero
        do j = 1, p
          detlog = detlog + log(abs(S(j,j)))
        end do
        rmu  = rmu - detlog
        rgam = rgam - (pdof+dble(p)+one)*detlog
        const = piterm+detlog
        do i = 1, n
          call dcopy( p, x(i,1), n, w, 1)
          call daxpy( p, (-one), mu(1,k), 1, w, 1)
          call dtrsv( 'U', 'T', 'N', p, S, p, w, 1)
          sum = ddot( p, w, 1, w, 1)/two
c         z(i,k) = temp*exp(-(const+sum))
          z(i,k) = -(const+sum)
        end do
      end do

      hood = zero
      do i = 1, n
        tmin =  FLMAX
        tmax = -FLMAX
        do k = 1, nz
          prok = pro(k)
          if (prok .eq. zero) then
            z(i,k) = zero
          else
            temp = log(prok) + z(i,k)
            tmin = min(tmin,temp)
            tmax = max(tmax,temp)
            z(i,k) = temp
          end if
        end do
        sum = zero
        do k = 1, nz
          if (pro(k) .ne. zero) then
            temp = z(i,k) - tmax
            if (temp .ge. SMALOG) then
              z(i,k) = exp(temp)
              sum = sum + z(i,k)
            else
              z(i,k) = zero
            end if
          end if
        end do
        hood = hood + (log(sum)+tmax)
        call dscal( nz, (one/sum), z(i,1), n)
      end do
      err  = abs(hold-hood)/(one+abs(hood))
      hold = hood

      if (err .gt. tol .and. iter .lt. maxi) goto 100

c     w(1) = rcmin
      tol  = err
      eps  = hood
      maxi = iter

      cmu = dble(p)*(log(pshrnk) - pi2log)/two
      sum = zero
      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,k)
          end do
        end do
        call daxpy( p, (-one), mu(1,k), 1, pmu, 1)
        call dtrsv('U','T','N',p, S, p, pmu, 1)
        sum = sum + ddot( p, pmu, 1, pmu, 1)
      end do
      rmu = rmu - pshrnk*sum/two

      sum  = zero
      term = zero
      temp = zero
      do j = 1, p
        call dcopy( p, pscale(j,1), p, pmu, 1)
c       call dtrsv('U','T','N', p, U, p, pmu, 1)
        i = p-j+1
c       call dtrsv('U','T','N', i, U(j,j,k),i,pmu(j),1)
        call dtrsv('U','T','N', i, S(j,j), p, pmu(j), 1)
        sum  = sum + ddot(i, pmu(j), 1, pmu(j), 1)
        temp = temp + log(abs(pscale(j,j)))
        term = term + dlngam((pdof+one-dble(j))/two)
      end do
      rgam = rgam - sum/two

      const = -dble(p)*(pdof*twolog+(dble(p)-one)*pilog/two)
      cgam  = (const-pdof*temp)/two-term

      pdof = (dble(G)*cmu+rmu) + (dble(G)*cgam+rgam)

      return
      end

      subroutine msvvv ( x, z, n, p, G, w, mu, U, pro, S)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer n, p, G

c     double precision x(n,p), z(n,G), w(p)
      double precision x(n,*), z(n,*), w(*)

c     double precision mu(p,G), U(p,p,G), pro(G), S(p,p)
      double precision mu(p,*), U(p,p,*), pro(*), S(p,*)

      integer i, j, k, l, j1

      double precision sum, temp, cs, sn

      double precision zero, one
      parameter (zero = 0.d0, one = 1.d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

c------------------------------------------------------------------------------

      do k = 1, G
        call dcopy( p, zero, 0, mu(1,k), 1)
        do j = 1, p
c         call dcopy( j, zero, 0, U(1,j,k), 1)
          call dcopy( j, zero, 0, S(1,j), 1)
        end do
        sum = zero
        do i = 1, n
          temp = z(i,k)
          sum  = sum + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
        end do
        pro(k) = sum / dble(n)
        if (sum .ge. one .or. one .lt. sum*FLMAX) then
          call dscal( p, (one/sum), mu(1,k), 1)
          do i = 1, n
            call dcopy( p, x(i,1), n, w, 1)
            call daxpy( p, (-one), mu(1,k), 1, w, 1)
            call dscal( p, sqrt(z(i,k)), w, 1)
            j = 1
            do j1 = 2, p
              call drotg( S(j,j), w(j), cs, sn)
              call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn)
              j = j1
            end do
            call drotg( S(p,p), w(p), cs, sn)
          end do
          temp = sqrt(sum)
          if (temp .ge. one .or. one .lt. temp*FLMAX) then
            do j = 1, p
              call dscal( j, one/temp, S(1,j), 1)
            end do
          else
            do j = 1, p
              call dcopy( j, FLMAX, 0, S(1,j), 1)
            end do
          end if
        else
          call dcopy( p, FLMAX, 0, mu(1,k), 1)
        end if
        do j = 1, p
          do l = 1, p
            U(l,j,k) = S(l,j)
          end do
        end do
      end do

      return
      end

      subroutine msvvvp( x, z, n, p, G,
     *                   pshrnk, pmu, pscale, pdof,
     *                   w, mu, U, pro, S)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer n, p, G

c     double precision pshrnk, pmu(p), pscale(p,p), pdof
      double precision pshrnk, pmu(*), pscale(p,*), pdof

c     double precision x(n,p), z(n,G), w(p)
      double precision x(n,*), z(n,*), w(*)

c     double precision mu(p,G), U(p,p,G), pro(G), S(p,p)
      double precision mu(p,*), U(p,p,*), pro(*), S(p,*)

c------------------------------------------------------------------------------
c
c  x       double  (input) (n,p) matrix of observations.
c  z       double  (input/output) (n,G[+1]) conditional probabilities.
c  n       integer (input) number of observations.
c  p       integer (input) dimension of the data.
c  G       integer (input) number of Gaussian clusters in the mixture.
c  mu      double  (output) (p,G) mean for each group.
c  U       double  (output) (p,p,G)
c  pro     double  (output) (G) mixing proportions (used even if equal).
c  w       double  (scratch) (max(p,G))

      integer i, j, k, l, j1

      double precision sumz, temp, cs, sn, const

      double precision zero, one
      parameter (zero = 0.d0, one = 1.d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

c------------------------------------------------------------------------------

      do k = 1, G
        do j = 1, p
          do l = 1, p
            S(l,j) = U(l,j,k)
          end do
        end do
        do j = 1, p
          call dcopy( p, pscale(1,j), 1, S(1,j), 1 )
        end do
        sumz = zero
        call dcopy( p, zero, 0, mu(1,k), 1)
        do i = 1, n
          temp = z(i,k)
          sumz = sumz + temp
          call daxpy( p, temp, x(i,1), n, mu(1,k), 1)
        end do
        pro(k) = sumz / dble(n)
        if (sumz .ge. one .or. one .lt. sumz*FLMAX) then
          call dscal( p, (one/sumz), mu(1,k), 1)
          do i = 1, n
            call dcopy( p, x(i,1), n, w, 1)
            call daxpy( p, (-one), mu(1,k), 1, w, 1)
            call dscal( p, sqrt(z(i,k)), w, 1)
            j = 1
            do j1 = 2, p
              call drotg( S(j,j), w(j), cs, sn)
              call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn)
              j = j1
            end do
            call drotg( S(p,p), w(p), cs, sn)
          end do
          call dcopy( p, pmu, 1, w, 1)
          call daxpy( p, (-one), mu(1,k), 1, w, 1)
          const = sumz+pshrnk
          temp  = (sumz*pshrnk)/const
          call dscal( p, sqrt(temp), w, 1)
          j = 1
          do j1 = 2, p
            call drotg( S(j,j), w(j), cs, sn)
            call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn)
            j = j1
          end do
          call drotg( S(p,p), w(p), cs, sn)
          temp = pdof+sumz+dble(p)+one
          if (pshrnk .gt. zero) temp = temp + one
          do j = 1, p
            call dscal( j, one/sqrt(temp), S(1,j), 1)
          end do
          call dscal( p, sumz/const, mu(1,k), 1)
          call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1)
        else
          call dcopy( p, FLMAX, 0, mu(1,k), 1)
        end if
        do j = 1, p
          do l = 1, p
            U(l,j,k) = S(l,j)
          end do
        end do
      end do

      return
      end

      subroutine mvn1d ( x, n, mu, sigsq, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n
      integer n

      double precision mu, sigsq, hood

c     double precision x(n)
      double precision x(*)

c------------------------------------------------------------------------------
c
c  x       double  (input) (n) vector of observations (destroyed).
c  n       integer (input) number of observations.
c  mu      double  (output) mean.
c  sigsq   double  (output) variance.
c  hood    double  (output) loglikelihood.

      double precision dn

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision ddot
      external ddot

c------------------------------------------------------------------------------

      dn = dble(n)

      mu = ddot( n, one/dn, 0, x, 1)

      sigsq = zero
      call daxpy( n, (-one), mu, 0, x, 1)
      sigsq = ddot( n, x, 1, x, 1)/dn

      if (sigsq .eq. zero) then
        hood = FLMAX
      else
        hood = -dn*(pi2log + (one + log(sigsq)))/two
      end if

      return
      end

      subroutine mvn1p ( x, n, pshrnk, pmu, pscale, pdof,
     *                   mu, sigsq, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

      integer n

      double precision pshrnk, pmu, pscale, pdof
      double precision mu, sigsq, hood

c     double precision x(n)
      double precision x(*)

      integer i

      double precision dn, scl, const, term, temp, xbar
      double precision cmu, cgam, rmu, rgam

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision ddot, dlngam
      external ddot, dlngam

c------------------------------------------------------------------------------

      if (pshrnk .lt. zero) pshrnk = zero

      dn = dble(n)

      xbar = ddot( n, one/dn, 0, x, 1)

      const = pshrnk + dn

      mu = (dn/const)*xbar + (pshrnk/const)*pmu

      sigsq = zero
      do i = 1, n
        temp  = xbar - x(i)
        sigsq = sigsq + temp*temp
      end do

      temp  = xbar - pmu
      sigsq = sigsq + pscale + dn*(pshrnk/const)*(temp*temp)

      temp = pdof + dn + two
      if (pshrnk .gt. zero) temp = temp + one
      sigsq = sigsq / temp

      if (sigsq .eq. zero) then
        hood = FLMAX
      else
        call daxpy( n, (-one), mu, 0, x, 1)
        temp = ddot( n, x, 1, x, 1)
        if (sigsq .lt. one .and. temp .ge. sigsq*FLMAX) then
          hood = FLMAX
          return
        end if
        temp = temp/sigsq
        hood = -(dn*(pi2log + log(sigsq)) + temp)/two
      end if

      if (pshrnk .gt. zero) then
        cmu  = (pi2log-log(pshrnk))/two
        term = pdof/two
        cgam = term*log(pscale/two) - dlngam(term)
        temp  = pmu - mu
        const = log(sigsq)
        rmu  = -(const - (pshrnk/sigsq)*(temp*temp))/two
        rgam = -(term+one)*const - (pscale/sigsq)/two
        pdof = (cmu+rmu) + (cgam+rgam)
      else
        pdof = FLMAX
      end if

      return
      end

      subroutine mnxiip( x, n, p, pshrnk, pmu, pscale, pdof,
     *                   mu, sigsq, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n, p
      integer n, p

c     double precision pshrnk, pmu(p), pscale, pdof
      double precision pshrnk, pmu(*), pscale, pdof

      double precision sigsq, hood

c     double precision x(n,p), mu(p)
      double precision x(n,*), mu(*)

      integer i, j

      double precision dnp, scl, temp, term, sum
      double precision dmudmu, pmupmu, cmu, cgam, rmu, rgam

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision ddot, dlngam
      external ddot, dlngam

c------------------------------------------------------------------------------

      dnp = dble(n*p)

      scl = one/dble(n)
      do j = 1, p
        mu(j) = ddot( n, scl, 0, x(1,j), 1)
      end do

      sum = zero
      do i = 1, n
        do j = 1, p
          temp = x(i,j) - mu(j)
          sum  = sum + temp*temp
        end do
      end do

      pmupmu = ddot(p,pmu,1,pmu,1)
      dmudmu = ddot(p,mu,1,mu,1)

      temp = dmudmu + pmupmu
      temp = temp - two*ddot(p,mu,1,pmu,1)

      term = pshrnk + dble(n)
      scl  = (pshrnk*dble(n))/term

      sigsq = pscale + scl*temp + sum

      temp = pdof + dble(n*p) + two
      if (pshrnk .gt. zero) temp = temp + dble(p)
      sigsq = sigsq/temp

      call dscal( p, dble(n)/term, mu, 1)
      call daxpy( p, pshrnk/term, pmu, 1, mu, 1)

      if (sigsq .eq. zero) then
        hood = FLMAX
      else
        sum = zero
        do i = 1, n
          do j = 1, p
            temp = x(i,j) - mu(j)
            sum  = sum + temp*temp
          end do
        end do
        hood = -(sum/sigsq + dnp*(pi2log + log(sigsq)))/two
      end if

      if (pshrnk .gt. zero) then
        dmudmu = ddot(p,mu,1,mu,1)
        cmu  = dble(p)*(log(pshrnk)-pi2log)/two
        temp = (dmudmu+pmupmu) - two*ddot(p,pmu,1,mu,1)
        term = log(sigsq)
        rmu  = -(dble(p)*term + (pshrnk*temp)/sigsq)/two
        temp = pdof/two
        cgam = temp*log(pscale/two) - dlngam(temp)
        rgam = -(temp+one)*term - pscale/(two*sigsq)
        pdof = (cmu+rmu) + (cgam+rgam)
      else
        pdof = FLMAX
      end if

      return
      end

      subroutine mvnxii( x, n, p, mu, sigsq, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n, p
      integer n, p

      double precision sigsq, hood

c     double precision x(n,p), mu(p)
      double precision x(n,*), mu(*)

      integer j

      double precision dnp, scl

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision ddot
      external ddot

c------------------------------------------------------------------------------

      dnp = dble(n*p)

      scl = one/dble(n)
      do j = 1, p
        mu(j) = ddot( n, scl, 0, x(1,j), 1)
      end do

      sigsq = zero
      do j = 1, p
        call daxpy( n, (-one), mu(j), 0, x(1,j), 1)
        sigsq = sigsq + ddot( n, x(1,j), 1, x(1,j), 1)
      end do
      sigsq = sigsq/dnp

      if (sigsq .eq. zero) then
        hood = FLMAX
      else
        hood = -dnp*(pi2log + (one + log(sigsq)))/two
      end if

      return
      end

      subroutine mnxxip( x, n, p, pshrnk, pmu, pscale, pdof,
     *                   mu, scale, shape, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n, p
      integer n, p

c     double precision pshrnk, pmu(p), pscale, pdof
      double precision pshrnk, pmu(*), pscale, pdof

      double precision scale, hood

c     double precision x(n,p), mu(p), shape(p)
      double precision x(n,*), mu(*), shape(*)

      integer i, j

      double precision sum, temp, smin, smax
      double precision term, const, scl

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision SMALOG, BIGLOG
      parameter (SMALOG = -708.d0, BIGLOG = 709.d0)

      double precision ddot
      external ddot

c------------------------------------------------------------------------------

      temp = one/dble(n)
      do j = 1, p
        mu(j)    = ddot( n, temp, 0, x(1,j), 1)
        shape(j) = zero
      end do

      do j = 1, p
        sum = zero
        do i = 1, n
          temp = x(i,j) - mu(j)
          sum  = sum + temp*temp
        end do
        shape(j) = shape(j) + sum
      end do

      term = pshrnk + dble(n)
      scl  = (pshrnk*dble(n))/term
      do j = 1, p
        temp = pmu(j) - mu(j)
        shape(j) = shape(j) + scl*(temp*temp) + pscale
      end do

      call dscal( p, dble(n)/term, mu, 1)
      call daxpy( p, pshrnk/term, pmu, 1, mu, 1)

      call sgnrng(p, shape, 1, smin, smax)

      if (smin .le. zero) then
        call dcopy( p, FLMAX, 0, shape, 1)
        scale = zero
        hood  = FLMAX
        return
      end if

      sum = zero
      do j = 1, p
        sum = sum + log(shape(j))
      end do
      temp = sum/dble(p)

      if (temp .ge. BIGLOG) then
        call dcopy( p, FLMAX, 0, shape, 1)
        scale = FLMAX
        hood  = FLMAX
        return
      end if

      if (temp .le. SMALOG) then
        call dcopy( p, FLMAX, 0, shape, 1)
        scale = zero
        hood  = FLMAX
        return
      end if

      temp = exp(temp)

      term = pdof + dble(n) + two
      if (pshrnk .gt. zero) term = term + one
      scale = temp/term

      if (temp .lt. one .and. one .ge. temp*FLMAX) then
        call dcopy( p, FLMAX, 0, shape, 1)
        hood = FLMAX
        return
      end if

      call dscal( p, one/temp, shape, 1)

      const = dble(p)*(pi2log+log(scale))

      hood = zero
      do i = 1, n
        sum = zero
        do j = 1, p
          temp = x(i,j) - mu(j)
          sum  = sum + (temp*temp)/shape(j)
        end do
        hood = hood - (const+(sum/scale))/two
      end do

c     log posterior computation not yet available
      pdof = FLMAX

      return
      end

      subroutine mvnxxi( x, n, p, mu, scale, shape, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n, p
      integer n, p

      double precision scale, hood

c     double precision x(n,p), mu(p), shape(p)
      double precision x(n,*), mu(*), shape(*)

      integer i, j

      double precision dn, scl, sum, temp, smin, smax

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision SMALOG, BIGLOG
      parameter (SMALOG = -708.d0, BIGLOG = 709.d0)

      double precision ddot
      external ddot

c------------------------------------------------------------------------------

      dn  = dble(n)
      scl = one/dn
      do j = 1, p
        mu(j)    = ddot( n, scl, 0, x(1,j), 1)
        shape(j) = zero
      end do

      do j = 1, p
        sum = zero
        do i = 1, n
          temp = x(i,j) - mu(j)
          sum  = sum + temp*temp
        end do
        shape(j) = shape(j) + sum
      end do

      call sgnrng(p, shape, 1, smin, smax)

      if (smin .le. zero) then
        call dcopy( p, FLMAX, 0, shape, 1)
        scale = zero
        hood  = FLMAX
        return
      end if

      sum = zero
      do j = 1, p
        sum = sum + log(shape(j))
      end do
      temp = sum/dble(p)

      if (temp .gt. BIGLOG) then
        call dcopy( p, FLMAX, 0, shape, 1)
        scale = FLMAX
        hood  = FLMAX
        return
      end if

      if (temp .lt. SMALOG) then
        call dcopy( p, FLMAX, 0, shape, 1)
        scale = zero
        hood  = FLMAX
        return
      end if

      temp  = exp(temp)
      scale = temp/dn

      if (temp .lt. one .and. one .ge. temp*FLMAX) then
        call dcopy( p, FLMAX, 0, shape, 1)
        hood = FLMAX
        return
      end if

      call dscal( p, one/temp, shape, 1)

      hood = -dble(n*p)*(one + pi2log + log(scale))/two

      return
      end

      subroutine mnxxxp( x, n, p, w,
     *                   pshrnk, pmu, pscale, pdof,
     *                   mu, U, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n, p
      integer n, p

c     double precision pshrnk, pmu(p), pscale(p,p), pdof
      double precision pshrnk, pmu(*), pscale(p,*), pdof

      double precision hood

c     double precision x(n,p), w(p), mu(p), U(p,p)
      double precision x(n,*), w(*), mu(*), U(p,*)

      integer i, j, j1

      double precision dnp, scl, detlog, sum, term, temp
      double precision umin, umax, cs, sn, const
      double precision cmu, cgam, rmu, rgam

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision twolog
      parameter (twolog = 0.6931471805599453d0)

      double precision pilog
      parameter (pilog = 1.144729885849400d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision ddot, dlngam
      external ddot, dlngam

c------------------------------------------------------------------------------

      dnp = dble(n*p)

      scl = one/dble(n)
      do j = 1, p
        mu(j) = ddot( n, scl, 0, x(1,j), 1)
        call dcopy( p, pscale(1,j), 1, U(1,j), 1)
      end do

c     mu contains ybar; U contains Cholesky factor of inverse Wishart scale

      do i = 1, n
        call dcopy( p, x(i,1), n, w, 1)
        call daxpy( p, (-one), mu, 1, w, 1)
        j = 1
        do j1 = 2, p
          call drotg( U(j,j), w(j), cs, sn)
          call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn)
          j = j1
        end do
        call drotg( U(p,p), w(p), cs, sn)
      end do

      call dcopy( p, pmu, 1, w, 1)
      call daxpy( p, (-one), mu, 1, w, 1)
      term = (pshrnk*dble(n))/(pshrnk+dble(n))
      call dscal( p, sqrt(term), w, 1)
      j = 1
      do j1 = 2, p
        call drotg( U(j,j), w(j), cs, sn)
        call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn)
        j = j1
      end do
      call drotg( U(p,p), w(p), cs, sn)

      scl = pdof + dble(n+p+1)
      if (pshrnk .gt. zero) scl = scl + one
      scl = one/sqrt(scl)
      do j = 1, p
        call dscal( j, scl, U(1,j), 1)
      end do

      term = pshrnk + dble(n)
      call dscal( p, dble(n)/term, mu, 1)
      call daxpy( p, pshrnk/term, pmu, 1, mu, 1)

      call absrng( p, U, p+1, umin, umax)

c     rcond = umin / (one + umax)

      if (umin .eq. zero) then
        hood = FLMAX
        return
      end if

      detlog = zero
      do j = 1, p
        detlog = detlog + log(abs(U(j,j)))
      end do

      const = dble(n)*(detlog + dble(p)*pi2log/two)

      sum = zero
      do i = 1, n
        call dcopy( p, x(i,1), n, w, 1)
        call daxpy( p, (-one), mu, 1, w, 1)
        call dtrsv( 'U', 'T', 'N', p, U, p, w, 1)
        sum = sum + ddot(p, w, 1, w, 1)
      end do

      hood = -(const+sum/two)

      cmu = dble(p)*(log(pshrnk) - pi2log)/two

      call dcopy( p, pmu, 1, w, 1)
      call daxpy( p, (-one), mu, 1, w, 1)
      call dtrsv( 'U', 'T', 'N', p, U, p, w, 1)
      temp = ddot( p, w, 1, w, 1)

      sum  = zero
      term = zero
      do j = 1, p
        term = term + dlngam((pdof+dble(1-j))/two)
        call dcopy( p, pscale(j,1), p, pmu, 1)
c       call dtrsv('U','T','N', p, U, p, pmu, 1)
        i = p-j+1
c       call dtrsv('U','T','N', i, U(j,j),i,pmu(j),1)
        call dtrsv('U','T','N', i, U(j,j),p,pmu(j),1)
        sum = sum + ddot(i, pmu(j), 1, pmu(j), 1)
      end do

      if (pshrnk .gt. zero) then
        rmu   = -(detlog+pshrnk*temp/two)
        const = -dble(p)*(pdof*twolog+(dble(p)-one)*pilog/two)
        cgam  = (const/two-pdof*detlog) - term
        rgam  = -((pdof+dble(p)+one)*detlog + sum/two)
        pdof  = (cmu+cgam) + (rmu+rgam)
      else
        pdof = FLMAX
      end if

      return
      end

      subroutine mvnxxx( x, n, p, mu, U, hood)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

      implicit NONE

c     integer n, p
      integer n, p

      double precision hood

c     double precision x(n,p), mu(p), U(p,p)
      double precision x(n,*), mu(*), U(p,*)

      integer i, j, j1

      double precision dn, dnp, scl
      double precision umin, umax, cs, sn

      double precision zero, one, two
      parameter (zero = 0.d0, one = 1.d0, two = 2.d0)

      double precision pi2log
      parameter (pi2log = 1.837877066409345d0)

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision ddot
      external ddot

c------------------------------------------------------------------------------

      dn  = dble(n)
      dnp = dble(n*p)

      scl = one/dn
      do j = 1, p
        mu(j) = ddot( n, scl, 0, x(1,j), 1)
        call dcopy( p, zero, 0, U(1,j), 1)
      end do

      do i = 1, n
        call daxpy( p, (-one), mu, 1, x(i,1), n)
        j = 1
        do j1 = 2, p
          call drotg( U(j,j), x(i,j), cs, sn)
          call drot( p-j, U(j,j1), p, x(i,j1), n, cs, sn)
          j = j1
        end do
        call drotg( U(p,p), x(i,p), cs, sn)
      end do

      scl = sqrt(scl)
      do j = 1, p
        call dscal( j, scl, U(1,j), 1)
      end do

      call absrng( p, U, p+1, umin, umax)

c     rcond = umin / (one + umax)

      if (umin .eq. zero) then
        hood = FLMAX
      else
        hood = zero
        do j = 1, p
          hood = hood + log(abs(U(j,j)))
        end do
        hood = -dn*(hood + dble(p)*(pi2log + one)/two)
      end if

c     do j = 1, p
c       do i = 1, j
c         x(i,j) = ddot(i,U(1,i),1,U(1,j),1)
c         if (i .ne. j) x(j,i) = x(i,j)
c       end do
c     end do
c     do j = 1, p
c       call dcopy( p, x(1,j), 1, U(1,j), 1)
c     end do

      return
      end

c Luca: add to check if compile ok
      subroutine hceee ( x, n, p, ic, ng, ns, io, jo, v, s, u, r)

c This function is part of the MCLUST software described at
c       http://www.stat.washington.edu/mclust
c Copyright information and conditions for use of MCLUST are given at
c       http://www.stat.washington.edu/mclust/license.txt

c Gaussian model-based clustering algorithm in which clusters share a common
c variance (shape, volume, and orientation are the same for all clusters).

      implicit NONE

      integer n, p, ic(n), ng, ns, io(*), jo(*)

c     double precision x(n,p), v(p), s(p,p), u(p,p), r(p,p)
      double precision x(n,*), v(*), s(*), u(*), r(*)

c------------------------------------------------------------------------------
c
c  x       double  (input/output) On input, the (n by p) matrix containing
c                  the observations. On output, the first two columns
c                  and ns rows contain the determinant and trace of the
c                  sum of the sample cross product matrices. Columns 3 and 4
c                  contain the merge indices if p .ge. 4
c  n       integer (input) number of observations
c  p       integer (input) dimension of the data
c  ic      integer (input) (n) Initial partitioning of the data; groups must
c                  be numbered consecutively.
c  ng      integer (input) Number of groups in initial partition.
c  ns      integer (input) Desired number of stages of clustering.
c  io,jo   integer (output [p .le. 3]) If p .lt. 3, both io and jo must be of
c                  length ns and contain the indices of the merged pairs on
c                  output. If p .eq. 3, jo must be of length ns and contains
c                  an index of each merged pair on output. Otherwise io and
c                  jo are not used and can be of length 1.
c  v       double  (scratch/output) (p) On output, algorithm breakpoints;
c                  tells where the algorithm switches from using trace
c                  to trace + det, and from trace + det to det as criterion.
c  s      double  (scratch/output) (p,p) On output the first column contains
c                 the initial trace and determinant of the sum of sample
c                 cross product matrices.
c  u,r    double  (scratch) (p,p)

      integer q, i, j, k, l, m, i1, i2, l1, l2
      integer ni, nj, nij, lw, ls, lg, ici, icj
      integer nopt, iopt, jopt, idet, jdet, ndet

      double precision DELOG
      double precision ri, rj, rij, dij, tij, zij
      double precision trc0, trc1, trcw, det0, det1, detw
      double precision si, sj, siop, sjop, sidt, sjdt
      double precision dopt, zopt, dijo, tijo, tdet

      double precision zero, one
      parameter (zero = 0.d0, one = 1.d0)

      double precision ddot, detmc2
      external ddot, detmc2

      double precision FLMAX
      parameter (FLMAX = 1.7976931348623157d308)

      double precision EPSMIN
      parameter (EPSMIN = 1.1102230246251565d-16)

c------------------------------------------------------------------------------

      i1 = 0
      i2 = 0

      lw = p*p

c     call intpr('ic', -1, ic, n)

c form scaled column sums
      call dscal( n*p, one/sqrt(dble(n)), x, 1)

      si = one/sqrt(dble(p))
      sj = si / dble(n)
      call dcopy( p, zero, 0, v, 1)
      do k = 1, n
        call daxpy( p, sj, x(k,1), n, v, 1)
      end do

      trc0 = zero
      call dcopy( lw, zero, 0, r, 1)
      do k = 1, n
        call dcopy( p, v, 1, s, 1)
        call daxpy( p, (-si), x(k,1), n, s, 1)
        trc0 = trc0 + ddot( p, s, 1, s, 1)
        call mclrup( (k+1), p, s, r, p)
      end do

      det0 = detmc2( p, r)

      DELOG = log(trc0+EPSMIN)

c group heads should be first among rows of x

      i = 1
      j = 2

 1    continue

      icj = ic(j)

      if (icj .ne. j) goto 2

      if (j .eq. ng) goto 3

      i = j
      j = j + 1
      goto 1

 2    continue

      k = i
      m = j + 1
      do j = m, n
        icj = ic(j)
        if (icj .gt. k) then
          k = k + 1
          call dswap( p, x(k,1), n, x(j,1), n)
          ic(j) = ic(k)
          ic(k) = icj
        end if
      end do

 3    continue

c     call intpr( 'ic', -1, ic, n)

      trcw = zero
      call dcopy( lw, zero, 0, r, 1)

      q = 1
      do j = 1, n
        i = ic(j)
        if (i .ne. j) then
c update trace and Cholesky factor as if a merge
          q = q + 2
          ni = ic(i)
          ri = dble(ni)
          rij = dble(ni+1)
          sj = sqrt(one/rij)
          si = sqrt(ri)*sj
          call dcopy( p, x(i,1), n, v, 1)
          call dscal( p, sj, v, 1)
          call daxpy( p, (-si), x(j,1), n, v, 1)
          trcw = trcw + ddot(p, v, 1, v, 1)
          call mclrup( q, p, v, r, p)
          ic(j) = 0
          ic(i) = ic(i) + 1
          call dscal( p, si, x(i,1), n)
          call daxpy( p, sj, x(j,1), n, x(i,1), n)
c         call dcopy( p, FLMAX, 0, x(j,1), n)
c update column sum in jth row
        else
          ic(j) = 1
        end if
      end do

c     call intpr('ic', -1, ic, n)

      trc1 = trcw

      if (q .lt. p) then
        detw = -FLMAX
      else
        detw = detmc2( p, r)
      end if

      det1 = detw

      ls = 1
      lg = ng

      l1 = 0
      l2 = 0

 100  continue

      if (q .ge. p) then
c       if (.false.)
c    *    call intpr('PART 2 --------------------------', -1, ls, 0)
        if (detw .lt. DELOG) then
          goto 200
        else
          goto 300
        end if
      end if

      dopt = FLMAX

      do j = 2, lg
        nj = ic(j)
        rj = dble(nj)
        do i = 1, (j-1)
          ni = ic(i)
          ri = dble(ni)
          nij = ni + nj
          rij = dble(nij)
          si = sqrt(ri/rij)
          sj = sqrt(rj/rij)
          call dcopy( p, x(i,1), n, s, 1)
          call dscal( p, sj, s, 1)
          call daxpy( p, (-si), x(j,1), n, s, 1)
          tij = trcw + ddot(p, s, 1, s, 1)
          zij = max(tij,EPSMIN)
          if (zij .le. dopt) then
            dopt = zij
            nopt = nij
            siop = si
            sjop = sj
            iopt = i
            jopt = j
            call dcopy( p, s, 1, v, 1)
          end if
        end do
      end do

      trcw = dopt

      if (ls .eq. ns) goto 900

      call dscal( p, siop, x(iopt,1), n)
      call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n)

      if (jopt .ne. lg) then
        call dcopy( p, x(lg,1), n, x(jopt,1), n)
        ic(jopt) = ic(lg)
      end if

      ic(iopt) = nopt

      x(lg,1) = detw
      x(lg,2) = trcw

      if (p .ge. 4) then
        x(lg,3) = dble(iopt)
        x(lg,4) = dble(jopt)
      else if (p .eq. 3) then
        x(lg,3) = dble(iopt)
        jo(ls)  = jopt
      else
        io(ls) = iopt
        jo(ls) = jopt
      end if

c update the Cholesky factor
      q = q + 1
      call mclrup( q, p, v, r, p)

      ls = ls + 1
      lg = lg - 1

      goto 100

 200  continue

      q = q + 1

c     call intpr('ic', -1, ic, n)

      dopt = FLMAX
      zopt = FLMAX

      do j = 2, lg
        nj = ic(j)
        rj = dble(nj)
        do i = 1, (j-1)
          ni = ic(i)
          ri = dble(ni)
          nij = ni + nj
          rij = dble(nij)
          si = sqrt(ri/rij)
          sj = sqrt(rj/rij)
          call dcopy( p, x(i,1), n, v, 1)
          call dscal( p, sj, v, 1)
          call daxpy( p, (-si), x(j,1), n, v, 1)
          tij = trcw + ddot(p, v, 1, v, 1)
          call dcopy( lw, r, 1, u, 1)
          call mclrup( q, p, v, u, p)
          dij = detmc2( p, u)
          if (dij .le. dopt) then
            dopt = dij
            tdet = tij
            ndet = nij
            sidt = si
            sjdt = sj
            idet = i
            jdet = j
          end if
          if (tij .eq. zero) then
            zij = -FLMAX
          else
            zij = max(tij,EPSMIN)
            if (dij .eq. (-FLMAX)) then
              zij = log(zij)
            else if (dij .le. zero) then
              zij = log(exp(dij) + zij)
            else
              zij = log(one + zij*exp(-dij)) + dij
            end if
          end if
          if (zij .le. zopt) then
            zopt = zij
            dijo = dij
            tijo = tij
            nopt = nij
            siop = si
            sjop = sj
            iopt = i
            jopt = j
            call dcopy( lw, u, 1, s, 1)
          end if
        end do
      end do

      if (dopt .lt. DELOG) then
        if (l1 .eq. 0) l1 = ls
        trcw = tijo
        detw = dijo
        call dcopy( lw, s, 1, r, 1)
      else
        l2 = ls
        trcw = tdet
        detw = dopt
        siop = sidt
        sjop = sjdt
        nopt = ndet
        iopt = idet
        jopt = jdet
        call dcopy( p, x(iopt,1), n, v, 1)
        call dscal( p, sjop, v, 1)
        call daxpy( p, (-siop), x(jopt,1), n, v, 1)
        call mclrup( q, p, v, r, p)
      end if

      if (ls .eq. ns) goto 900

      call dscal( p, siop, x(iopt,1), n)
      call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n)

      if (jopt .ne. lg) then
        call dcopy( p, x(lg,1), n, x(jopt,1), n)
        ic(jopt) = ic(lg)
      end if

      ic(iopt) = nopt

      x(lg,1) = detw
      x(lg,2) = trcw

      if (p .ge. 4) then
        x(lg,3) = dble(iopt)
        x(lg,4) = dble(jopt)
      else if (p .eq. 3) then
        x(lg,3) = dble(iopt)
        jo(ls)  = jopt
      else
        io(ls) = iopt
        jo(ls) = jopt
      end if

      ls = ls + 1
      lg = lg - 1

      if (detw .ge. DELOG) then
c       if (.false.)
c    *    call intpr('PART 3 --------------------------', -1, ls, 0)
        goto 300
      end if

      goto 200

 300  continue

      q = q + 1

      detw = FLMAX

      do j = 2, lg
        nj = ic(j)
        rj = dble(nj)
        do i = 1, (j-1)
          ni = ic(i)
          ri = dble(ni)
          nij = ni + nj
          rij = dble(nij)
          si = sqrt(ri/rij)
          sj = sqrt(rj/rij)
          call dcopy( p, x(i,1), n, v, 1)
          call dscal( p, sj, v, 1)
          call daxpy( p, (-si), x(j,1), n, v, 1)
          call dcopy( lw, r, 1, u, 1)
          call mclrup( q, p, v, u, p)
          dij = detmc2( p, u)
          if (dij .le. detw) then
            detw = dij
            nopt = nij
            siop = si
            sjop = sj
            iopt = i
            jopt = j
            call dcopy( lw, u, 1, s, 1)
          end if
        end do
      end do

c update the trace
      call dcopy( p, x(iopt,1), n, v, 1)
      call dscal( p, sjop, v, 1)
      call daxpy( p, (-siop), x(jopt,1), n, v, 1)
      trcw = trcw + ddot( p, v, 1, v, 1)

      if (ls .eq. ns) goto 900

      call dcopy( lw, s, 1, r, 1)

      call dscal( p, siop, x(iopt,1), n)
      call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n)

      if (jopt .ne. lg) then
        call dcopy( p, x(lg,1), n, x(jopt,1), n)
        ic(jopt) = ic(lg)
      end if

      ic(iopt) = nopt

      x(lg,1) = detw
      x(lg,2) = trcw

      if (p .ge. 4) then
        x(lg,3) = dble(iopt)
        x(lg,4) = dble(jopt)
      else if (p .eq. 3) then
        x(lg,3) = dble(iopt)
        jo(ls)  = jopt
      else
        io(ls) = iopt
        jo(ls) = jopt
      end if

      ls = ls + 1
      lg = lg - 1

      goto 300

 900  continue

      x(lg,1) = detw
      x(lg,2) = trcw

      if (p .ge. 4) then
        if (iopt .lt. jopt) then
          x(lg,3) = dble(iopt)
          x(lg,4) = dble(jopt)
        else
          x(lg,3) = dble(jopt)
          x(lg,4) = dble(iopt)
        end if
      else if (p .eq. 3) then
        if (iopt .lt. jopt) then
          x(lg,3) = dble(iopt)
          jo(ls)  = jopt
        else
          x(lg,3) = dble(jopt)
          jo(ls)  = iopt
        end if
      else
        if (iopt .lt. jopt) then
          io(ls) = iopt
          jo(ls) = jopt
        else
          io(ls) = jopt
          jo(ls) = iopt
        end if
      end if

c decode

      do k = 1, ng
        ic(k) = k
      end do

      m = ng + 1

      if (p .ge. 4) then
        l = m
        do k = 1, ns
          l = l - 1
          i = int(x(l,3))
          ici = ic(i)
          j = int(x(l,4))
          icj = ic(j)
          if (ici .gt. icj) ic(i) = icj
          ic(j) = ic(m - k)
          if (ici .lt. icj) then
            x(l,3) = dble(ici)
            x(l,4) = dble(icj)
          else
            x(l,3) = dble(icj)
            x(l,4) = dble(ici)
          end if
        end do
      else if (p .eq. 3) then
        l = m
        do k = 1, ns
          l = l - 1
          i = int(x(l,3))
          ici = ic(i)
          j = jo(k)
          icj = ic(j)
          if (ici .gt. icj) ic(i) = icj
          ic(j) = ic(m - k)
          if (ici .lt. icj) then
            x(l,3) = dble(ici)
            jo(k)  = icj
          else
            x(l,3) = dble(icj)
            jo(k)  = ici
          end if
        end do
      else
        do k = 1, ns
          i = io(k)
          ici = ic(i)
          j = jo(k)
          icj = ic(j)
          if (ici .gt. icj) ic(i) = icj
          ic(j) = ic(m - k)
          if (ici .lt. icj) then
            io(k) = ici
            jo(k) = icj
          else
            io(k) = icj
            jo(k) = ici
          end if
        end do
      end if

      l = 2
      m = min(p,4)
      do k = ng, lg, -1
        if (k .le. l) goto 950
        call dswap( m, x(k,1), n, x(l,1), n)
        l = l + 1
      end do

 950  continue

      x(1,1) = det1
      x(1,2) = trc1

      v(1) = dble(l1)
      v(2) = dble(l2)

      s(1) = det0
      s(2) = trc0

      return
      end
mclust/NAMESPACE0000644000176200001440000001262113107024377013006 0ustar liggesusers
useDynLib(mclust, .registration = TRUE)

# Export all names
# exportPattern(".")

# Import all packages listed as Imports or Depends
import("stats", "utils", "graphics", "grDevices")

# export(.mclust)
export(mclust.options, emControl)

export(em, emE, emEEE, emEEI, emEEV, emEII, emEVI, emV, emVEI, emVEV,
       emVII, emVVI, emVVV, emEVV, emVEE, emEVE, emVVE,
       emX, emXII, emXXI, emXXX)
export(me, meE, meEEE, meEEI, meEEV, meEII, meEVI, meV, meVEI, meVEV,
       meVII, meVVI, meVVV, meEVV, meVEE, meEVE, meVVE,
       meX, meXII, meXXI, meXXX)
export(sim, simE, simEEE, simEEI, simEEV, simEII, simEVI, simV, simVEI,
       simVEV, simVII, simVVI, simVVV, simEVV, simVEE, simEVE, simVVE)
export(estep, estepE, estepEEE, estepEEI, estepEEV, estepEII, estepEVI,
       estepV, estepVEI, estepVEV, estepVII, estepVVI, estepVVV,
       estepEVV, estepVEE, estepEVE, estepVVE)
export(mstep, mstepE, mstepEEE, mstepEEI, mstepEEV, mstepEII, mstepEVI,
       mstepV, mstepVEI, mstepVEV, mstepVII, mstepVVI, mstepVVV,
       mstepEVV, mstepVEE, mstepEVE, mstepVVE)
export(mvn, mvnX, mvnXII, mvnXXI, mvnXXX)
export(cdens, cdensE, cdensEEE, cdensEEI, cdensEEV, cdensEII, cdensEVI,
       cdensV, cdensVEI, cdensVEV, cdensVII, cdensVVI, cdensVVV,
       cdensEVV, cdensVEE, cdensEVE, cdensVVE,
       cdensX, cdensXII, cdensXXI, cdensXXX)
export(bic, pickBIC, nVarParams, nMclustParams)
export(map, unmap, partconv, partuniq, errorBars)
export(mclustModel, mclustModelNames, checkModelName, mclustVariance)
export(decomp2sigma, sigma2decomp)
export(imputeData, imputePairs, matchCluster, majorityVote)
export(mapClass, classError, adjustedRandIndex)
export(mclust1Dplot, mclust2Dplot, mvn2plot, surfacePlot, uncerPlot)
export(clPairs, clPairsLegend, coordProj, randProj)
export(priorControl, defaultPrior, hypvol)

export(hc, print.hc)
S3method("print", "hc")
export(hcE, hcEEE, hcEII, hcV, hcVII, hcVVV)
export(hclass, randomPairs)

export(mclustBIC, print.mclustBIC, summary.mclustBIC,
       print.summary.mclustBIC, plot.mclustBIC,
       summaryMclustBIC, summaryMclustBICn)
S3method("print", "mclustBIC")
S3method("summary", "mclustBIC")
S3method("print", "summary.mclustBIC")
S3method("plot", "mclustBIC")

export(Mclust, print.Mclust, summary.Mclust, print.summary.Mclust,
       plot.Mclust, predict.Mclust, logLik.Mclust)
S3method("print", "Mclust")
S3method("summary", "Mclust")
S3method("print", "summary.Mclust")
S3method("plot", "Mclust")
S3method("predict", "Mclust")
S3method("logLik", "Mclust")

export(densityMclust, plot.densityMclust, dens, predict.densityMclust,
       cdfMclust, quantileMclust, densityMclust.diagnostic,
       plotDensityMclust1, plotDensityMclust2, plotDensityMclustd)
S3method("plot", "densityMclust")
S3method("predict", "densityMclust")

export(MclustDA, print.MclustDA, summary.MclustDA, print.summary.MclustDA,
       plot.MclustDA, predict.MclustDA, cvMclustDA,
       getParameters.MclustDA, logLik.MclustDA)
S3method("print", "MclustDA")
S3method("summary", "MclustDA")
S3method("print", "summary.MclustDA")
S3method("plot", "MclustDA")
S3method("predict", "MclustDA")
S3method("logLik", "MclustDA")

export(MclustDR, print.MclustDR, summary.MclustDR, print.summary.MclustDR,
       plot.MclustDR, plotEvalues.MclustDR, projpar.MclustDR,
       predict.MclustDR, predict2D.MclustDR)
S3method("print", "MclustDR")
S3method("summary", "MclustDR")
S3method("print", "summary.MclustDR")
S3method("plot", "MclustDR")
S3method("predict", "MclustDR")

export(MclustDRsubsel, MclustDRsubsel_cluster, MclustDRsubsel_classif,
       MclustDRsubsel1cycle, MclustDRrecoverdir,
       print.MclustDRsubsel, summary.MclustDRsubsel)
S3method("print", "MclustDRsubsel")
S3method("summary", "MclustDRsubsel")

export(me.weighted, covw)

export(icl, mclustICL, print.mclustICL, summary.mclustICL,
       print.summary.mclustICL, plot.mclustICL)
S3method("icl", "Mclust")
S3method("icl", "MclustDA")
S3method("print", "mclustICL")
S3method("summary", "mclustICL")
S3method("print", "summary.mclustICL")
S3method("plot", "mclustICL")

export(mclustBootstrapLRT, print.mclustBootstrapLRT, plot.mclustBootstrapLRT)
S3method("print", "mclustBootstrapLRT")
S3method("plot", "mclustBootstrapLRT")

export(MclustBootstrap, print.MclustBootstrap, summary.MclustBootstrap,
       print.summary.MclustBootstrap, plot.MclustBootstrap)
S3method("print", "MclustBootstrap")
S3method("summary", "MclustBootstrap")
S3method("print", "summary.MclustBootstrap")
S3method("plot", "MclustBootstrap")

export(as.Mclust, as.Mclust.default)
S3method("as.Mclust", "default")
export(as.densityMclust, as.densityMclust.default, as.densityMclust.Mclust)
S3method("as.densityMclust", "default")
S3method("as.densityMclust", "Mclust")

export(clustCombi, print.clustCombi, summary.clustCombi,
       print.summary.clustCombi, plot.clustCombi,
       combiPlot, entPlot, combiTree, combMat, clustCombiOptim)
S3method("plot", "clustCombi")
S3method("print", "clustCombi")
S3method("summary", "clustCombi")
S3method("print", "summary.clustCombi")

export(gmmhd, print.gmmhd, summary.gmmhd, print.summary.gmmhd,
       plot.gmmhd, gmmhdClusterCores, gmmhdClassify)
S3method("print", "gmmhd")
S3method("summary", "gmmhd")
S3method("print", "summary.gmmhd")
S3method("plot", "gmmhd")

# deprecated functions
export(cv.MclustDA, cv1EMtrain, bicEMtrain)
mclust/data/0000755000176200001440000000000013205037575012501 5ustar
liggesusersmclust/data/Baudry_etal_2010_JCGS_examples.rda0000644000176200001440000012236213205037573020576 0ustar liggesusers
[binary RData payload not reproducible as text]
å(}”ÞND¼á”OÅ„ &ÌY• ÛòN*èÈíØ¯!¡{å¿'KâµÀk WŽÐpî€Ìà‰ÇŠÏqln«Š aÐ>õÿ«¶/§ºÚ5„L¯!C2“)c”°™ ‘$c†JJ¡(Jˆ¢PJ£R¡H4¨¨ˆ¶¨ ™ÇìÁ°'S(C†ÎOïù¾s®ïœóÏ9×ù}möó{žµîuß{­g-ó‘¯C-ŽH‰ê[¶iI©¨;¬:“Y‡Yu0ÐÉBŠøè×FŸÝq=QÛO·Ä!®Ÿ“CeÛ`féo_/˜ãtÞ*g‚«oÏZið]Ãú·ÏnP0ï±–ÔnhÓHÈmI†%sÊ×øî)$¾/}ŠÁºŒm½Ýïã1ýlêmò|+ÒÞú·ïaA1$þ†R3z±_û7’•½kR笅ªìG=q½þŽmHu1]™ïÚ¤~;e°,œ L·˜7«¢qËï–URư×ãÞ½¾´»)ø^:Úy :YzõÑJjþ¹Yý§8¼)LŸ¾²õv ´–hk¯„á¦ÄÅݶúÌ‚z@Uÿ–:īߵB¸Êí=!#y‘Á¹¿s€½±?ß?FÔ}ô{€6Ú =t M@j§Vî݉ÿçdI +×îE‘¦þÊÀY¤™Õ7ïZ±óÉ>Îþ_ˆß7«Ý­*ã;T$8¡Fúxñ:ëûø¹K¼15ÿËdIå©>I5m£©íäÞÍ%° Í3–ÐôÓƒrwŸÆtœHÇF?¤ªÈ©»ê Óc÷—ž…"y1¶Ì‡£šÈ &)éžÌ š=ˆ+Òn50ÿû£ñ!1jc!nL—)óŠ×ó+Ûvë¶ ¤þÊÏ%³T¨†Ã`×Å`ÑH¤žq]Tü>Ž~öPb=ýRšœd_O5@ª2ÿê'½åÕ/´^ Õ-ëyk$Û1ïÖx‰›)âä©8{:{ìwF‘œ‘¢Hýu$wgGÌã=@Î2Õ ÓtÄÝï¿fž)ü¨Ûo⾃ûIZ)"èýD2~ÄÆXy[#«µw$ ×·øbŽ4vÖ¿¿Ú ‹Ñ’<Ûkspc{xíFaLŽ29«‰”o¦M™°òò'Œ~ªþÿ¼²Hù?_Hù-Üÿ&b8u€qëïÈ`ΰ>íDzÀÖÜ›Üö¼ X”ªåÖ¸ òdëkaxÕŸÕFÛ€«cv/ÕÈž]u¼Àª9#ñSwØ ™ŽkÆ@©V˜¶·ÇìÞƒ€ñH1o“7PQŠF>X«u¢v™BûËÏñޝ€`zœ4Îç ôÓvœŸfy½« ÔçŠЯ9Õ°ù: jÈÊ]ãÙTŽ—s½7çmaù»ê¡.0[O¤¼ D-oâŸãÀÌáÏeÞñ†žîE×éÁ`ä¸ó›†áÊs=¿>ßóQ¬/ô¯ ` qÊÜHÑŠ@ óõn3;”Ƕk»UèêVo¿tXÇ'½¢Dr wéŽ[„€&ÐS¯Äöô3€^ÿ–YFþ D—^¯[›€üBgèŸ×߀úC&mÝÏFUûÛ X3¾uö‚ÕzŽt¹­0PšD>:d åû”Œ˜'ù€úÌðäBŸ®wÛ[`nþ$wøˆg©s;ξô®A¸pt9d›÷¤ÂÐGÛï;À×’èÇ9}L èœÊ¡ÛÁˆI6y8yPzº¿“{®=¯SÜ€‡¨íàšOæíƒ,.ï–Ìêóoú¡ó©TææÛyиþå¶G¯žop}û¯i`ý“°%‡›¾©te¹rú«G8oéB]mpç{GC åz*/žú‹5É#¾àóÀ÷HØ5;EWg%µ@Òù °yC‰æÏ¼€Ò¢}nð‡Pã’/™ü¢e£N¿Ûˆ7¼ý·Îžˆþp9ê›DF¡S©o°V·º®*І!¾Þ¶“Oo[É »N¹Rbs$ :´MÕóf W~Nþ`™èõ•ñ&A…¿Ët@õTøý8©`õP{|‹ZBµ€ð3㼟=¶ÏŠ™·/«Á(îóAÅIs}3*+[iƒoyÄ6CŸDÉ¡XQS`ìÿ\ÞÉý͉y…Ü@ À •i 3zÇaÔ;ë{ùf_ (ô[æª4+×ñÑ XÓΧ c±ób4y¬‡1 E»)Ïf í=È‚ñ£Ão‹¬öƒÒÇsÓv@µcˆ´î†!ÃÂÜ'Š€fãòó|ó `å§z¹nÀì1¶fdì· ŒT˜¾\Kü ƒ}gKõø<`tÓ.ø†=Ï¢ï܆{Zp•GóºædúÙÆ[‘ûdÄŠS(ZBCb˜'˜¯½¹XO>.sÇÃhq¬«óŒ.P“þÝ% èSw¯õ‘‚SiëŒø?¼{º:þÖ]D{Ñàó-e Œ6®VúmuÞß_WAÿ§˜ÂU‚˜Ÿ€…çcO ÓbÏÍŠÞ*ÿ#«Áq`2sŹ+ vôè×vœþ´ù<PWŸ<·õi3ÔºèůÊêŽ-„mSy­VŸF%`Ô°vg„G°Å6vľü x-æ9#;w G<£}[ƒ€Jþ\ŧóHdÏŸ‹R€òV±uOe>à§š t[ÌO‰Çnb׆ŽÒ@_àù¨·0¹&¾8µ\Dâ,N¶Ü¸T™æ #å=µÜ÷ Í꟢LFªzä϶Ӄ ü bëqÀn÷¾úŸ¾P68nßÑ ¬Š ŠàZ! 
úóD0x<¡;/)¤t˜’påÈëö™s@=2P"›/ û£3²0à•–FbsE*Á]Î÷9PÌY $"Œ~–·\’i¹ncU P4eOyýÊ.´>^&(Þºu©OŸ}I:èþFU˜¸Á8!弘[^­ÿzX2OcÓS%¡Âœš ´f.S±a ™FR³ßÇ%óu´áëU€gôŸÿ4 äo™ßö߯Cíj ÉñøV N]ê9çæÔKÛÏ%ÇØåèi·„¶c˜þØôúF0]Ó:µS€uÓ èŽ3@¿…*’³»€0SÆÔ|˜Ù{Œ­‰¤(ŒU ÷n‚Æ wþ¬ó0ò“ëÿPÇ‘‚&½{ŽGRïÝ1^¡Õ¨ãµ¨÷‹ÂîµÂ xz8?Œàsö`Â(>Lù‚áKèú'i?áûbÇíŒ6^ ÞÊÙb  L.™fò˜(tTn·4/| ãŸgk tCRΤ֟Œà°É Å@nÚÁz…áÌ—­§vMéÓ™´á–ÿÁ³ċڤž+äkãw} rié¨DÅÛìúïVŽðÉ+ø’X :Э[™Dt¤Qd–c߇¸€•h§ý&h-0+G^dn€aû£ùá½K0læÌcåôöbÚEè1ŒmXà…aáÆ'îG.Á^^rË#`¤ø·MHÁpGCg­ Þ÷­¦ÃWß6{*À;ž/'ÊN‹Ã˜Dùoçàt¦ k¥cñÜfÛN=æ"´,v(EX`qÕLªS3™ˆÊ•ŽÌ=G€dzÿ(³ ˆk›ê÷sÁ¸àªšþß~À|Qî ïÃPêÇ«ŽÂtøN=ݺ¬uzkF¾'–ËÁ§èý“‘vl`TæYquç5lÛS¹‰@’7Û,`ˆ ö)î]wûR¡Ç¾(]U`zÖ³ö¿Y͹9æï$#€¥qÙ§ÛÏ“)7/ þ@Ø'Ç€)V|Î. Ý ä߶жwqPÓ2B(­Ùä¥UÊ*·°uíM¹ò¯Œ âþ6y0ºh{.O ¨fÍ6[?«©xí{¾ºÏ@YÓga©²ãj[ë¦a¸û)S¬ÇÛ,\U·OÃðBÑ^¯a8ÈÍß*¾([²ÿÜãpŠu#u³5àU¸ú+¾‚ádW·'bX¼y]¦‘yPÈÕžWHD€hÝQxf÷> má =å’†[Í>Q–j0üA—9qÈÇÅT‰ç˜]Eù>ØzÃßp8;ᎂjáÙdÍýÉë”» šbý“æ{˜o.,“$m€äÍ?«Z©„×*}¹³0¨Þ´!TX‹¿¸>ÜÅüáh÷±¾Ç@³çºÏt†u¶Smá^ Š—^Ú CìïÛ‹‚À¸¶Õg&—ÄìbEÕËOaà±¹Ä S*L|^0?ÙUää!“ð 0&©q‰ø^ãû‘¯â-€KãWÆxá¶"¹¼2@µ<Å]›„ñŸ["“¼¯'T#1&(–âw"TPc—L+iR˜§ Õgp¸Ózv"æò…ýº@¼PXÑï”Åð¸Ê«®ûŠVÑ ª€nÅ=R?ô †š¾rù¯õÅfê]ë—G‰oÂ3e9JÀYøÅ=½;w†/°Zé:w¡}„uü!¾˜×ñ—3g$±©AÃ&_˜Þ ºýV‘ÛRøÛ;Íã7]æé­ü£À˜ºñ2ïf`¿NÌï,FDò‡tb ©ëáÒí€ýV§ï÷®0Ó?/´¦ì:4.ïÜê–Yƒñ²´"3Yo`wY˜ŒMK¬ÙÇ” Ã&ïZùàþݿu÷n‚–°߯i h¼ A/Á9¦¿4˜ç33T¯­‚ï¾—öî%^…ìƒ1/Ùk ?÷¨_GX8°;g¹9ÍD [¾ÉÆø6mµF•oÆ/¶´iï€Ñ¦vçòš0  -¤? 4¢'Î+ç)|Èæ8–Ù tó¡Ðp =)ɣІé ë/OnœjSùLNü"°”þv^–ìÓ5ûtƒ`Häú§C‡aúì”tÅ9 ÕÈö8{ÂPkï{!HÑŠzw¦%.z.Õ“—o«á+.è8 òU·Ù H¤CgB“0>öùìï¹äGMª0¾cÑD¯»3 ô-÷Ä0þuìÁÀRG'mËšZŽÂЕ‡7SE1;àõ’yîßͨ‹ÅÀ࿆[õ^ hÞ/ž-*ý†oÏˬ-×jÿDª^%³ñù% þmì cÇ^=õî«üèÓtÒ1`ä5•½¡-Y†‘1zX+߃ȱ9Ì8ån04Í(J·Ä)@ë™Ûôƒj¿1|×äáý= Ô‡Õ_î÷‡Ãˆž¹ädPêÅûw¤ÛÀðSžË†Ï—€8Àè;¼ èU¸ÏÔ‚ ü_Û½¹ ›²­ýb` ÇgeÖ.ô•^_ÜQs¬S»ZÀ<³p+Xö‰¾¯^}„þæâ¢´G†@r¸-=¸ †yzZÔ¯P`¨V_ÿò‡q×[ôôá'0üšê^…fføêfO_Ô4×tãÓ’uIa`¼†ß²;EɘžîU´±¸¤¢'? 
õŠ¡¿Ö~|ña þ’ò¼àŒéð=ÿ Ÿ]"͉þ>F:©þ æØß)ʺ½ê©=Úq¨û(Э³8NHŸÆp¸î|_°n8Œq·ŽA­ßÙõ›ÛµÙX¥¤;,¡½É.ݱÀ̈8h“t ¾Ï†Oƒîp\À®BIÈ)ßQ“q-h·Ÿ'öiý¼Ÿp_ÿ¤ ñûЀ½žU,Ø÷ø60ßžÒÞ<ƒ>Eµ}Y@Ònæø9•Cq.z¦‹@ÕHøÄq ¨kŽ;ž¸D?í³ëzÚ&ÓMÊðzQaWNä[po°u¤ PpÂõ‡ÈÕÀð UÖ;4ƒv©±@7$b7ÀÕõ² êjßß¾ØþzÓ=.ýúä—âìœõPæŸ?—5crwú:+¡O®8B;CH§·~–n‚\êy5Õ ) \™ÄÏo „a©D-qaîÛ.~¼X®u}¥%ç¢,aÃíçþsM¶«1Ç‘ Æ¥OÖaû=%oœé“HÁÎa¶ÊY 0ºÍˆžÈ;KSê÷@GÉdtLÐöI78¼¸††:Î"Ë0º‘·õŒ t¾ˆ?š½)è†ZMÉS€ÿœt¯p‹ÐMx¾ErÁˆlOk÷?Ð%0HüH‚ZÇÍÛª®¯Rˆì65œ'Ðè±ÄÌAŒ¶É—=áÃÒçmy–mã• IJ5°¦âÖqè(óÛU_/97¸ëÿBЀEó›>cêôá\aõ0šc{KÈn Ê]WƒžÄi[Þ»ØÿËYš­ ’OþÀV àÝ¿ÄNJÐðÜX506/>Rì´;U=†³m@K¥¼ÔéjvÃÁ{5þ“gMpºËx9UfÔÉQu:MmÀ;w„™‰L€a…Ycq2Ð6tïîÅâ®MtAc<<ï¬Ös¿hï:.=`ª­˜ßzûóÀ›Gt»ÇjøoœÊ‘LWE-ã¯a¼ðsWEûd.|%ÏoanÔB?ªÃ¥Žá@˰Z™ p;ÂËjñýU‰}oÂãT}œñ¨|וJÝ@ÿiÍN“cêXœw±y•Ö ³vÅ5߀PßûënŒÝY™¼R‚ë*2±õ:–ñyaë=tI4í`30{Ö…xëØÝo[ŠPд­dÿˆH`ö°’ÿ0m¶c¾Ï@|çV0Ìy jñ¿2¾%)F­7Þí08êBó^ZÍÄüÉ"ß<Œ.uæÜMJñŒ21†ñ:£ÐŠï@tä:ë×ZýN¾xÍãS–lÌO*—ÆÞ>× ;®\¾”Öàö²ÁL÷ßÜuY²F"û¥xÝžÉo(ð#µ ’öŸŠ>ú% š*[îu¿vÊzÜ‹= x²ýc& Ì½Qˆ»/xûƒúéWÖå¬OtdŽ4l/Ûhôj–».(ãä!Ðé!6ÄØcºµþɆV`Ì>¸—Åù%¥Ê<¥‹ƒÎó’Ûk mGiÒ › %dŸÅýÄøêß—)q Šå數>ðòÞñäg@éíMJÆxÉœ%”T¥ÕkÔÌÕ´ ©ùTX€‹ÉûŸ—Ùxe»= ,·¨ ¹ÏëW}ÈEÕ8 î ÙQ=ŒTß× 4 M•îxWqÿT‹“É@›‹Ý©ä $ÿ”åw·x¨¦Ñè€! Ú+'éGb¸ÏìùćÆÕùú ´ †jøp8Xê/Õ=Ò SÀ>êOU°NZ8 ¾ËÒ¿Äâµ@ÿ.3ŸxiðE G-adö|Þ–™),n³šŒnzaúPˆìÊõ÷«À x®í»ÓÄšœ²«×i€2A°»é8žHí¿°ˆSùQþyyÐü¤ÞÆé­àOI ¸£u@©õ·U_’O„NðS= Ú8' ÷v¡—öÏ€èy¸<1a 6ܽú³§°è&> TÜ[ T™?ûê1œðÝ'Æ—‰€q¬þÍ€å~`jòžß­¥ÃîzKMX—Í?N”ZãýÛÃ@^·ö¤‘6—Þ$ŸÆpjìÅÝFþ0°6üò¹‹{``KŽŒÈ \M¼w¯Ó ˆ/^¯zxr(zoh¶zü@ÉX6û>>ßo¹‚›é€\÷…!ü" ÏoÛñt¨g™bç…0ÿr2äy dë½Kÿ7°Ö ÇiC@m›<Ïä­Z†Ãû’¤ [eé36˜¿ÁÊ­@Ùo¹<î‡áÿÒF¶ æOU“~ª?µ1ön¯lÔPÊJijº_í*þì°ÿ gl¸Ýú ÿ£ŸÅý/X¼‹ˆõ梼nPp‹7PM\sáa8a)¸’äÉ.Á^eAÈc9¥Ù[bºØIÌ5ÍÈjKiøŠ_0¸öë†á¢R€Òòs ðûïš¼‘D³ï¼Ò nPo=ôÔpy”߀¦ûJ­Äú0 ~Äë|Ú›•ºg¸mE“Yh $Áè²÷Üb@ Ú¤é™O×üJÜn±¨ûVZ}€uÝ•òÌ èŸæ)ôSÂô!c…~‰®Iq1Âðï÷»»é©@øœz¥XÞâ 5)€~eD’? 
o'?o¾ Ô$^f–+ДÓü!} ¤¾YMÃxåp–ä-LGä¾]£ôã´[¥w¬’ê§»fÖÍ~§¹Eà?à¿íÎv?ÔñúŸÛ±Ï}êñ9©0hÛq xº}9 .ÓHúï ŠúñkžW_`àÍÎTOÿ’…]ȶóû€˜ño§>j¦Ö— ©~¤—*7~l¯ø7›1GGU†Í@Úðk¬ü1¿{&¿»í*·œŽS•rÌÒe~ñø~â\ÆkL¯®Š¥Ùèî¦ô ß_*É@ êhy„uÝÏñݼâ@•áÈy½ŒéÀl>ç8ã—ÞOq3Çj ·÷ïÅ"À¿wp-|{¥ëÔé\šªcÉÝG«€"±eÀAa7 œ|qq Æh*©7Œâ1ž¼%ÎH*•¦ErÙi #”Rö1ùÃ@‘ºS½xz?ô8p9›"j8>¢'œ7Ζ¸ÅýšXôü¹€0À³y¸7(ÜÜZÚsç€ê‹k¥y ÔfËSûø”ÿÅÅŸw2¨wö8߸ˆíï¡ë›ÚŠ@Û¡T¯¶V†èuã}•@eG¾ïž6…Æ£I.¿·HÃX¬,(>ÁtÉ‚‹‚}æG£êÜQ=@â<#ÑhVÄm÷ý£µ±óW[ +Ì¢'pé8PÉW˶UÔïæ4æËŽƒþž~›ÿâúáKÞ •lÆ·ÒÀªìõ0ù¨ ø*G1ôC¨!é;5W áFtnà€ݧGWE§YþS}€ºlìïȈþÞž†+@ßÜF¾Uÿj¹ûfF Û×ÄÓÁî0pݸý¶AùI\rÿo—ÿñ“Ñÿm s¹û“ÂäÝ}àD”ÍÏ`rþGoXd“L5,÷ŸBÆí•Ú¤JRq©È;…4„^õìˆA:‡’+­»‘ìÉ}ߟ?CºæÒ#/EUIÝê»\ÑjÀð\—‘Í!lèž2Èü‚tËFYù_C‘v‚À°‰ëz¤ý@i%7 ÉR¯äŽ®G–¶{Z›ÅN8`6†t.Ú‡kyÛ#ëÇ\XHDÈ‚p4eTz¡áïz}þÈâöÇ“ e[äu¯M‹[w"›oSCWÒŽ#u¥pŽC„ dÝ0#uÜÐÉžò9§/í:šîoN#+ðºD;Lµý£4ï_ÄBCÑÆÔÎd©’°ù‹u2 õÚ7VˆlŒîÔÓ’¬õ…;÷'¥õfÉ# »×ÈÆ7ØñL”$²9Wºr· YS”Âwi!ëþ`ã¤ÈækÁœÓó½Hbç®Eõ=ÏvúŸUžgo"iù¬…u Cd^z¸“øÊ&DE“ µFÈ&òqu¥À.$ºéOÀý¶KÈH2៮ïÞHmîÔ¸‹zBâ«5,Ew¤Ðë›l¤mv¥ [ÚØ¢'óðã)ì󪯮eìÅÖñ÷Ê2Z/ý©º@YÎîþ'ImÜ`uÉÔ m}µ~¸d¶o+…8‘Úšñ|þâÇH÷kýÊaü窲»…dn¢®Sy™|ž+¿8ŠàÖgùŽ|$EZp=Ú{ÙDGîÈy{mz^<›9>ƒDŽ<©FFéC-QY±È˜ëÚ’áGA”n›*Ò0±ÍkªIFÊ{_îë>3Žl^WÞWX‹t×­–ÛhôYÊ9sše=BÖ‹A¿°F"«$ßvã^²vKsŸÅýÐ>/îÐGAÖÕ$%¶ Ú¤±æ0ù€ŒP¹bó=øu³êN¾ Ÿæ±w–‰G›âb¶é/Î!]Íø5Ú¸‰Û?ÅÓ§qHt·þ«˜7HW[>ëÝ2?RúJ+âØ_ŒL¬j]¯FuÊíÃÛÒjBkÃΫHòñ|<1~ÙÄù†µQ„Ñz³³ÅÈ"ü•§e ²ñû~ç ²Qé=Ò†`¯Kë]`N†¿:‹[y/pŸY†t#7íŽ~$ƒtŽ·Ýìf. 
µ¾Ÿr¦=”ÞÁï™\‡lw4_¦=3DÖí[þôþs!Þm|‚‹0Ù3›ìUeŒ¬ë.•+ò‘|‹hiˆÞ‰Tùþ}~õ”6«)ñ>d#tôKïæd=³²jAµÛƯÕÈTïïzÅœC·,´ÑóñçIȆkå¦g’¼ˆy‘¢=²\pÐfVI#K»GãØÐ™šŽièMì€'Í’ê”F»&Ò5<Äß}s-Úðúo‰²9-êì!lªž†ÊD· “ùýN'ù þñâc¤b“hÅõ'é˜I´~–ïDï ÔY1=Hï,Wšü (ÿ5æ™Ò–…$GNm™F=ÈBœ¥ŒõÁ×k%sÏ SRÁoty´Eìú›Ât$¾íõ¨ªÃH×àoï_›þí®ü¼,äÌ_ÜÊ—„§+2Ž9'+©œðÁ& i§>º£(׋c7_¶ÊIJ î :–¦„@3(£ZÜ™+ŸܲN™lšLˆC:š3Sd€Ý?x$ªNY ­˜ãÛ#ó?1º›íq“vobB~#£§ÿî Ä?¥¿ãB–kç …m‘¾VïJ-'vÞñ»$O¦!xø¯?Ù-tžèù€,–4,|J”q8ÏÛ·ÒdòÐF©y#?ÕvTuy_¸ˆŒ†)²‘'‘”Lkñ´¿Bb[ˆ!ø“ÛÉs*HþïÈPñRά'¦bÈØY¦ñ%,wç¥÷Å K“Ë¢¿dšÐf¿MYͳÈ&$ÄÑDX¯S‚»ŒànJâ¨ø˜j8V=ºB±7ðxÍJ¾'?‚[ΣŒÇdD¼²²®Æ%¿&¯(xkãfJâ©vÓ«7ú‘)Ô–ó  øÑ¼Küò9©ô~àÛ¹¢÷ë6Úˆ»P¨ØÍþ øÇ‚Úýß~ÝÜ <͈¿œwWêÐ÷ª_ä2wéãÖÆ¯Õ;ðç_ù:•-Ôæ›–úóPêm.çãK¹ÌG^ô?”zHäd]g%φo¿lÿêÜÿ„gþØûüöƒCÿIÐOö?aÆÉß úìI¡ÝûëÞç·°󺨇mð‘v€ž_'øN£M;©ÿíûί̣>öS>í냜^øÓMJ¿Ñ> J=ôóøÎcœr©·ý_2ÂǸõIܺĎ³ iÊ•¸§e~§ävÐn‹?úOKüÚé'øúÌgóÙO<©~ꀽôó»%?ÄW/æã<ŒŒçqŒ³f~‰[ñ¯Kì´øJ^ÏA^'óF\‰¼óÐO~Úq±½ÒOœ3”GüRoZð’–8|3^Ý’·´àŸú‰æ£]pIyäïÜpüóÊø`¼ ým2N|Ÿâ:âºF›ñæºî½H ¾KÐc´Âñ½Œ~ò_ÁøÐ¨_ÿ§!©—8òˆ¿!ЫR÷¾ÿЫ¨Ã—A/þ^ÑwíK ´{ó®¡ÝG¼€o€ëz¹ÐÃ|â$‰ñFÐCÌG…o¡?Y‘7¼ƒþâ>âax ü­Ð_>l&aô(älbþ°ç`…Ÿz„×Ëy†î#ìoõû’í_u°£ãqØûúèg}è'篮K<ë½Ý6Ÿò ¿JÏ(äY›r¶Ñþ}q¡u>¾Æ¿&ýb—µÿÀÛ¥Ïüª—¸ÅÄ^ñƒy¯jÓoê D¿ÄÏÚÛ'úÁñ³$ùæü·â/屟|ï}Þ«älŠ=hsÿúä7ìØ”y[Ò/ûšÑ¿¿Y›ëá ôÓž’¬Ã ©Ÿ«ßwÿP·d=[{UüX—}@ë—ÖÚùNü(EÔÕ¯òÔOÅÑ†àŒ”z‰ƒ’ÔMÍÃRD½Òü­ ®ÖEþºÔÙRDþÔõOòeç·@ö]9§„ ²ËþkýZÇ5R×Oó¤rÅ.­sF5ºþ6oŠ× Yʯuù²z¿*~Ë:´ó5ã1^¥oEΓk’çÙŸµèyT×»æ‰ó–"⸱_–$nº~¶#Öék©³Ô·,þ±Íyºh|>Ê9Ië®{­w±Ãîi=r.b’{s?ï¹ç=q”ïrï±w)¾sH?ßxŽGý}šã×0o„÷SÌï”óy‹Ø/çj»ñœ€÷ÍÃ"òiïäo39¸§Ê¹™ç‡”Ô)ž»“r®Jú¼W#a‡Ù“ðx㹬 Ç«rÚôë=l÷8 K¼ça“܃q"ÞÍ%¹O—ä†ù‹8×/‹”Ã83^Mþžl~Ô˹ˆþ¶ø¸è}”vØ{Ãa±ŸçÆ)%yÖzž¿yÏJFÜÛ;­¾>YRr?“{Ôâ’ì7I©—\ß'¤>ßoelñõÕp¶%÷ð1VÇÖEÞ±Úçà ãÜ"ç97Úz:ìßë¨wqYÆy¯û±Å¿•õÉ¿óÑo[g¾ý©¾`œ÷švñ¯Î×Y;r¦œdmÜê~ÄxÓÖ»¼Ž œ‡úfyS›çTÕIßø}/Já•|:ÿ½½¹úÆqëëñgï½ßŒúúÐÊûòp”¸¸ ¾ë|…¼Ñ^¼‡òýßü;"ë ÷©A¾ëcœï¨7–ñ> ½|ÿì‚þ¾Ó£ŸïØGÑî÷uCÿîñ<ÿl†÷h@gÓÈÓüÏçž.ð?$wÆïáwlìŠýįøØÝ›dÙû½¯$¾+#i~œ‘x2_V)f4ÍÏý2@Söþ7Ûó~ww·ÿ9Ùù7û-‚&mclust/data/chevron.rda0000644000176200001440000003733513205037574014647 0ustar liggesusers‹íWT[×®1€bÀ€™ ˆ" ]¹È9‡&444 bÎsBÌsÚ*fÅì6gÌ9‹l³˜sÖÓo}ÿîÙÿwy.Îg /zT]]ï s=s®¹ViB°–±ÒZ™™™™›•·073/o85ÍÌÊ™Y˜YŽ•Úuî0 o¯žffåk. 
ß—·6Ü¢Vnùóùóùóùóùóùù1ÿóùóùóùóùóùóùóùóùóùóùóùÿíó¿Ò,»wС{?ÃY åÛÿüÕ¢}Vÿ¬ÿ9¯Ð³W—~þëgÚuÏê÷﯌Ïê˜Õ®¯¾†³_†O5dkü£šÔøìafæÞ¤ÙÓ5†cTq…R[Ã1$H×*ÎpŒ³dúOÃ1(ûö‹Ã1¢•KÊ~ÃQX¾`xkÃ1t~=G·yx¹¥á70Ø.×p »üªÃgÃÑ¿÷ü¿2 G©¨ã`ÃQ3iXqïQ†ãŽÅ›Û޾.†µÁß5½û—áØ/^ÝÇó{B9–¼Úlg8¦ÇÜžj8&6JžÕÝpT[ ®ä…r}ËL=b8Æ÷ÜÐÈ¿Óo1˃þÆö%' Gm¥ãW*ŽIÛ®VÀó¼þyd·åySÚe~·ÍºÊC9ÂÃv˜w5\§^HD¹;Üõú~ø¡Ê7=ñq¢á>õxÏ%7 Ç»›-P¿˜n‘ŸöB7~L§+†£÷ˆy–÷q}õïM=ñ}ªùÆü~m¹Ü†ß%ïö»%Žù«„ãy_«Ö=gø>!¢uáuÔ?g”ë7ÃÑ7sIêü=ù¯U®†û|˜¶í’Q¯ç{LB9ïVßÒí°pééxÃ÷=+6ZŒzÙÕ~0ú±íÔ—ëtb™ÊS_wÿ¤á˜ì¹8VD¹N•ßöí0 “vŒáè×w‹+ê¯i™Rå×܈}zíß-Ÿï½úý_tìëNvëvÈ;Åpwüƒ3Ú'¤¨É쫆cð§úÚ×Ðóžö&Ðp[þ®Ùöõ­îÔ3L,ùþýuÞùúµÛ•e‹ô÷ì|¨( ×5ý.ŽN…Dã÷kk…ÝD5‰ÔBû4ø’òíÜ<€…ýWmëþåïöÊÁsK_“pÝ}ý8üž·*KUŽ>c6X¼ë§½¯£¾åß”l0U‹¢Æ¡¼S­&NF{õYº~>žßì³î ¿´¢Ú'¢¬¤*ì%ú·¥í\è.³KVìóò8Ϲ„vÜN®h8ºþ3?õ=o— ÝèYõpŸï F G§M&íÆ}söæ |1µƒ¾m»)¨Ê¢ÞŠª¨÷"!Ûí1,¯ºêëÉ)ØYˆeèBŒ ÿWžoãzР6hwß5å¾—¢_ ­{‰©Tnê¾hS@o\k¦£<þÛ~;'áxP_¿ê›‹~ V›ú†Þð󅣿ÆtØcØÓ"OŒï¿ÚGyøÀQ§;ãïN®«›¡½•‹Á8ó¤}ôçÖßu·£¼‘CoÁ^ƒ¬'ïÅ1rYn úGh7¸Ú Ô+`÷TÔ7Ò¼ë‘xžÍö\´§×ᵸϿߣþ°+«Å¨¿×Õhǰyšß,Ê?ÿÒeüÝ¿¨[5ØQØßuRÑ>¢‹„ò°ª×ÃÑÎA+£sPÿÐ[GDûøÉ[Ñ¿ÁmâÇÃ>™ž¡°§ ê+¼wáúÑ”A°ÿCo¼š ~9/ÏÏÆóë<;¼ý²|ÖÓX”{Ôl+Œ[ϯµ]Ð1K_ïø·‹ÑŽÑ->·À}Áf§‘÷\¶Rä GyIŸxNÙÔ[h?éÜÎÛpßô×6èχKG¼B=®½?¾E×o¡œêªÎ¿–¡œe¿ƒÁ©ý~Gð}ð¸®ëPß[§:BÇkÊ+O#æüÞ7‹9èOoÇcܬñc4ÚgbéÜÕÐ[\÷¹Æ]E¢7žãûL;WÝ©| ü]“èŽk· µ>®Â%;;À^§¼/F» Ùë‚3Ð.Aå6@ùm6í{míÕ%¨õ×_ÀíXþRì+!/ºàœ? 
¶úþ ¿PCÞ6×¶îlÂqXð9Œ ;é+ÊÖ8\_ÔáCÇËšÅnM ÷ÅŽxÝ  <{½¤B*xìÔN‚ýÄç´Ý žiëWl‰qŸ¸QnX¿Up i’á_Z6û1žÓlÞ“ð¹®ú ¸.82ö´eÎö£Š_8e¡ð£Ä2Ùåw®; õ‰ß÷2íQáêUÃï“X×øœÝñ#Ûßpr­Ai'´cÏôÂß¹z¿&¢Þ; ’ÏhúÇ!ûF´…NÆócàdÒ;uEô³–ÿYå {v¥~§¯töË̯‹öMØ~çþJø¡½M`‡±‰7Œó„¡½jtC}og¸_Ü¿®%ì"6Ýb*ünÌCË^.hÇâ³ÐÿšÜÿ,Cý×WÞ> ߨSHÀ±ÆàtL‡EjŒŸøŠífŒE}”Û ™üÒ¢ý/ܷؾhôJúOœ{ì¶ÙqGâÛâøýÜù]`ñy®C-q¿õ±±æhûk7š¢üi[_Cù.Lº¿ooQãWSwö0ømô¬#ð{šëª±ðk _ž×«‹ûý>Õ€}%¬œ{|ŒŸÑfažßÝ.àô[éÞ¢¿Rμ¼ƒrÆÍ: »Ž{Öª#Ê•rÒúÆE|è¦'›ðüMK¿­…?¬øc,ž“²U^Þå;1c.x¦)û©Uêo¹éüjÔ›cG•ñy%ê×w´GËUï†ÂÞßx€~NZÒaî|_®…ï”'aÁð<^½¥â­„Í5®ƒ›q…ÛÞÁ¾5³gu‡Ý7[†x#¦Ñ«³¸¯wo;ì|óìŸà}T¯Ä àx|¯›ƒØûÑç'P¾Ä±Û޽5\kE+ÆõørS ×QšO‡ 5©dëaØÙj}üEbäùx…Å-ÎÂ.4Q‹ä<Ãýщu ~‰OÆU¼i¸ŽêÞß—Xä ~H¼cÞ©:Ú¹\…Ô¯(Ç ¯}Pή•[áú–Õ!”;²Ú´ê°m|„„çÄŽ¼’ŽrivÛ~KÇ}­WfïÂ8\Õ,ý4:7ñT²ý¤ÛôC]"â¾È^n£¿™ÀC],àíÊ]ŸÓ„s¾»G™ù}KM^»H´ÙŽ'íd¦€¯éGFµ€°/Ž»ŒúéòÎÕ»‡~]ºþV»¨+–YѶ êeqzìEÄÑÞæ;ÆÀoé¿ä¾{ Žnž² öô`à„Ã1Ãi~Z;|_ÑïÌ:Ãïš¿Ó}‡_IeìøËp-jK•x¤[[ôðåõ#oeö½ÕaŽkÓÀ_D¬[ZqTZןR:Úµ¿\¹Ÿáغïð×(·ÇµÔáˆõö7ý¡½BÞ}°‚NÓàÚÑÙ¯â0š}o-Ú÷Û…é— ÷¥æîÙ¾‡?¬òåüËø-cçpž• G‡Ñ¾SÐ_ -å*ˆ¯ôK÷î‡ÿhÂäOm¿ë¼ü2tZ¾)ÞþNî¿ñŸI Ç./©•JÊ7E{8™ÙÞ3èù¿“À_†ÄÞ¿‹¸9Ó;î©vûnK†!°ð¯Wxøâ >vÆ‚3ø½yÞKĵ™“b<ÆizI; Ç?m‡_‹m¦Ú€xÀv‚ý(º¾ºß陲÷ð7-6¬A=WœËÛŽëÉ>:”çöôw Ç—oqþ!}ÉóÏ ×úóo¯›Ž 2uî—0NÚTÌùi¸Î¬üËÖA‰ßvñE;ŽÐí‘Ò÷ã8ÙÌyÀmñÊ͘³ö(gù­Nð ¶ù•2pLwŸì½ÏÛP#µÏk™¾í•úêNæ é[3¿eþž±èܾXð½¢{9ø³ð*5ÀKqXÁʸ_7Â|Ok‘tq‹&¡â…ö†£Å v-ú)c{2öð<ÿ8ì-¡ã¬U ÏK[íºüžMñ—ÙŠßÝ×*Áp¬¦}Tûp»2Óöë(?ÙŽ~Íè?rÂ*ÜWa‚UÌtÏÑïŽ}fOÅ}Ñ_ö¹!þIvXQT x;ºÒê1â–°û51îcvumõåÌY»öš¶ÛÉ?÷omüt<ú5)Øþ^{vØ8øÉ$Õ½µ×T¸^ väõwîÔ)è¯]_`ÜÅÌvOƒÿŠl¸;íéþzp]Ìl„³.hçàæY‡¡ÿ_ÛÚÍÁï¶OÊŸ¢òÿºöJ;r<ñXL³rçÁ—ØÁ[g·QüÇYËA†û¨cƒ`ÄQ¥å_ÂÏ»{ô1¿­ÄoùAhïfa^í˜1öË5¾ƒxÔwɸ®ˆ‚n¥Ëà‚–Ky»ÑüUZˆzÆ} „½9Z²§žGV÷Ÿ R%,õC=¦þ?˜y;Á¯üF”¥kì&¹q-ø¡ÐõË"†M:ê~d¼k=ÌZáÿ”jx~bù;£?¡ÿ,Å|?µªj*âoMøÒÊ 1.Ù‹ßE'wjÞÕØê¨ÆøJÚ—û‚GûÕ} C{é^'yïƞŒ‹Ä|)µÙ¨¯hä4]³ ¾t¨¾Â÷‘Úדæ¢>»ëf¡Þwn*ãŠIXòÅvûoЭVUø›äŽ?¿Äï§ÝÛýÙ­Ÿ%â¹x׌K‡`¯×ï/Åø{ Ãßõ%ûÌKT{Í;> ãíUüãK[ Û }í3kä¬vsõ±—6DÜíâ7³âdy†‡Ê]7ÜM‰FL?‚x%¬oNeøEõïÃÙ°]nõåˆÇ3>?ýÕý7ï ?àyf¦?ênl­ãff~?â:ŸGû9Þvö˜ž}x¦“á>{½\v`SXÇÇÄ!å_ÀN[û*€ŽÇÕµ+qðœQUÁë  oÒaG)•öMã=¤Ç# [94óŒ;«®9®]¾iᇽ ‚ŸÅ½5·<Ê¿ ô»;Ê‘êt 0?9ºï[\ö·E¡œæŸv(Ï?‰x890p²áè=iÏCܯ]Rîl ÃuðùeéÈ ÅU{^øv]{p>ÆO«m6)ù¤3k'ÂÞ#ÆM­©ô÷¤O}a¯r[ó8]™ÕSä‰2bŽ:b¾gÝ8 
úú¤V_pʳ™ ÊÁN»6?·ãô`á^äOô+F_KÓ῾XPQñ¯‡7hz>šÖóŠÐ SMVáxÂ*ù©€QËö`~»æ@Ƈ玽gÊìÆþà8;¯¾ù]dR-´»¾;xýùÄØGÐçùâÐØÙÁ•o |ƒGÌ1”C®}ÄÝ0¯ñ{Wv) õO\uz,ü ÿ)­1â‘ôn›-c/èÐ ¼Ë¨ôù&⬤ΠoÂÕþôÖ÷é¬û€86¥¤¬ Žú¬Øš!h·îõóÀgqàž‘ï ×~Ä„Ÿo0n×QÄ¡ñ¯6ÕF´Û:Ä¿±Ñy¯Ðþ4Ì{ÃïêZŸ»Nk¼í2òƒšúG£½RîøÖÀ|'Éç²3úU§}ö¨ö{ÔñòW[ìAûeôtŽÚ?µ¾c¹éð×´‡ÿÉè[ñácäÇöŠq–,¬®ðóçzç[ï|…m¡[¾å¡bŒï™½V Nc3]w6´×—!¯Yô·ýÞÙ阗§ùé²§ìÊÚûJp0wÆ<ªÆÑ+1ÿTWZé…ügË“ÖÀ¾SïÞ¼SíÚT_óË”¢™IàYR`‹^ð[ºìW¼'â«Ä;3á§ØÎeYNù}{~ÏFÉ{Zê‹ñ³â@oÄÏRªÀuËl~W¡ËîªàW‹Œ­c'ÜÚ„8&cÙÜÝÃ1~Ö½5CÿëÞq±Ç8¬ã±b?â‹ñÓá7ô=ôÏÿ6\sK¿c¾PuâÌ'SÕw"ŽO-™ƒûwÖÅ<:¡âó¥c`ƒtF¾ïçÏBÌOZ÷ŒÛ NE¼”Ûc>–ú¡Ñß.†ï›M¬­G=2n:ûôAÿܶqÏ;dc ?Û00@É3´9ô²6ìcˆù-ø‹”ª­ÐÝÓ,w hå4Bášj8üBÕ•§÷bühõÞðqìºdÄåRP‡&ʸ>¹­.ì"²OÖe%~|·®yå„ÖI¡˜_D¥\œx íÌx»‡ðËc<à¿2ëZ þ©8n¸®}ƒ|wâÂËǃ ö#š~C=:¹~ ¾è’Ìê »Œ»…ßåúx'hæÀ à_ÄŒ¥J<Ù<åð¶‰y§ÿטô£úü·BäIS›%ä(y…“!¢a‡ªwKBûNËvÂ|F_iS|²áºeÃM¡ˆã‚ïÙÏÀ|VØú â¼”^9ë0Ž"&ž{ñ—3­6ÚÅ-³Mäá2†85Ùcø¾V³cG2×Õ ÜÍÜèp ÜIY½óÚ³õ†€kðW#£»ÃŸE¥Nh“o8–[6ùÝ×¾ˆÓÄ8ë~ðWò¼±ñœè‰ž©˜…Î Ž =G­‚p{ûs1øÄq4pÄïû§}Õ”ü޳}Ä éÞ~Í‘§‘WÚ¹*ùÄa‘‡1~ÕÇߎò3üÝ9ôþäµBšë¬A|™\©œÕ×¼ª‡Ôm϶E=ƒ·Æg®G»[&6„ß‹(ºX ù—Àª ×€ßéŸûìÿ`¸Ÿ‰‹^€x&¾ÖÙá]Gm^»LÈÊB¾CõÕ¦¸ïþñš’OÍÜWõóhùþ ò6ú°‡Å˜‡zu€GŽ-tA|n3èì ÜŸØ£Bì fÉq”3Ù-üòñþ¾ö—·¿·j‰ò†Í¢O2G½„øW½eZ>ügŒG‹ÈÓ7së\€qÓ¬zïSð§Á‹Ž~F|š1àBgp½áø}OÁKµþø;ðñdãñJþ[þ¦ðß1ú`,xâ6ñ« â´°sn<âŸôÊiÛ…©ÈÃÔs~6ßÕêzóÐ÷–/~ˆ¾ýÁÝè•§”y¬~Û¾š˜O†/Y7vªgwY£ÌÇÂÎ¥§ÓÖƒÚ££ÂÐ/óg>Ü ?q·ó;\ûW~—»k©³^r$®: ~;þ`Îá¼{nøÏ‰• ýuÉÖן³ë]m#¸–VyâÒèçÁÑ“ðjKÿåXŸQ…²{ã÷i‘_J<òr8ó×Ï͘¿§ú¸î]îM©¹ö úÍû„xÓ0ߎöJX>:´ÔpLuÝy&ÊqoÀKä§4 ’÷ÀÏD÷½j»k±±ã[ØAð²ø?ýº­°Ã”aÛw!ï{·ð òY•§Œ ¾Ää‡V@ü×Ú²þ9³íÏPf{üœö[YK%oÙþÈ?a(Wú—í—IŽÖÁõ «Ìk¼âN¿EÞ+bÐÂTÔ+‘Oœ…<ûÍ%ˆËå€u"áÇŒ¨„qx$Áã:yß3{´gØý)ÅK”ñÒy*ò$ñ•¿g!¿Ú0z3æ!ùÖæX¯ˆ¼ñx5æ§QªUàWÒ‡ýÞÌð}бKáw/UÆk˜Ë„6X¯SIæõá×cß·G~:­Õ_¶6°ï]Kâ§ùTZQªä‡[9²Ç¡÷U¥…ßÔ½ÙiûÐ Ýqrâ_›]lÑþu¬š~ ÝüíÿO××°¿Œág_£<у¢6ƒOrÔp-8g=izØEÀÐ{§°^“zuŒ0ýèüñcbóm'{¡}6ާ¼jÓ`žëìõ"õŒ_y~6½ßh_ÄyN¿uáÏ3Ç]ˆ<¸þ‹Í‘‹xžÇ«½Èû…ÞnpñcwõoIÉÓX<@\á³®²²n—¿ÝåŽ4VÆA¼èØ~¹áw^™õT Ñ¿7{VGùvÓ¢Ñoá:ãD½…÷É?«l»†öÞ}µ#Æyæ€Ì­/U úm7š+ñ±vÍ£KG`G¯ó1? 
»6ä¸9#®7æ{®YE§à73FõRÿ¤«ü#a¡E½³‘N ‹•Q¯w‘ˆ´¯]/Ȉ;m9†r…—=«¥Ä‹¯÷5Ròë[“O#–׸-â˜FÇÕèŸðÎß(ñVOÿCˆ†õê‰ú¤&húfž+ÅL¿œÜçÊ“m¯ãæu‡½ÇmºÚq†Îzñ ÄÑêÀoVˆïÓ~4²B^J˜\6ã/þªW#Ì·C¿'ýD|”Ñ.èòož¯'C+zȓø|³À õ­>Ïà72G÷îîè®ç9b=Úÿ©c6®¢J¯/Åx=?q$ž¼Ál:¸ÃXNÐb|\ôNSòåçŸ5ÇüO³dwâÝôaGÜéw6à¸6hËÄmúa_îa\¹,r…jÝe+ãimâKð1úpÂpÂ}¼7[Yps9~•ÌBœx:Æñfð”=«Ð~gÍOZÏØ2ðâ³ùí5é·¬"ú£rѽUˆ‹Â؎ż8©uÑjÄK™ûÖÁ/EúW®ù§.ºòÓݰ‹ ñ|ýýd òAñÁ—waÞ®_S8¦>xh}üvê'Ý>]NY7ØöãMmžV<<Ë}šõÎä5÷– ¿¤ê^6뽚€D%wíÖuÊæ˜yMúŃ#pT7=cø×ëz=Kĵæ4™¿ÏæFÆb)¡ï“Kˆ›B t|‚ñÜpäŒÇ„1ΧÔ(·szgÄ—Â*ã>uÍ'åkaœ6Ó¬sDyô4Äï~Ÿ£Ôa°?vm0¯Vÿó°?Ö?«<€üµ~ñsð7A×¥6Ê‘T§f¸Ò/ŸÏOSòëòCÄ‘1a7333îC Œîwñfpj—戳"Ô“ŽÂN_§zVQÖ;ËñØâŸxÅ õŠÌtOE½Øf…G‘ÏñÏ»· ëCá×v\‚¨eñè¿ûf;ÄËîâæJÞìÀ“§J|Ð6ûü/Wúø ìÂ{½ÝnøÕÖc;!þ޼øðòÜ¡M[.r†êâÀ©c®ÄÏËg:ž†]œ»quPÓH'ü.tJò÷(Ͳ›qíßlâx‘iWý*g¾ò¾æãjÆa<ÛvÍHUâ}}ùs°çÈ®]ëaþàqŠFÉ+.½ˆø)¸s° q­W•ûÑÞaö=;bž»pYì-ê…k3ø…àð‚ÄUŽOîñPüÄÀ‡%øûF‹êÈ7¨–xbþê“í{ó±Àäù¹ÈÛ„ l{ù³ÓWÐo!MÞ†ëj¾EÙ¯QÒ ÷Î{< :A \n#?ÇÞ{„6ù‚þ±_s»íîü`ìÐêøÓÀÙêwðßü›ü¿`WÁ]:Í]Žö:”\ ùñ€G>B·ñ»mSq¿0Gž Ø¹ª÷¬».úþ~É=*½ Ú«bñk0[$÷Sö›HtKÑ~A‡-ß`^ ÷Xùíè³wø\åù.…ÃÏ#ó¼À›ZñööJ¾Ãg+ÆK í•àZ­éö7aW¾ƒ;€®“Í\1>¿–‚¸¡vi‡‡æïYƒz4TÇ ×¡Ÿâ"oZÃËjµ½òTÍ–_êŽñÖ¶yĵªq]V"~ úUT®;æwD¹}øuj®ì·Ø¿V)—oŒuš2ÓÁßJ/VÆëÃV±>ìéë—‹8;HÝÍvš·a æÝ¾þsb=Ú[?Äüs®ÔpŸ_Y«igÐNÜò}_7„XúC?4»q,ì'$uùFÄ Âµ3»QŽP—"Å_·ZÑŒ…‡lxñõã:<}„ç†ï_° ûš¿¹U†û¸ÉŸÚ)Üùù1ût|¬Ê ÞôLñ»‡qä=ëºæõaoû”)ý;ªÕ è¹úù·B½ýBz¶€ŽO·ŽÓ0NUß~Ãøjö}]cØoàÌðSX§ËØôÅŸ}lö±ýÙÇögÛÿý>6¿÷KvbþÐòÝûàSzµAO‘ÒyÎJÅ~bmQÇbÌó=ž¼Ú¹RÉ›¬¿2qõ_­^õeæ÷ãb¥Å°‡´,ÉãÔéÕÙ‰àXZÏ-¶X_¿„5mf›[ÁîbJxð,ýF§X·Ò¾úô yQ­sÖ<ÌÕw,×IË9õë†ú.;/e4üV8þZÑMØÓäH_”W3Çjñ?˜/Wi]Ý@ýo¬O†mùù­Œ>¥Ø/‘1m'ÚÛçö•O°ŒßN#¯ÎÙ¢þÉAþƒ1p¼ê>èönÉCž:ÂÁÇönmÓû¥á~7FÝv–RúÐåʹúÆÔŽX·ÍpWï¡‹ÁáŒOÇn€w-õbý7ñÍŠ³ÈÏY¶æ‘²Þ“ã!Àµmšc~ž¶Êr,ö½X?Z‰þU_ž¤äCÖ¨Ž}Uº*æÃÐΚÄ^‘/Ò¬>yi]ùϰ5{^$ƽîü—ƒèG¯{ƒFb<§§˜Sæá/o;`ý,8÷€ãÝýÜÛ”õ¢ÀX—Ô9_«u+ß- Ó‡(ëØíz7@¹jOîˆ}% ½ê>Á¼(®Éî·×•ýS­‘'Ë,é4óÑå-#.ôl»¡â ¿›ú;ˆ»b›Ÿ½‚y]Ú–òÅÈwdœ‰ºŒuzõ©ý7ñ÷„5n±QîÑ/ôsãþ·ê+ó+§¦1J¾™ ì>ZóàüsÄÓì8Ø“ÀTž?æû=©=üjÂEçýxNÓæQ¾¨_Æýž›±Ï%퓯 êï9jí=ÄevUìóûsø¹V@4ú;âJ?Ä7 ÇV÷=6I)ý‰ñžÜäõØsœl¾ùxí®Y÷Ï ˆù^ñsbº¼XÉ#Œ>¼¼æÙ!m{`.³v÷FØß¢-_Í ñjÒ¼z—öbàQë9Љo<à2ö‰Åútë ª/¾wt2Œ³ŸÓçìÆº›6{ «ìƒÙÅð‚ý6ÊCÜ“èëvvî?sÑ]p"ävõ»g”}þó>"¿£i¡zuãØkÅóõÊ~‰7^X7Iû°ëÞŽNï["i©]ÕLÙW÷ö¼?âïä† 
›ÁÇÔ=,‘¿É˜.ïSÖE—§û"ÞMþ°ò)OJa‡^½º¬Aû¬½O}Ù¨_jñÄ}i– ´VòEó»ãyªjþ7Q¯‡•}’ÞçúöÅ÷áê|KøAŸ>'k·UÆCQ]ð_›ØÏïG$ΟXyloMËnàšZœ‡|µ_öù¾è¿”<~éØýu±/#õv+¬Kª¿šŠ÷&bG½ŒþÖ©bœP´/öã±þ3¹d:ò>Ñ?ª>C¼”î6+û{2ƒ2x=¯8ñ°Kø‹°ÓäýÎaï¡ë£JQ~]Oþ<Æ¡G¿sí{*ûî6ÂïZ»Á™7 3Ð?lLf&£¼ßQ6ùìà;WGbžs ú}´oÊ$ßÈ/©7ÎïŽrÄ‹<@ÖÖ`äaž¼ú1œÂÏ58=÷6æ]‰L‡¥È«Ç¤ïÏCÿêmÞMQö{λ€<»úȦ‚š°“¥ÒtÌ+ÓCº ÎIúñò.ò–ý†ažþ²NOì» 4?qG ¿ç?ç+zýa§•—a<¥Ûk½Ñ?qãl× ßR~~µAñ?Z\Q·ŒjŽ÷xÎKöx¾å¡Õ`WŒ(ƒ¥t[7ê8Ö×ÇW¿Œüœöy¿ŸJ¾q›ö”²uÏ+å}ž¯ãÍ•÷[*lÉÁ>ÍïÛЩ÷vvÄþ3Ç—ýðÏõ×d·ÿMôÜ?3ÜØ·1Ö³“B&ïø¤ì›WÀߤ9­‘÷ ¸Õnâ§´ü+¿(ëÖ”ý/ÆZ€ Þ“J®c&/p…þMSŸ EXWû`p'Øïì¡ íâ÷«ãÚ{às´¸°×>Å®6ŽVö¿ØˆõnMÕ­_‘×Jÿú|8ê» ÿ.ÄAé¿E*û=bc×À?øVïpt¦2Ú‰ïÎ-ôaà›zpó¦h¿ëþ¶hÝê%{³°_†ÉÝ‹<•µkÀ ô??1Àã/qê5eÿ¶¾O'´wã=¡ÿ’­:þ}û=/€iûõ±X§·Öóûì‹sЯ‡V´«Ì#Æ£ñߎ>Å:ISÛâgˆ+=s·t/öÙÖ–à§ßOû[ÙGzôãH”C¿ómÆEÊTi(ÆWò¶³ËÀ§Ô§³à‡5¾ÅÞÑ'®îƾ÷Èʱ ¥-±Kźiæ»ùëginúöÄ>äè ×Ú!Ÿž»I˜²ß¦ø:Ö t_fz!¾NÞ5«5tS'ò¾ˆ?£ËmŒCyâwÙÔÁ¾që‹b:ü¦ÖnŽ5öhrO,VÖg7z#?£[ì˜ýaÝ".(ñlé€,ÌG"Ï÷‡¸MõñÍ%º¿&ü—Ðù¶îóÓ²ÍÞöóñA|”6vXOÄ[šž·¿#þi;ø¡«ò~À˜Wˆ×ì®ô”}eU¦ ߢ¿”ŒöHërå3Æe•†£M"Çoš€õœšïÕˆ¯‚›>85E±ã}“±Ž©r(ÄøOüødÖUÒžj…u÷Ô³Pÿä~v°Ï£Ö…¶ â›0nJä#¾ÿðEÞ6Á:6~;yëØuØwšÚ®nMì×K)™­Å÷—s'=Â:įle?Zâ¼o6C•ý’7‡ ®pŠÙ±åÊ<¥¾‚yO|µ°xŒÃäMú-l€º ú©Ö‘Òîm”¸òU°²ïõÛý«X§MZý9ûZ÷ÌèßF™W/=Œ<•fÂ2ôSÂâaï¯ÇëÚ΂}¦viþû2cÆå‚‰“"¦`_FêèÁÕjC÷Ѻ¯ÈÇõî€q’\óÇLeÿlÕãžÈ÷‡­.UÖ“ãNu.øQ7v?âªÌyîµ7µ<†ýåéûÌOS®ÞõÌàK§+‚“-VVk‹uÒ¦K†Z*~ÑëãóÚJþhÍ1øk·X[eŸdTòâ|Øãî<À!ißé1âªÔâ“¿À‰4ŸÓ1#û Uâü¸±o+ã¶Ú™~X¿ªs$y‘¸ßyQÈÛÆ·º“‰qÓû·ò^ÜŒKGÎøõæ#æ3iìÕÚø}œý9ÌÔ£"¦~Äzzáæs˜×Ö_\u”²oõCüÝç`m[p(~²Gü ¦zêa̧¢Co®Æ¸Õy¤?ù¹µo(ï7¼owÏIÖ„¢]’êmt<¬¼×’ýûàÌm=Û£]Ô.»âÆb] Ç§ÎÈ—e4 ›„u¥tÉç,üHÄÜ9ƒÉïw:áyA½J?<þ/õ`%î‹üñwÀlÖ ë;â ÷B¥¿ß8­Àü5äïýz%Hú¥¼/™é·øöÍ…–_¿ã9`½Ë ì·Ðm÷]„÷"+†—ƒ?hQ2ã[SsÆ|Ì¢7ÿî ÿ(F•œDœÖÛ*Ï \9;q¨.®ÕSÄ!ƒJ+ë–·÷bÝA÷ê´¸wpÙ<Ø…ÈÖEœÓ$zîJÔ7¡’{ ÄmÁ+ö)óemYŸlì;Oê{¿ÙÅλŽF¿¥·~sû2bûïUõÉÿ¥ë‰ñ4êŒú!a ¿ñNú˜´Ì_}ûÆóãŽl^­Ä-nšWÖ?Æ|Ïoõ¼L<—±9y– r6m*ó˜÷0¿IlY¿\ú;[­ÞïªÙ ëŽá6ym•÷{Ä!0Ž\MÀ|OÛ?`ò©ê*öµ‘`¾ï«‰r¥ºjÎÂD«¶]·£ÚVTââ OË¡¼aßv[a´¾ï”KØYž_Þ«‰[`¨ð=NÙ¢ÛÐà€².³p„žÃ5½Ü]ÙGµ¥=âô”þŸ]ЉR¸åóã\sÁBø;¹¤â ìgNZœßñRÚ¬Æ/àObêØ|Çû9ïí>Àÿk¸ Jüä=;ì#üAÆ·š³ÁiÏ5‰…Èej>œÄ:[X[ó X_Óu}bŽyP½Z‡‡À¿ªc¬þBœpsßv/åý$ÿ›à¬úfj6æC©?.å!ï¢?â~ÕzO?ˆu«ZŽ 
1¿öå7€ßÌ·}Ÿ0¿Ömj«”3¼ÁœdÄY™“¶^Ç<4ãA÷‘°V6›áGÓËët9JÞçåôCØò¤Ø?£©5qø»ÿìï?¢ì+{úÎþ ´¨RÖ ÓªÝXˆçØo–‚¸ßY¿ó™öÍOØ[h Ýwåý˜]•û#ŽÎˆÎ¼®‡¬µo„ýíñf?ØËhçmC¦bÝ´nÿÜ¶ŠŸ(Ìn²qF–ÇøU©Fcª«yãÚgpœõŸý ‘ÇÂR?'ŸÊ‹Ý {+™sï'ëíªÜWâˆï)xÅ­nÅ6Î_£5>ñùˆ»´1'±ïÅ«ßq“fʱ¡ˆëbƒ: 멬K7îŒ|SšÓεˆ¿\®Ç¿VÞcøñ6:þpüEüÃoíÀ÷½Ë?£ý¢×¸\‡=Å[¿ÈEÿ«×÷<‚}MA‹±/V?6v.öíe.™®FÞ!*ÿûº%¿‘^€þIÙtú,Ö£,¦f®µ>¿_£Gî1C~Ö£ö‡ª(o’g®ï§8y³q¶¾`C-ØWæ‚ aÈϧ¼±ÍãNø#ÄQ.k‚_c?u#÷ƒàPø+¯ XßpprئÄçmÒ+ Cy;¯oyE³Ƶð~`Ɖ…çð~GÆ íž`žäk1EÙïùx}ò‚Ò²ZŽÊûwKZ+óÛ¿£Â± Ñ«~_‹ÝõGÇdºT—'hv‹•_‡¬9úñOÍ‘+•8G[X| óžLæï5hGçáW.–*ûxŽW‡=èrSÎ@¦^äƒú§Ø2—Ñ¿)!e|°O¯¼]O+…»E/°¯'¡0iÚ-žK°þë/\¯{þDn=µÊ«ÉèüëÂ54:Ô_;ìMUÔ?æl»!ˆ Bæîj†ø(èëÎZð7Ò×Fîð» 'î5ÆþѺ±cÇ+û¹ÙqþØ¢Ï]½íã¾xÞde½ì¡uûùÊ>È-À“øê¯=çHod£ê‹vv¼wûÅbIJû¨gýVœ1Ž-­^ï¡ÌSÃN ?—Ö(:[É›ivèá’.i°/yË£HìûÐÆ~9y;õKºò¾NjÎ̯’_˜·‡ÿP/vš[vZù6ÚAÛ¿`økŒãU/’°¾¢qõççš_Æ>V7~Ì3¬+%Ý•c¾”º$@á¿Í™p›+jž£Ä–wã=° [§7a&ŽîÜLyqÅÓ³ [sX]ì—ÍoÙÜÊ=YqmÚµý«°¯*ìËÁÝ”ùtÞa—BfÌRçž OÕž·n#ÞNýî|q†]á¦ïˆKêµK¹‹£vL¿gxo ¤Èö­ò`£ùØçºY‰ôg*ÜÆ|'"¦© ÜŒ¿— ž¦»NôCÜ‘±º¥û švZá†ügjlÕ·ÐníÅÁÞã»ß°D¾)­´û=ܧ›´w ü@]·Á²½²1ô Þ×ñ;yQ‹õ=í¼]·°_/nâB´›n@Í‹ÈkKú¾À~Rï«úc¼5^´òïIX«ûLY\²æ§òžcÝ×ðïî}Ý#ï5,qELR´/ò…šÈa‡¼7cüDÜôé×°ß&´M©+öË5Ø%MYçÕÿ A»Ö+n‹¸?þG=xéÐ]ÃÃ~ÓçlÛf_~_m¥ÂîÕs¿˜-Ÿ^Uº~7i×ß1ÿÖ,wTÙO¾/­òB«7vRÞ7Ȱ…uÐ#žÔ´Ýqž>ty6ø–ñrý›àÖŒYW”÷‹‹êírFþøXï^J^­jjxOµþòqèí›r³b”8»Òø¯˜gc,Oø×V9(û‹5«:)y›A%•a÷¶®­Æ8NTŸRÞëÔ-8öà íÞºdì6óS‡þ©÷¤Ãke?H÷ ÄãUŸžWö;ÄõN A¼›uëçáG;æcþ‘ËõB\Ø ð¥ò^öÒu˜I»fPxá3ÐûÄ #Îyìž}JzŠò{ïsYÙO2·`$þÝ‹Ôì(ƙ׼⨷wV÷<ô¯öcNÆM°ã)ïYJæ^{ðÊcþì*Êû«§·,„}7¹Ó´¹²ßËaðŒ—4»ü ˜?4­†øL›s ñÞÿó)=³ztø÷ŸO)ÿ¿ÿM•ÿ\”ôïÉàÿú­eß^[›üÞBýïªO˜OØO¸OøO„OÄO¤Oäÿ91WyÏTÆ3ÆxÆÏ8ão<Œg¢ñL2ž5£cÔ`ŒŒQƒ1j0F ƨÁ5£cÔ`¬Qƒ5j°F Ö¨Á5X£kÔ`¬Qƒ3jpF ΨÁ58£gÔàŒœQƒ3jpF Þ¨Á5x£oÔà¼Qƒ7jðF Þ¨Á5£†`ÔŒ‚QC0jF Á¨!5£†`Ô¢QC4jˆF Ѩ!5D£†hÔ¢QC2jHF ɨ!5$£†dÔŒ’QC2jHF Ù¨!5d£†lÔ²QC6jÈF Ù¨!ÿ«Q^åáA§*:e蔥SŽNy:èT¤S‰NIMEj*RS‘šŠÔT¤¦"5©©HMEj*RcH!5†ÔRcH!5†ÔRcH!5–ÔXRcI%5–ÔXRcI%5–ÔXRãH#5ŽÔ8RãH#5ŽÔ8RãH#5žÔxRãI'5žÔxRãI'5žÔxRHM 5ÔRHM 5ÔRHM 5‘ÔDRIM$5‘ÔDRIM$5‘ÔDR“HM"5‰Ô$R“HM"5‰Ô$R“HM"5™ÔdR“IM&5™ÔdR“IM&5™Ôˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% 
±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘,±Pya‚s•É9crΚœs&ç¼É¹`r.šœK&ç&º*]•‰®ÊDWe¢«2ÑU™èªLtU&º*]•‰.c¢Ë˜è2&ºŒ‰.c¢Ë˜è2&ºŒ‰.c¢Ë˜è²&º¬‰.k¢Ëšè²&º¬‰.k¢Ëšè²&º¬‰.g¢Ë™èr&ºœ‰.g¢Ë™èr&ºœ‰.g¢Ë™èò&º¼‰.o¢Ë›èò&º¼‰.o¢Ë›èò&º¼‰®`¢+˜è &º‚‰®`¢+˜è &º‚‰®`¢+˜èŠ&º¢‰®h¢+šèŠ&º¢‰®h¢+šèŠ&º¢‰®d¢+™èJ&º’‰®d¢+™èJ&º’‰®d¢+™èÊ&º²‰®l¢+›èÊ&º²‰®l¢+›èÊ&º&¼R™ðJeÂ+• ¯T&¼Â[e7~ùßÿõŸZáLmݱoVÿóÿŸþú?Òtä†mclust/data/GvHD.rda0000644000176200001440000033717113205037575013775 0ustar 
liggesusers‹týIlœÛ•&ìÆ_U8U¸·@à÷À(xðMFÂ0hgÚ>îãô:=Õ÷RP-EQbT߆tl§‡ºfæÃÆÐCsÈa9ÌÑ›uý=+±^Ë d*DF|ßÞ«y×û®½ö±Ïüøœù£Ñ迌þëû/£ÿò_ÿãÿý?G£ÿkôßFÿý?þüòàȇ?¼ºy÷ÞöæÆhô_ÿïÿóÿñ¿ÿs4ú_ÿ{²v0ú?ÿ3ÙžþåÏ““¿üù£¿üùÞü/>þ<3üû­áߟ ¿÷hö—?·öÿòçÎá_þ¼¿ô—?ß~~:|ß©áó¾þ¼8üûÙÅ_þ<:|Ͻásï{<<ïò_þ|ñfx¾ásŸssøs6üþ«á9îíþåÏáßW‡? ÿ`ï/.¿>üy|Òÿ|4|ÿ«áów†ux><ljáyçgþòç¥á÷¶‡Ï¿ï¹‡ÿ¾7üûæðï ßû`xîÃϯyÿá{§óþ½;ÃÏo?w~øÜ»Ãç® ?¿9üû‡ïy¶Ôí`{øù;ó¾Ï‡ÏÛö÷ƒáß7Æý¹göoX‡ûË}?ØÓóás·†ï=?üüêðçéásž ëô`øû÷‡ïÙ>ç»Ãß~îå~?ÿ|Ú?ÇzùóÙ°Î'‡Ÿ >\ÖmøœËß_±“i·£‡Ã÷>þ|<|Ϋ? Ÿ;üÞæ¼ïó³q÷‹­á{ïöŸó½?þ¼{Ð÷õÜðû†ßþþÃá¿_ ë´}ðö}÷=w†¿çÏáïç³¾®7†÷÷øõѼû[ý·õ>ç7Ãóž_ØÕ`ç†Ï»?îëc?ø5û÷œ/‡ï¹;<çÖðs¯ÞívµþucÚŸçþð97½ŸýþûtØÿÚ´ÛßMë5<×…áïÍû{®ÿýtøùÃÏkÔŸß{—] ¿ÿjX—µáÏIë듾?žëÆn÷·áßÿ}bøý‹Ã߯°“áïoE<7¬ïΛøÞáï§ÃÏ9üyí ûùDüõß»âsůanÿýÙðç©I÷¿‡Ã{nûs:ÖåÑð9¦=΋Ûw·†Ÿ ¿wo¹ûÅ«o÷ü&¯®ÿþ¡¼»èþ¸³ÜýåÎðûßõu¾·ÒñÀåásn ϵ2êÏõ€? ?aÿ_þûÞac¾ÿû;ïñó¾uöokÖãÐûó·6†ÿÞÖį́?ïç£wž,÷Ïç'âæ½i_Ã>Ÿ™w{¶ÏÖm%ð’¿Ÿó³QoÃ?Ö=^ðkïñrXÇ닎nŸ7ðÞüÝŽÏûTÞ÷›µÁæÝ.í3Üï~ysø¹_G°cuȸÇç‡Ë½.äáûìÝ:öçcÇ?–ºŸ­DüâŸÞw®þ‰8ϳ/öà¹6¦ÝO:Nzh}ôºòñ›^¯mbýØý$p”õZîøäöAÃê!¸æÑ¼Ç}ëtoÞóÄËQÇ•ê+“nÏׇï{>ìÃ|©¯»u}ú¦ç7vùh¥ïãýˆGâà'ÃóU×-züdÔq|Nž÷é¨çñÂþÎÕUÃûÏÿÔ?Ç{³ï'Ãû­~‚Vâ¹ùëãå^ÇŠ/áäE÷ë'?{üÅênÇC÷â'ŠXôz|ŸîöxWø-ê3qïÎð|'Ä9¸oÞãÜ+ùyX¯_Œz^Þ‰¸÷úß{|þ‰ŸÛëuï,xv wxß'ãoÙáÕIÏcx•‡ßøù[ó¾oÛQÃê"þ,^«ÿ_éüˆ:A^» ÿþä çõ­Ý^OÞ > n•/ÕÓ>¯ø åžOz½ò~™u»ߦ»=~ˆßòÕV¬ã«7Çšïõ¼x:ê\xl ß4êu·|pgÑׯꇥ÷Ø1Äz-:_ñ¿Ûߟ]ÿ:âñÃÀ[K=Îßt{Pß™t;úÕ¨ó ⪸ì¹ðø üÛƒƒ^Ÿ‹:RÝwÉGâ\ñ°Ã:|ù@Üמ ëyaÞq¯ç‘·Ô£'çQßíw|~/þ”·Ÿ,õ|$þˆÇìAàïÏÏþ ‡©/çæ×øõ»€W‡Ïy4éùÀ«>:èvTï½Û¿ç <7üÜóá¹Ï:Ny¾èyíÚ´ÿ»ü‰?R‡?Øíuçƒûùó+û½ßã |†¿žâVz= ?â-v‚oé·_»ÿ÷Vàn8œÁgpi=Ç`ŸÏ»á™E}$/‰Û›Q¯®G\÷þ×¢nÀC¨ë¾õ÷7ÅiëÎÎ_F>¸uàlÞõ<{U¯âí§}¯äö.¯ÂÇ=žTÜ~ÿïG=Îð×Ãç=>ìþwï¬ôzÛ÷>ôü¼×‹â-žêŤǥµQÿ=qóö¢Ç9ù„§œëþƒàËý÷ãiÏ«Ÿúû¥¾_xë Ôêñk+òåëy¯ëÔíÖ~¦G<]éÿ}{Òuø¿_™wÿ¾ ýI|N>_$^ývÔëj~ðt¹Ç›[Ýîðë³Ý^?ù}|ù x`Ñí—|vq/ê þ®>¼º\„/ã¯Ï‡ýyy.ytqçû~o·×ûø"ùߺþã¨×ßü@=Œ—ü*p«}ð½¦Q„Ÿ?‹ç¸øöÆAkx8û5?ìv'žÀ1p‰8 °ûé¢óç¯Wz>R?Ю Ÿ·qÐñ±| _^õú‚~<ï¿g¶žê‰;ú=Ž84 ®¼<»ºž_—}Nû{³7xMü›Ÿ%îÛOz®üŽOºúÀ^¯+í‹ç ˆÏâå:ÙðïWÇ=N°‹'»Ý_/wÞ_½óbø÷¯‡Ï½<ÊtÜížX÷}¢óã§„n®Þ{¶ßóÿ‹qÏ—×æ=O³/üÄ­ÀÕ›Ýá³Ùr¯ 
Øœ÷‹ò$ûŒg¿å~Ìÿyð“ƒÎ#Ó»¬Ïí¨K~qæçÙuÃ_?Ýu¯ü'þîݨv~zÑù+ñ[~ÆŸ‹3pÖPgüöpÔã•z^àcüà˨‹Ô5öIòb¿ó$üõñ¬ë€Åã>Áÿá?žçz­ô8Ä>ï¡ßàïF=«Ïá§—Sí7Àú\ ޽Â}øûYzÆ´Û‰zy;ðÖ“Q¯£å_ý)‚Ÿå¥WkÝÎï¿?›ô:Œÿ?<%õlÔŸÿRàð²ãy× Õ­KaGò;œúiØ;ºù.e‡úðƒò^N^‚ ép¸Ïe÷GB¼.ïŽ:ïÁîè>pÌ8âòvÄWy_»¼ývàTü¶:QüçG/C'kÿö{|û0øv}aòæü߆|q¤Ç_¸¬ú“‚7át õíún÷W¿Ïþ®‡ßê¿ó\òþðeØëjèsðºú™àÝØ3¼ødxNýa÷¢®ÞŒºƒ.øúŸ{¬¿E}XúУ¾oïÞôüü@|Ç[zŽSÁ Êk¾Ÿn­„c¶ÃÕÁOgŸb¿ØÞ n’Å!ÿ®>PGã±Å½Âßóéû.¯êg½<:h#øC}¯âν¨¯ÄñT}úñ¨ëpøfú$û|¹èx_>Æßãé`ÅÃϺVÏà7àtþ_}û}á·Êû‹Î»­Äßo>Ñ¿2õ|ÿ"xmzê㨻ñ›ê°F½®ç7WCU×øouí©EÏûôu$¿Rw?Ùëxðñ(x¨ˆƒ7CßÅ÷u¹ÏAçÓût¯óFâ”~o}+Õ—·Ûy{}dWÃï¬'œ7Â;âÞ–nçÝ \·ñþƒÀÙê-u=HËjðÅúõí®‡>®¾W—†?Mz=çûŠoü@÷UOˆâ|_}zÁÏU¿Lè}¥ívÐO¢Ïg;ôbºû‰àá展¨‡}ÞÍiçSÔuâÄ­ÀúLôgѯï…^lý§á¿êG:<á¹à¶[ÓŽ—ÔQôM|¤ºìÙ`×úmÕ üêGQ}<…ï~Ñ®ŸöÉJÇéðþAp4ðó“Ð?­\ˆ_–'Ô-ô‚ª£—úç±gïɮ΄d¿·C7ûö¨û·|½:ü¾>ëª^â×~Þ¹¸>85ꊗòEÖsúNžîwÞ þ…SØ£¾õعà5­ÛÍÀ·ôx8ZýY|Zì¿UÃÓx*ýáú§ÙuåÿÝŽ#ÅQùD<—¾^ë¸äbàë:pØíöbøßüÏÑ'"Ácê |¤}¦·Ó¯õ•ÀQø%ñ¿òÁ¨ë7~Ÿn¬Þ‘ÏŃê3ÛëùJ½ûAØó‡¡'¾< <¹ßñ¾uó¥Ÿç<¼ykÒõŽ:3îõá,ð¸¢O¾[ŠxŠ×“¿îD}ÅNô1X§«oÿýüV|Ž>ù™]VÿÇrÛó΃ÂòîzÔWx4ýrò(ÿyø•þúwÿ oÓqéŸ~OVg܉º ï§¿æÝÐètû ç>ÞëvÆîÅ#:Âvè3ú Ôsìޏþôü0òÁ¢Ç£Q?©cë¼Lä5x’Nqm¸w©ãF|Fáõ¥Çù/¿¡O<þmù¶téáùŸÌzÆã‹×ü^“çØ?Õ§f=ð;üS^ó½x”ª7F}ßáº%ÿ†§ÕuÎ Ê_ú ô‘ÀÝ¥û‡î¼:€]àIÞ'è.êë:Wü}_ŸÆEõÒRÇùüS¼‘ï}>Ã>Ñ;õË=ˆø$ïÁ‘ú°OÌ»8&~¿òú/G=‹:§òÝJÏËOÕã®÷=÷º˜¾íÜþ. 
7Ãú*àñQøbÞó¦þ¥b}ªÏ{ø¾WÿÖqu¥#V~§›ÿ±ãEuü¡ÏïdÔsx|²õS×Zß»ßðÒúRä9ý@ô¼ÛµX|áƒàUÎÄúË+ðÿÃûÍ—û:³û§‘·Äõ ¿Çë{N:>Tçv&]{–â¿«Ou¯ïŸ„ÏÔ+Þïs6ò¬>‡êkž_¿¦zðiððøBu|àùðÊtgü »Þ žŠVŸÞ›®Ëàù=>ò‡Á{ë÷‘W<ϽÐKç ¾ûyðJêt~ž¯êp}‚ò‹s€ê/¼£ºDÒ/'ÙÏÒ'è§ä+~À~ÕÙx¼ü{< ^U¾Ñ¿á\¢xìyª?tÞãHö3Ú?ùRÜÁ¯±¯“ñ=Þëf<óžF÷(ûŒúÿóп¶‚Ñ_©~ÂÇÁ_§ãÁá:¸Á“±×Ïã Ù‰}wA½öÒóý1xÆI¿ìçZ ZžOô'Õ¹îàɪßúM¯3jƨóÉð¹|{#ôuµ:KÝ-;ÿªîµ§§êû{¼ßã?bw¥ÃÍ{]?ù“_ÃÇôrvªîdopnâ~Ÿ7ÝÐyl}n¯'‰u~r¯Ç}|ÝÕÀñâX#w¾JžpR¿×½à·îªŸIÿSž‡|<úú'ÏŠOöáö´ó‹ô)|.¼Ï?ëü͸ë-xpq‚¾+¿&ŽrÞÃz©oŽ;~¾ù•_ žÛ~Š£ø`ïñrX}í—ÂŽùÑÉÐ{Ôkprñ¶û]Ÿ·Nê[¸®¸´èõz¶úH–»nŒ·Ú‰:·æ(ôßOª¾÷u’Ä ý˜—å9­â]#¯è_r®Ï.þúÓßÓÕ=ò ^HåýàüèÀÞ›_ð× Á{ÀCâ—¹ 9oF߈¼ÁÞ¶Ã佚ŸrØuyF߬:Û¹¼$¼bùSåÇÐ?_.÷:hè¢ð›:ÜþàMœ…ïÅêÂçû=Ï©./E}EßÅËë ¨>§Ð­}Þàœ«#ù¹ïƒ×ô÷ðw~Pç'½Ö§¥ž«:aÒ?GüR×ácÄûSó•özÝ)¿°qþqðm¥¿,w|ƒ¿?«^û§ÓÀuN1ðÞ ¯#¿áݪqøýù¿vûTï[¿/R¯Œ<gèÓ)þtøÞ¿ ÿ`秃·«9eÃë»Ö¿¤þ|5üûJðçÅϪ»‡Ÿ¿õ¼¼mÅÓóß輄zv¾Öë/qîî¸Ç}¼Ü&ްû¤¿ oŸî„^x>¾GÿŽxIÔ¨Þ'ðuꨚ7íy¾p‘øõ¦óÃÕç}ØuìëQÏÐÙä~¯>cÇþŸAOúNØÝgëà ûøô°óÊæŠè·ñ¾·ÂÍâ»+ýä°óKò×vÄax׺Ðo¦¾3îqM†¸R|Òðsú[îF>ÅãÃ鷢γ¾â(=oõ(x!uÜÆè·¾O<Åû±Cx žý’¥—žéŸw¾šô|Rsi–{^¶B¢GË“ì_¿«>7çhÍ'”ðUÿ/wH¿}èýðsu„>ü%^øEäõseð¶úSç=þÕ™àc=Ç£ÀUð§s¦ækàøÉjèdò“÷ºõ=„¾Ùëu™}3¿¡Îé-w‡‡£'ÖùeúÐnçüùEà xQž¢7ÌB§ÇÕ¹Ëqß<Šó‚t$öø»Ã^oÓKÕ‘ô|~Oõü©@ý‡_Ù ÝùÓðç'ÔQË£îgÖ1Ï/ê‚çÕ»¥ÿŽúþè§ÂËЃñµÎ…Ñ!àªùAÿo¼ÊÑàÄ7óØ>ÝN}¥î·ÏQGûýÐåjžÍ´ï³z¾µ~âÓåðƒŠÛ=_ȃpðÓ¨7à‰;¡ÇÈ[agøû%žÓ9å‹:¿0íûZü#=\=6ïÏ_ïå3ý~Þy ç­_¦NuìüÛ_©CÕëÎg˯ßí:§|ôwäß;sðÿþo¢ÿàeäiqz't«íàÝñft:uÓ,xÅ'“ÎkWŸÉRÔi‹ÎŠkâ©úÆ9EqI]¡Þü4ò>QÝÏOá/úÞï;nj߰NÛwå[y o'{¿_yh¥û×Vðoü€Ÿª‹^ }ö¯¾ßýœÃYð^åvø ¿y]»ú÷õ÷»=ÛoqíVØ¥ý¼y@>³Þâ…þa<#{ƒoO).ësÂ3Ð œ#¹úÏ«¡¿Þ|bøŒ}ž¨æ-‚žv<¯ž†Ž‰§µžtWúú|ºð^Ï“âå¯Kew“À•÷Cÿ­9CónG¾Þ:üÿöžâ8œ†_U§É£¢þÇkÕyèYßßâ 'ý½¯ÂðNúTõ÷œç^ð;êpÏÇ/è#«QàÉôõë/-ž'òöCo‘׬Ë˨§ñ§âŒõ÷ÜU_À ÿÜyl8[<©óé»o¯àLqô«ÀI57n¯Û‹>7¸çhðŽâ7°ÿ-<܈¿ÿAÔ¹ìC>tÞ‘_Á5òM͹]t?Ä \þ~ðóU¿Œ»~+®©Ãk.ÖAÿ=ñÀ9*çð>¯ƒŸ°~ß <üEÔ›óÐoô¡}šzõr¯»édp“À‹æ¢©·ô׈«Î›ë{­sl£ŽàY8Í{‹'êɵˆ7ì®®T¯ÔÜüÍ¿^ÜçÞCŸ³~Ѩç9ºËÐÌÉWwðK¸. 
}€¾Ç¯óÜøµð—ÄÏ}ýñÇt:ü䓨wJ§;èñâIä9u/þþdð¦p:<¬(û¨æÓ¾ÿž.­y×Ãóß íaèH³àg†.²|çƒàÔUÿìw>Iÿšï¯óm»Ý~|Oõ[>¯{²¦ÿ¬{õ‚ß“÷Ä+y ιy¯æÎvôù<êÍièD¡ÖŸó^/Šâ¾w}³ÛÁ·ìD¼¬û,GÁ׎ú>o%nóé’=»Øç©×Ä…:ºÒóœ8(>©CÄ#|ÞW¾,\¶Üã¬uª:~¥?z P}G»}¿kš<°ÒñŸýªûPÞtý®”ä/ÜV_U¿ˆ¼Ust‚ŸßŒzy5òqÍŸž÷8RýŸâ ½køÞµðoñb-x|qQ¿rÍ«wýôòüíz–8Á/õ£‰gÅŸî÷Ï«þœàÿõ‹Ï‚ÑÏ÷,üvíoÔG¥[.zŸÃµ·C'VTŸ.5ò½õòÜø¼ÜËîᄅ{æ=O=¨î)I|<¼zO¾£7È‹¥WYsò#ÎYGû¤/˜]ÐÑåñ¨/ù' ÎV§ŠKwƒª>–à}¬/\-°ë:·;îñÐù7ê[ø"ìDŸàýä};ÿæûÍábOúá>ù=çÜÕy¿E?y~ô¾êøˆÂ{ð¸> ñ@œQˆ'ꕺ×}Þ÷ÈsžSœ¥'óûõÐÏvB‡f'[ÁoÔ9ÜàìëfÔ¾‡}Ýûxþl_.$¯yùQജǪ®);[t?Ç£8·K—©ù+}ýè ú£î¿²¼)ÿùî¨ëxwÂЇ: ¯ç¹kþÎnGuÎlÚëaëWóNCïxïyåzÔ§w#±³kÁ÷Â1ÎÀAu?Û¼× êÝôÛšºèøo繫ßâ°ëy9—Žñ~×¢N`?§‚?¾º¨zÃç]޼"ÞW»ßýh=ÖWü?u¯{é–ú:;gá4}ïfÔgüìfÔcÕ6ü»óéÕ‡¼èù½Ü þ¦æ~>rîIþ®þ8³øÂso„î„×À³ž=Bw%êþšëy®æ‚L;D§¾úøÃà /E¼ç§žN­¼6ü¹ú ûÀ= œ,ï߉ú®ººNó ®~›áß?‰üì܇øQóˆVºŽ ·ø¸ÃþŠ£wÂþÍ)4œ½nFýJ1§S\÷K,:/…÷„Ÿð,øñ­î“ ]®Àk^>×ó°ë­Ÿ±ú&½^K[ß‹¾™§ÁßÚ7øQÞ•/ðℾº­õiòIýþ‚óèöuöÛ‡gà#y ^áC^ _LÞ /ámu^ã¨Ï£ý&ò=ä ñ_Ká«Aç#¥³Ïú:Ü |.ûï{¡ëò±ñ+>IÝO÷6|ÃA牓?ô¸Wy]<öç­û“Ž"Þú:¯þ³>¾ž{;xº|#ol†n /U~‡¯G}ýÉê=Þ³zýÎ'¯¾P³Ð§&]ÿÎ{ågzÿ­Ðë<Ь?ÿÒü÷õ]÷yð%üVx'ôru¶ðêÝÐ{Å?º.üT÷{M;;ä-¸K_žª—;éãpº’ºˆ:7µÔãAõgt^y?âKÖ…Ü ý”îï½ÄGº/Ÿý t§êÓ¸Øó Â~Ö÷OØœ}X7Ãû¬w:gù/ó÷~ñ¦?ý¥Î%:ζު¿YäíÚÿ‹}¿lF~=üp!â¥?CÇ> ݤâÖBŸÿÃxŸòÙÃ7wkÎzÞ‡Çî…?(Ϙëy/Ö+J}0¥¯bõw….zIöeösåßÍŸS}4=€.Týú:õïpBñ’§Ð}Ä <å~à&ýdê>ŒQ;â"=êÉ»®ox_ëî­þ±^ê>ÃùϹ ê´†}ŸÃ¥Õ¯%ø3ÿÐ<ß‹ü^uŽðçÇ]ÇSÄ1ë§ôåIç+OçÏá<‡øN»>à^ðä•ô9;^©qˆõ)Z‡t9ëSÝ‚øÉ·.¿ã}èÞ¡—ò¡v¯~ŽÃ/–C7qÎH_)yåNø]pŽuŸ|5ëz Ýź=^žó£>Ð{èG?9ÝŽ¾§¾å0ô¥Û¡ëä8ÀÇÕwô´[õ3 =¨îvÜm¿®FÞÓ¯ínà 8˜.Pý=fý¹7Bo/¾}ÒñðíУäi¯¾øpÁ¨¯Ûª+: ]ò¨ãÈXGÇÏüÝûYÇÖ ž·g¿c¾ˆ}àÜ›óB/.už²y¥îõøY80üøýàÑôYëþ(ˆƒâ]Ñó> <]:~èž¹OŸDž­{7"¾Ü ?[=…ñ©{ß…ð¶¯ëçïú>‚Œ—u«^Ôø„?gÜ <$ÿdÐ}Õ§ÁW«i¡¯º×9t~ŸY½!ÿñvàÃÒ]Ç=n™çµð[J· =¿îm7¡ƒÞ |G©»¸ï]ç»ßvÝGž¬û©gýß«¯ãBÇ/æÿQä]¾Ãõgõø‹xfßàŸÅ+g}¼ê|þ¸ë`æOù“Ð[Åû”ŠíDÞ¢;ññ*^…Rçû߇n6éù@>¾úóN𯼷Â÷<ˆ|Ru>£Ð½Æ}¿g†Â¹á›úž‰ù\êï÷šÓsá#zëvÄ)8VýÓ…YÏ·y/ \®ŽE~7³.ï¢ßáýåë¼í|èIèÁÇ¡X§ÖÕóIè!“þ}t18›Îu: ݇žx'p‚|ù xõíÈËOÇ}¿—:éñξ´ðªê×ò6ôƒÓÀóïÓ§ûYøçÖKõ9éºUéHãîGšgçEà’â… 
Gë¯kßÖ½cÁƒê÷GÓêœÞ|Üó>ïSõ3¡§¨¿6^å×.ö÷PÏ$ª;žöõ¤Þïá7ì‡îkÜêÞ§¥oè5|ŠÿR_0îïQ÷ÙŽz¼<Ž<¯þ\ÜÕ¯ÀþÃOÆëðC}ïåA¸Óþ˜„ƒ·ó³7«>3ƒˆ³¾~ÅÛê+~LÕKD<Ǫoõ¸ã|ëözø$©·Õ}Ù‹­Sq¯«ú¦·ˆC[á·dÿS:ˆ÷wÎH]¾B'º:ýQøñú1Àyx;¾%ÓS«þ”¯9èx‚Gw¨óOo{žÌ>ÇÆÏú¬:•ùsª]½{3p+\P},O;^+¾rÚŸ[ü4ß÷’‡L»ßú ðèAè@O‚ß‹÷ð“û^̯“>wtR?¯¾…ÎQý×ç¿ïÜŒq_ ¼Æ)|§›Á?ü|õ«FŽ÷÷uÐRÇÇxáõàsêi«/é´ëƒ·/{ŽÒW=àÑuÞb©ÇGç.Ïκ>^¼%x}ù]£>t{ßS¾Å ïǪÿˆøƒ> ¾‚oä}gê7ªþqþ\/Ìßÿ]ÏtÂýðMªŸÛi×OêÅ¥>pÖZð[øF}RÕ…ÞKÏV×…ïÁéòBÕAŸö¸Wýf7©sàq8§ÎÓ߇è«áÓ]œõõj?Oöy”_î‡(NU]uèKòî¹ovØñÌýÀ/ƳxϨc}øcÖ1~a¾ª.ú¤ódë±öÁ¸óõ—Ç‘«Ÿð óÖåЂ_g¾¬>“þ}umè†Å&}=Ôý\ï{œ2îö ævà çkï…NT}¼Fñíáë.VWVç”çãñã96§m.'¶Ão–¯ê¾—…ÎãnG<‡Ë„-®XÏö#Þ*¾Øw·c\"Óqà :sõOä‡n¸º‰:ú´|»|Åy¯íðqñ¡<·¹úCùÄ¡ËÐãAꃮë:— wÇúr^ïÅozü³_®„_˜÷^â·ƒš?uuïVèÃö)^A§ÌûAÕåÔ=ðïºn!Ÿ®ίþ ï{¦Ï𵮄¿±ñÊsòK?éþÐ^àõg¡¯Ë#òåÏ'¡¿ôøÎßu^õqèmÖSõöõ_}âÞwžÉ—®¾dï:žªúvûï]Ä÷Yߟ{áÏî¾/ýä¨kÕi…Îkž¬wŸ/.W?×ðÁÄsÚ»—ŒÇÙÀáy_­y©ûÄäÛð¹0Þµû¼Î]M;)ž¾Ð}´ò}çã¾ø1ïYVRõ…Ãþïî}¼xívìxÞ÷ÒMœ§ÀËé•uŽeÒñé˜ìß*¾©OÄSkú´î#ùeøçêïo?¿8ðøäüÊ>Þ œÊª{ž{~¬ûkÇOýÐ1á«à·Bï¾”|ÏO¯sDãþ{u/Jäýªšõù7öC÷ÞNž<ù¯pþ´óï'Î>ÚsË3uŽ2öµúœÝÀS|ßòIf=ü*x¼Qý¢Ãg¨8rÚÇÓ<ª ´ß6b^à¢òõÅçÐÓÅûþ®ûçwîýNèqÖ‰:R8©úŽu<ö(æŸqŽÆº¸z/~æy*þ ƒo­~&ó÷<7ëñ¾¥wÒ¿‹ßN»ïQç±ß†ø¥î›ô÷Ö?§úúÜÀ›å˽ï?¿¸›îaýð¿éÞö\qøÏ¸_œý åmóàßÅWuiô-øYú£ðêüÇI×­ÔÇà=Ö¼Â{æÁ½.|!8y/ò|ÕÿÏöõä篫>¦¡çã-ÎaLæ¼Ë=£ô¬¼§^îŒükØ?üÎ+£ã€‘_ ŸŒûs™ÿªƒŸƒÃø}â'½øqèƒÎïWŸ¸tÔ×?æVê/K¿] Ÿ£îgâSÎúç¯êOó®ëÂU_r>ïbÏ O‚Ï;H?Ã;ùQpÛÓØ·ö¿¾yU6êûŸ£ëÏóü}ç?ôy:jÝW{ÔãŽüSüó¤ïCü¥î—¢_ͺ¯€—áü"õKÕWñ}Ï?âuê¿uRêKƒÎ‡ø—t6yh-â«sGÎMòÅ/øD:Š}\u†£wÕàepœxËoq^I|Ú]þVį‡ÏÕYÇôðX'|ûº)¾ù¶ãݺ¯vÔñërøœxëVð ú‹ù¶îw?Š;{¡ÓÊwø²õJÿ…ÔýÒƒÖ‚ŸÖ=Jï{þÛLÞ<îq¿pmà@ã­~'ü ƒÀóÎeÒ9Í»:±ýà_º£Îq=ž“. 
Ïȇð÷JŒ7ÿÁ¹RønrÚqK·?íú|û,öSõe=éyƒl¿Á©B·ýþ ë;uîmÏ3>_ÞÝý–Ç× -u^]8ãmŸñüZèâ{ák”OÀ×™ÿüzøH|}š­çâåóõQ}F&}üàFú‰xŽª[Ù8fóѬëÊúåó˜ß…A÷ßÅÿê[²Ðó¢ù5^uŽé]7"NÛòZÝ zÔã®ú•ôÅüݺǻéVpånøa|y熪¿Þû¾Nê\BøÕÇ,ô£:¿ø¾Îí;¿Ç{äªK˜†®3éã">Ô9Ý·=ŠëðÕ•Àëâ¦}Q:bøÂâ•õu;||ŸËPç[}ÎN;¾)}î] ?!¾§î—÷÷-þáýçÏã<énà1ñ¤Î_Eœ5®…Wgýçé>7ûø{ÿåà¾çŰ¯Ÿì™¾`ÕÅ;¨º˜ù¾ÒæŸ"TŸÀø»÷*\¼„N.žÃ Õ_o¡óÞ[çvÂǨ~Sï;“Çôu­sX㎿ªOì°ã[z¥î1 ŸN»|¿t“q‡ò÷Ãàóðfã¨ÿûZøÕy1øÒRãxZõ?ô÷“ßj¼èø£®ã>_Û¼yßê×F‡àÍÿûsypÒu†;Á£ÅëGá£à{O‚¿â¿ÃGÁô¯s¦ã®çÀâ ýÞ¹ºžYçoÞuß´îå™t¼¼ñžÞp6t¯ê34ì:2½0û´9G(¾ñÛìG8d'æ…Ïp+ò­uâ­¸SýtÆ=>ä= YW¥¿b÷™~ÕÓ¾T/®ŸÜS÷ ¿íëÈ:ÑÇãVø ø;»¼Çyð[‘‡«žsØã±>0uOÕÛžoªîí´ÿ^õÁ›?§sB—#óïS¿€Çé;GÓÅ+üo;tÀ«±îÅx\\Ç·#ŸÞŠýtqPýŒº1¾6=ÎzÐ쇃®Y×uoãI×êþˆIWÆN™¼ëëëIà¦ês:ŸÑ¤óGñ®êÀŽú~.}büvØõ|}pàºtõ8 ½~ÐõÌÛ×àPÏß?þžxY~³.«OÂBÏ;…býÔ½‚ÓЯ{©¾ÕÓàÇ‹}WÝåü9†ßǯzŸÐ»×rßœö÷­yŸu>À UŸ?_7B¯úÞ£>îyß`{öïçÊ_t~~wõKöñQ?Y÷Eĺ¦TÅîúw8i~%ÝJ>­¾t§¯Á7Fý=Å+úkw_èqºÖû¨ïóÇWVGaY'âUÕ5v~ïù­Ó½Ð¿èêÖ#Ý Î=Î_ó—<]Ý„:ŸS÷ O:~š,>šö|‚Ï×=6G=f}"¾÷Âß…wùt'¾\釳þû·"_Úò:ÏO—ž>Nõg^è¾pÝ#ÈW>‰õø>ö+ÿ2ô'êÃè}ìï'¡'.‡ßnTïû¾¯«/¤ë‰U¯:÷›Ô¹ž¥Îûäûìw|#×éûŽKÔ¥éóâóü:ܱ¾¦ztúâƒÈkxÓnø¸Õ—u)t¬a×ééYâ%½´úåL;ŽÁ/}ÏãÈ'•ïûs.¾Ìúz ]+uYqÃԭ»_å(üùº§aØ×_æg¼TÝœýG­¾&täI×Çê\Ùû>oòLñiÏ/Â×ÜŠ?~²þàfç Ÿ†Þ¿úýéjðouD©ßg_Mqß=ëyÏ0\•uãGÁÇðÙ'Á é˜â—ºê÷-ïNBxÛu,û/ÖgßTvû\~²þÕ;ÊgׇÓÿêþŸ÷ÖìêO½ÐßÛøð±ðá:G5éûõNäK:®:›:×µØqÆàÕu 㤺7sÚõ'ºèäãþß«¯ô¨Ç™ºÏþ¤ï[:\Hϯ{Ç#/zÎê“xªÎ‘vœ~7â/¼ù_âû¨ãï:wº!ßÿ¯úÝqσޣtÛw‘ßöyô§8_³>^[ë>pDéÁ¡ÃûþÒCÿ—Ǫ¾9ðÔåðÉï„^m?ÐMépêb¯ç7á0uþxàãÐñÊ_D^xßuZûY½µýbßfÿ‘£ð{è.åGÞ«þCóßÓGá8p—q?èÕù¤ÿ<ãùIÇ1tvëêYèJUç:M郮תW¸ºó8qði“{á›ÐÅè ž—¤Þɽ"þÕýrïCGuAݺx)ïV\ þëïî•/t>>“Ïúú­ºÏùü«O½ú=]KÞÆ;ðpz¼v¾9êã)ŸÝxVùç]þ½ä‰óÏùUøÖ+OÅùÉB× áðºæmÿ|yÕçÑQéµOïrn­ÎËžtÝ¡ú »Þ)¾Gõ÷Õ×p©çõ:ï³ñt¡ï¿«á#©”—Ä}uGuFè[å#‡._¼â¨ë?Õïk©ëRê“÷fýþ­ÀO;¡ VÝéü¿ŸuŠ/Ÿ}×qèùˆânÕUŸôñ§Ü‹ß;Š8±¼<¿WÞ7.ßÜ ]ÕüÊü"ùWÿ”­ð9–#NÑ#«ï…¸;ë|_ÇËÄix‚^õ9“>~âfú;ü3ëØ¹&ñª¾6“þ}øÒaèøôíÐê<Ú»XW¡_«—ö\¥+¿íñŽþG®­z…QßG›‘wô‹ª~ô¢qÿœÂÃð{†ýso…θ¼á(üGç|=·ó†—Cÿàsób¬ û˜þPýߦ}îÆz.ÿ÷]÷ɬ‡½XgÁwC¿¯ó›áwØðXõ±\ê~<í~,¸ÆxŸªã<íëûhúa?ï8p¯ŸSwQ}L÷ɇÆî…ïè¶p‡ñ©º˜£>^xn×w~äùøquÞvÐãšø°>žX<ô¨¯'ç–ÄqûÎø—ÿzÚu;õÓxüjð5y—~Ҩþ|GO„Žsë»æsþ9êFÕ3Xwëá³=Œ|D—Si?à3ÿ0èû’¾,N^‹¼|1ò¥óö#äSTʸóÀ;¡cí‡.KGw~‰_¶8ñ(tºíxótú‹¿F¼©ûSF}|üwûS=#Žn©…ÿ^ýÁGTÝÌ Þåð³ì 
Ï;™u6§=®ÑÃͧ~Rp¯yàûÃÅúÒÓ…n V¿™àñƱú͇?‹×N–z¼|:‡:‘òQç?¯î§úƒ »Nf8§&¿‰/Æ!ûé‹Ëð|ÕÛûúξt·ÂW¡ < = /­ûˆ‡}ÈËÆAž¡£™ç:Gð¾û…Ö¼åß«.ô¤¯ÓÏÝoùJü^jÜëÜäQ×ù4êÓÄú¢'yNþ|‡ þ„×Õ}p“¾¾«_iøÜ…?gÝ¿÷ûÖKõ³w]¸úŸÎº^m}X?ÒÛy½×ùć‘_ë÷bnŸºüýÅï{\Ø<}ú§xuøŸ/þ¾½nÇʳ¾G:'½Œ€§.N|¼ø(õÏÅ>ot/óKW ãy^ñ±ü>¼{Ð÷_©êV=?ÚêÏÍ£}ÿ âE݇<÷ùB£t±:÷6 ];òý•ŽäýÄÏ›‘_ࢊ‡‘ï*þžô}L÷¡óá ö)ÜN«y?éëŸþ„–žzÒ÷[âL<ñiàýêrÒãxéˆo»®|>p×Jø§Uÿ}ï9êëªú¸œôç«sÊo;ï­óŸ¡£ˆ[±Ïn…uû¨ôÀ·}ÜÄ?ø ¿ß e?|[û¬úû~½¾5>vÐ}•ÿøŸÿ>üǵn«®T›ôq¼xÔ}-ãð…¯†oKߤ‰'ú\Xg?ô±ž.ÇûÀ+¡#V¿ÌY×uàrz›¼åó½ÿ'¡?:‡·9¯ê>z|`#âsÝ+>íûÛ~Ù ÝÇø‰kžÛ~3×"þ|:èÏGçÇG|§÷Ò ë~¥iÿýOC¯4¾x=Àþ©~ÚóçúçA—ª3šÎߺ>hÞô™óÞ6èïiÝàÝð›ñÂ;Í“¾ŸO:ÎU7þeŒÿ\wúái|Ž''ñŸñÛ¡×ÒËè®òÏ烾.¼|À¯ðóó Qï _þyà þÿ8Ö)œBÏðwý»íƒµx?¸ÿ‚o?Š|+Ñ…èqêòÿƒ>Ž>w5|XuI_„ß!Ž8O ç^ˆ8ÿ÷Á{ò}å1ëRݹ81>ãoðÎê‹8îëÉ|—o>æýÀûüð—K=nÖ9µùÏ;oJç´ÿ¼~ë¹î‡Ÿ%5¯ò=¾K—Ƈøxô üˆ®{8ºôöi×;¨ºûùs97º>¶ux<ä~à2ã°þ“~Wø˜üûEè‡C?ÏÔTŸ†ùç={×u‡/—Uÿ…£¾Ÿë÷"/‰ ö¼À—½~1¿ˆ>~-òé³àWÖßíÐ{?ºzªuP}ŸéÒ§=ÎÓ5àú˜::Né ]W¥c©£·OÄÓãÀ+ÏCï[ê~£ß¿¸þù´ãüTý~iž™„n°¾RÕc½íqk?ü&z€÷Ð{=xdú4·C ÃÂÉUß;ìñ”^b==[ìxc/ô´ÒmçëUŸ¸¶î1÷¡ó[œ/À7·ÃÁ#׃ï•þ&®L:?1þúT<‹ï7/â”>[õ=K}½À_ÛÁgäuñ¿·žÄËÒßõy¯ó0oû|¬Eü¬ó×[ùHg&ý¹¬K¼µ¿ª¿ÃüOõÔž«Î}ŒÃ÷}P8cÚ÷ ýÔy$:\ñØð-Õ7¬†þ[}œ;®ªs|ƒŽC­—:ç=ëz'~š÷kˆ#tHûàÓÐK†?ãU}Ðø7ïn°ïÅQxœ¿­þ>´àÏÇ~yñ½ŽÄɺ·bØ÷fœèX[áÓË Õf¾~Ü·¼z3Ò8á•âívÄã¡CV]Yà=놟PqkØy÷ZèëÕïšyòa_Z<[‰?ñïñ$ôo|•žnÓ-ð/xì ô$xɹ—qø@õ^ÓŽëñnñ°ðäRçŸu~`Ôóýh<êÞ²…>âïqøÀöûVèaë1žÕ_:Æ´ãZü.‘ïỿ ½{-ühüT¼©~£¾?ñ8ùàv×ÊÀg¶B‡¶ÍOžg] ¾C„ƒñKÏmßà_uÏÈiŸOþ q­ûF]¯”?Ê÷ôõÃ'¶~6"/TŸÿÐëÓ·•7~ã!/ÁÛÖ³yÑ¿L>§GëÇnÞÍWÞ+.©7ß꾤A_¯t»õˆ»±ÿÌ#_ þ7âøhÐóœõÍ7Yc÷Â/¦3Ö:öýó(x«xmàÍé«ðù¼êëðwã‹—ó¡ð;¾:BëÑï[â—y•OÕ'ÑäéÒ©à-~ô¬ûµÆ=Ïfý ãó —Ÿö¼Ì_U/ê÷=õ#t ë‹,_mÏ“ª_îIŸ?GŸ°ï¼÷¯"ï—^2Þ½Ôãÿô øëíÈ_ð£z¸þ]¼u”ëtÖãàvÌ·¸§Þ™~g«/ÀÛÍ£ý'‰C›gK÷|ßùÏVø¨·‚ã[p¹ýïÑ#¶#þªÈûÛè7¿”'è{Öß¿â!Ãþ<â¨ïuîˆþ’çŒäC¸BßKñ|;pWõšuœÈÇ¡3Â{un|Ø}Íëáãì…ï³ëCœ—ßünÆGƒ/¨,?zØŸ·úÿœv|nþå~µûÕ¿ ŽX}½»Îf¿×>t_g-t<ßãyñÿµˆÓëáãì….O„ó=ïvð½ÍÈÃu.&âºyÿëÚ°ç>\å—À âÚ•À%ø\(¾Öýl'Ò«éapƒ{åËÁëÅÁ½X{·KO˜®¾Þâü³AŸçíðª/˜øÏÎúß«.ñ´Ïé Á³ðS|‘otëÖ9›³.~oäo> žf=Õ}ž1ïÅc¿ùý¬÷ïÆEüÚ œ¿>ÿ¦êv—úçÜ ýË=õê2Äiñf7ô"8ƒ'ž‹Ó⊼W~õ¬ÇM¸½úósB'T?á½Õíß³>Kω}œõ…uдóaÏ«ûÙXô„Ú×ã¾è|£½ØÇ;á3—Þ6ëqÛ¸Øçꬿ3èù³îû˜u~k_ˆ/¾ç̨ïÛªƒš}x|/o3~â"ßs+ôêÊ«óÏùkñ|ÒÇ£îQŽý^[„3/u 
æÕzR¯·4ÿó“I÷;Ì«>"pç_z>¡§VŸ­ðIwBOUϵzÁåÀýôÙ­Àcÿâû#.‰Ãô+çháG¸çYÄïeÿ‰ÇpIÞ#©‡x§­ø<ɼˆ‹…çC^ _Ìï턲qî_uþƒà‘xÜràƒò£f}üwÃ'ÁËà@x¹ø÷¨ûo׃wþ,t€ÑŸÐÕKïŒx†Ãå놿ïˆU·?éûE>^x›÷Ží úøY×êÂÍŸyú*ø¤:u¼êï‚o=}S÷‡ïÅßÄÀ[Ûák\ ß €àÖò ÃO³ÞîG<Ú ~jßT?Ïù÷8·µ¼MÝ´x×Ý ¾s&ðàêŸÐéK¯Džô|j\ð˜ÍÔc'Á?~o'âÐføo;ñ½å+Œ:NÉó’ÕoO8ù0¾Þ e+ôë†OC'ìgÏçùéÕ`Ø×:[ûZü—è¡âV_ŒwÝ2ézz[uO×÷®÷6c_¯G¾ýóÈ«xuÕ*¼ëKþÓÇÂ<©W7ŽêjćÇáÁ'«‘ç½_ýxÛ<[>_‹ºrõ~Ϻ©ºÌùïë{ îWßÀ÷Ö}Ï›ÿ»ºü ¡÷©ã¡GˆsÞk?ü øébð˺ß|¡¯o8¡t«Y×wÂ'¹ã¼8®4oãAÇKø”|#_U_r8dÐßw;ðdÕGÍúó¯¾65ÓΓW'Ëü_Æ|\8¦o×OCÜ ÿŽŽöEà4:ßË»oœ ¸Öß ?÷ÙÔ›ÎK¿ =c-ðuõ‹}½ºQõ5Ïx=è:ÐFè9ü_þ£üEôüøƒº|ÌïÁ]çÂ/”ßð³qè[ê_ñ{8x3ô/ñA§ä•ÍÀê[ z\U/šãr)ðšù²Žäç™íïÇ¡_ ýj=ôοý½úÚŽ;ö¾ô—­àËæÙçšW:Ý1Ï÷Ó‡êžÛQ_‡ÆåjàAuöÅGS§|/|;uÚxm¿?—s£Áï­»‡áç©×¨º•q_7ÕçdÒó|ÞŸW÷ƒ,t~…¿ûû^ð 8î§Gô?åg‰{oÞˆü¶ûÕs«/üUø»òº}s-üúµÀ©»áOYß}a=g(]Nð|Ž‹Ï‡/÷Z§ôQu7cßÞŠyqžÁ{¬Ä¾–ç–§ÛçâŸýºú¯žÍsOz*s9>×çTÅ´ççêc7íëu5t'¸×¾3^ž—¿UýXÇ=ßÉCÎ!®†ƒè9û±Ì×Nè{î¬{#gÔ½‡OT}É^ðM¸K¾z1ž¯Ǭ‡n·¿?=Âßí+ü¾¯{Ì#È×Cw³®öBG³¾Ìß ~Š7:Çu)òðJ|séxƒïÁ+&ኻKÁ¿S_Pw§ ƒO÷ÎÙúù\|U^¯­“ýˆc·ãç×èûz†]GÇÃ÷ãÀ_øßAäQ¸Û8Â_>®£Ý×§_ú…ÐU埛õÎgÃ/‘õAÞ~^}ƒ=>8ŸCoÚ ~)_Šçø‡}/ø_Ü{ú;=u#ðä…ðiùEÆç“Ð*κ~t>ôÕÝÕþ‘༿}Z~ÄÎEá¨Û7|oõ‡[èë­î%št¼.àIÖ-¼ðiè9òaõC˜t}ÇsÒ¿ÄøJœõ}û‘äÕõÐÏá-ëŠnv>|¦Ë¡Ç¬„߯&ïôq2>Ö_õ§v\V÷&ñÅ:¾”?ŒköíߎqW­Cy×8;·WyjÖדýfüÝ× ®Æ9>}t웃'üÜú_‹ur>t%ùW}Võ=w| ç©çº|@ü®û4û<Òѯ>Y~XwÀÇWb|ÕѧgóRýoÇ]ß?œ÷¿8¨îŠõ'>mÅxlg>õ}P8~Öù)?‰^c__ ^-ÏЭÅz¸ûëBì—êG=îñÏÏ‹S΃[OôQãÿbÒ×AÝ«<éóZu·¡£Þ‰ýK7ÆSÎ…¿ ·Á#öþÌ·Pßx!øŸõoOÞÄó{üà›wë`-pò£ð9ÕmÀou®#plÅíY_:¥¼×é/aý¨?x~Ú×Ínð Ðc䥺‡qÚyžüh\̇syŸ-}âÅB×åªJìCxÛypÏ)þ?>#O›·¯Âg1îOOû><<Îþs>ž±~«>ö}çŸÇþù§ðÏ÷"Þ‹/×—މ¿°o«þ¿˜õ¼àóí¯•XOk¡U_µQÿýê‡ùÚ>€'á:|w5|`ñîR¯q6÷å»Îw&¿ øÞ¼ÃSãøžÃðYèvâò£Yè áO?:ëÙà­pÝÃÀM[1»áë>Šýì÷à18‚^dUߊQ×ç䣿 že<Å'ã‡gV}˰¯Ç[‘¿è îÿÒßÿ‚‡èTpÏÃø;>aîÞ7Nâýi5ž¿î‘ÿá回'ʯôü^uÐòû¤ë"×CçT'Õ9ôÀyú)ÐW·ÿ2xÐzìS¸/Û ßo®¾»'}~Å'ñHÇ­àÿ7ƒWðÅ‘ºïj¡ïýØïË7¬†¾'É Î©ÒuŸÆç­‡Ž›¾ãqâàù¿;Çiÿ¸×F= ¼¸¼®Î·Ìÿ|ø¶ÿ8ò0=Ã8òMðæ»áã.…?R}çF=ßË3ηežuÜðËÿ­Ø?ò¸~ô}ïá]8~HŸ¦ú”.uÜc_ÑÓ¬'qö0ò÷J¬ÿ»ãþþêžj=ÍÿûßïI×5èÖÿ…Ï'ŸóÑå׃ðoé–¥#úçm†Þ&ÎyÏô]«oaðëÿ(tì͇â'}~möƒ¯¸9럛uUwq¼5|¯Ð¹–ƒ/Òõ›L_Šî¦ðAàªK¡c¼á‹ÕØoÅë‡ïÁùë±Î¶B¿xs#òþ[õ‡æå´Ç9üK¿úåaä·<·¾¾€ù¼z*]n3òü½äSÓÐ1Æ}?n„þ'Ïáá—#_TÏAÇœ»|¼ØÇÃ:¶ª?]9ðövèVÖÕ'9^‡ƒ—?~º<á=K§ÿMÇ÷{?ñk-øc 
žôõü¯¡Н{Wëþóùû鋹:–s}Æ®³/ý»õ$ê‡fú­ïÛßÜøÙ§·B ßïE•'3Þ”žxÒ}cqôJèû{¡“í„®¥ÿÌVÄkë…gf?³ÝÔ3çŸóóð}7ÂïÙŠý'~ÀƒtÊ[¡óŠƒâÑÅÐ>ü¹þ Ü'ãþt~g#ò©Ÿ¯:µYõ=U÷<é|K\aþ,ãѨïë\ÝÎvàÂ?£íǪOœt~È/蘫á¯Ã{ô‡Þ o›q³êœîQÇÿÆËüÒKv"ïzøúÕàÃûõjüÞAè@öÓíБÂÿ\üi¼ÄE<þ/Ówχ|Ï/Š?gƒGÖ}¼'Ýägó=¬+プß³¼Á{§Õˆ;æñGñ~uOä¨ç!¸í ü1ý/¯„Ÿi| gÍÿÝy7xY>xøÿRû`ØqüFü¼s;7C”Çé~•Ïg]Ÿô|÷c?Ò!à(y´î©™Î_D>] ¼´¼Ô=DU1辑}bõ)´žŠ…$ܽ·xjä1Ÿ»À÷>íqe-âéz೬÷¼¸Ñûêó%îˆWÕï{Øã®}¹zárèmÙÇÍ9tñw%žçnÄáÝðø¦tö+‘?ëÜnð:O ^¾ïüäNÄ)|㇃·«¿Ç¸óiûw9Þçåw:n´/ÖãçàïÅ_7Nxƒs쟿á9ÿ¾z4<]ç´ú8XO5.ðäÛίª/æQøWÃë¾ÂÓþ¼B—·Å_ü[@º¤óâòÒ û [á¿m‡øbÔq ÷ÀcV[Çêví_8v¼U\PúÓÀŸ†Ž+/ì†Þ(ÏWábßpœç÷ÃG£?V¿)¾ãüy…/M/è+p’þæþ k˜¿:‡3êz!Þ³xA¼oÅ媼‚ÏÓKË¿|ßymÖ‘xïê{~_Õûº–WÔnϰ¯ùWx0]Üý€ãðg¬ÛW‹}\¬û‰þ(Nì„or/tquÉ»õ ÀcÄùèZð`yZ_ª ¡{­…ßË¿Yü$Ž<öñ«:Õa÷íä1õvÕß9ÖÁÝÀ‰p ͺ–á¿Gß§¯‹×þ„îÁG9>ŠøàóùJê¹öC÷‚ã÷ãç­Oqÿ^‹¸t˜ï»ÐßC<= ßêrÄA¾éJä“ ¬ÿêG|Ú×ËVèðÆÁ>Üu!Ÿª T÷d]YŸôDõ£ž×>ýIà÷6š¯'áGòÓèúùÉéCË'[á£XOÕÇè}÷‘ä•›¬ï áÀæÍsȯëá/z.ïA/߬Sx½ÎkÍ:÷óÃAÿ÷ЇWBöyô¦ÃÈÃY÷œ}»à2ú^õ…ÿŠ÷Àóï«{WO{üÞ Ÿ5ë«Í°ãÂ<ߥŽ”ïÌŸsÏú¥'}èZpþW÷@Ï´/ôMü·Ð3ªïBàþ—õ÷â=žüUðWñ_Þã‡>ˆx®¿_ûuÜ'¿íE>q/ýjU¿ŠaÇoËá#/Çü¹ß‚®¦Î€žËw¬{]c_ÒéÔ=ˆu²ñîgáãæ9@ëß:(?xxf±ãX|ˆîÄ¿„óà9~­çT?l<á¿ï~°ßñ3¾¥þ[òÎ÷:7 Þ¼gº¦¼¨{’í[óoª#ú»ð=‹¼íùØú¬:ÖQÇ3unaÐñ³>â=^}%âKWà?üsøf÷#/Õ:õ“þùò’zªoŠ|D?´ož…N-ïÒÕã-u/Å´¯ómý«¤GÖ}:“à…Á¶#ÏÁQâ•øõ‹àåðÝIݹüQ÷î û:<ž/nd]ÀQè»ÕGó]÷É¿šDÜ*|”ó¸Ô} øD›<èëιz¨8Á_Ãc¬»ì‡¿ªK8 ~ïþoyFýPÞnþœ»‚³èlxßAúf¡—¾xÒñxéBã¾ß«oÃBSÖ¼¸¾ƒs™pž:uQò¿w5|¦ê[?í8Þ„7nϺ>¾yi7|C:4<ÒÏ·B÷.þzÒŸÓyþý5ÏyÁ-wCoãsÂ;üÏ웤Ž^þÖ—R_ uô÷C§Íû¡o‡>°º^ö1+~Üó„u@§pE~­sYÓî«gûÔú¿š¾äü¿»‡D^?ó™|Ì=uÖÍ8tOyC\oÔSüCðYãþýÀWtËÝØG·CO¦Ç†¯·ã‘÷¾ªÛŸœöuX}ðf=þ ÆMþroV=ç°ûbð~û,ü$x¬î×ú¥—ò'Õ!8רߥ}ZçýƒÏV?«aø×ãŽ_ê>³a÷—#óu¬ÛO&]gòüx&\Ú?òõqðÐ ¡Û‡þƒ?ð…ÔQñU'K¡ïM»Na?¿8êøo/ök[œôŸ«>®³®zÞ³¡»”o9ÿ»~|‹­ðùèax»üIo˸x5t½ÀËòé¥àqOcœJ_ ýR~°¾íg¸ >Üœ|?–¾wÒ×û/àwz~Á9ó"¾Zgg}·ýÐY„?½|…>ÍçpÞ~*¼þDÖxÞêk6 ]pÔõ4üîÂkÄ•{‰'O>GÏÄzçCË‹“…®?:PýÈÃW½ãêø×Îìþ6߯úõ\]E>{>tÕÛ º?¨Ÿµ:%þ<óõ¸ëÖ³{™ä-ûŸ¬úðAÏ;±.ù$~îFøªî7½>Ævøg>^PGï=n¾w/°}w>p”ýj]ãgÕ×`±¯«›ñýžÇxëkS|mþ÷ô}//ò–ƒ/‰ÛþÝxÜ K}¯}·ÿ¾¸Jü¼ùF}ýføV|÷ìSøô}'¾ßç‰ ôUøßïQèß|^õ^t+ùYŸ¿•à³úæÃ«gÒOô}i¨?6>úÉšwûO}¬{˯[ê|»ú5Æú§<÷AèZöÿ>¡ãmÆçÁ§ú«×ýדžªïî ó x½úQMû¼º—¼C¯4Ÿü~Gõ 
=é8­âѸëM[Áßðøsá©wâË'€cñìÀ]Ç¡3ëŸû$>.ò0xÒõxëòVļ®úì®{Ázœ¡“®XÏ =à7Ã7À‡è"Ïg¡¿œôùÄì¯OÂϰîè¸ô#úFù& =NW¼LÜêçÇ]§Þa;ü§óÚôZ:EùˆƒîÃÁOù=Ãà[|“ÕÀIÖ1~l~åkñ•þkà™•Àÿ¥Nz\TX÷ „¾ë=ïç_Ò~ò÷‹Ákø‡úªùïòî üïìCní„κ8~Á¯{tž‡¢A½Œ¾o¿ècI„W×CÏÊóá|Q¶tÓA×so„ºº3œUuæ‡n‡Ž(×Ý^eÿÖ=yÓ¾®å8Õ~³_ñ ûÔ>”ß.„_OgðûWç©ÛõÞâUñÀЭ·B÷Ã+àañŸgüÔ-È+ö¿ó8ðÐvê'ã–‚ßÓ%³þXÞ:üº)ßb5ðÒõð%áKuð†ýé÷ê‹¾Ä ô_~óJÄÙõˆÇôþ~±ºdöoÿø)úZ©ƒªîo¡ÏSÕAŒ»oôr©ï·ËÁK7Â/¡<Þ˜÷Çxõ*tú0ÿq7|cy^>ó½üuq[þç̧:9÷>Õ=jóçÖèqÄ?õètiup¡ñƒ¯Ë9ºrùᓾoí£ª˜õ¼”}Vï…oUºâ´¿7¼pþçFø3êt<¯ü¨ŽÆ|×=£åùìRÌ[Ý¿yÒç¿ÀÛùòZÝ;:èÿ.~Лòó÷ôA·ÿÌŸýõyä«bœ6â=ÖB¸þíaè!u®jÚó³¸i\ïÆ÷܉}%ÞãÑÞ—#Ò]àTqÐ:~ñüjèÍø½8é}ù…âßzø„⯾Lp3}YØê9îóiþMðZëKÝßf3æM=ÆzàøqÄ×óñ~ô"õŒ“ïuþTqÞzЏïçðxÞ÷{îÌ£›á³ÃEUïz!=@¾½ö'æ«êõf]ß—á9ºœºúG¡oÔ=Ä£®gŠ| ýüÄï}6üÞºgþsêÐéÅò„¼€§:Gí\†sLÆ_ýÿNø‚uîlÖý!qF_”:?ñ¦óñU^­>´']GŸ†Þy7|UñN†+Ÿ-vý¼îA¡ß„%_> ü´qæ^èÞuyÖu3ù”_²øŒNZ÷ËŽ{œT/W}|^ô£ð×Í3̼n…~Ï]Þð0âç¹Ð¹_¼éüL=—ïsï²;õõk]< ®þ–'_¦Ž¾þ†üRøÏ:_êøÃï]Šuæ¹³?p#¥+½ë¼ƒŽ§>Êý×Uÿ6 }(Æ™^èýî…¾³–xömß×Õ'ôûÔw*®º¿½óy&â„:Ië.{0éyÄ{ñ±ò~N8LÁÇìSï«/zÕÝÎÇ[ û꾘ùç;ïWu1~;¡o~>½ÀEøAÆe¾ ýź•g'‘Wà_øôJäûp-òFõE™t_l5ô8e#xžWý"'ýsoDžñçÅÀA…'Còóu¿ýb÷‹ä_ã•uZâ²|k}Õ¹¬I_oòœ4Žç÷÷£ÀËú^ØWêçô“뾋iÇíUOK÷{×ýõ þܾ6Žýs!ülx^÷Ñâ¡õõBç·ÿ?ßê ôíºgxÜ÷WöÁŠÿð|¥{†_s+p;_Ð>ò9‡±oË—ˆøÌ߬ý4êñTœ·ðÎO‚wÒuð‘xÞò±Âq¾þågá{áÃÑiÔWíÎÂ[쇣ÀQιýUäóë¡ãàWü!ùðáR׃àPû¿¾zÔNðl¸BŸ(:žÈç9ÞsÓ³ÜK(ïà p‹ºº„¼´¾çNèEu…uý¾Ï‹çÁ·ùöÇK«Nà¤ãò»£®“C‡¼ÿ½ú‡…Þ…>D·P‡p7ô캇SÜÄßOû>£;«Wù~ø>Æé0|<ùÕú¯{~‡}œÄ3y%ëÔä‹ò>ºRÝw1îûu;tÓº—lÖó³ó4ÕG7|ìµÔmG÷çâÅô˜÷=^ú9Ÿ‹WÉ_¿U·U}_;^ä/YÕê¤ó’ªc~`>²þ´îçöï¯{DæªãW7cŸ,…ÏÃÿp.öA<‡ý¦ÊúCè|ö'Üñ$|)ü¾ê­}Ü௺7t!ôßiÏ͈sÕ…þ6íz|õ ˜Ï‹óÊ7cส[\è>Þ„ z|økéÿÔº™îÒ ë÷~ßzÞˆ¸Äï}1íŽ|÷סSÁÆùqø!U·N?xï8yó[÷;žÂÛùUuc¾OFÜQ!_©Ï§/ÁÖnÄ8Ãy_€õ‚wè‡À—×7V]þ¨Ï_Õ úxÐ1äyçiøÇu®Ÿ…¾Výlæïÿw÷<Ï?ãP}¨Æ]°¾ämçúÔ «Óí_ãø±>å)ëùBðŠ‹Ï|?×¾·žÔYÑ=ÿT=š|FÇI¿_.?ø¢úf|‹N\~ÐIŸçûø$Æ=þ»z2yQÜØ ?ñ7ù’ÞQ÷»Í?_}kÕ1,õx.oÒ!ë>óY׳no–çùw‚‡­‡ÀŸ«{°Æ]W^|ñ$t°á/®…~â>ã[}ð¦›×êk=é8íVà!x?‚ÛŒ[ùco;O‘=¿x”}IàÎõÐíëÜå¨Çýªœuþµ>Õvü~éï»Þ!îÒÔÏгê\È´óÖ£Àiß <þçñ÷ºdÒñ£¸cÝTÿÕ“ç7‚‹‹uÞ¾›ô÷‚ÏÕ]Šõ™}³Ôù]¼b#ðŠýDçÃß+>úûÁé⨸M/Û þ¦ÿ>?·ÎÅÛ'G=ÿ|7ðžç½?ìÏgþÏoªß›v¶z¹ü-ÎûyÏoÝÓÛë^òàò³ü²?‡_Û—xÞ¥ðç?ÜÏá;ÖcùõSè…CÂgÅëϹ:;ÿ»z£‹7¬ÿ]Þ 
è_ÝÿGÕ5>ˆøVu“Ž#J×?éø´îuõ|úìäÃøêZøRÎ×À?ÆßsÔ=³>/…wÞ÷õ’úø.¿ð“ž½íþ†¸N·ÿ2ÖãFðœOã§Ž ßÎÅûçýmÎU•Þ8ìz£8NÇÄ»¿}ñLèu/KèFÆëÙû®ûæºN<ˆ¼¿ø_Þ¸z³}€Ó•¾:¡ºu¾¿$}¦:çrÚŸ·îiyßýµãÀ uïÏ´ëÒø½ÝùiúÄaøPp²yâo8Ÿ¸ógŒ#Ï|>=³î/Ÿôølÿ¿xÐ÷¿÷9ˆ|¿þ˜¾¦ú/ŠÏô„íðÑþ9ðÞs9ðêDp·>ØU7 ?9tû¶îÑšt] Ïܸæ}œ§ñž¡×© ¥+óJ÷|ßõ?x½Îc„/˜÷dÑk}ÝU}³uá\Fù"ƒîáquïõ¤ó§:×¾¾‡àóxƒq­º¤Q÷“œ÷»¼d=|nýhøo7Âð\ü©Ýˆ³ò„¸p<ù°î éæO¼Üz3Ÿ““®S\ Ü çÃâ;Ï{~œü±ûìx¬¼$> kßÒÝéÜ|#8p3ü»XÆÇyöªË9éqòNà,qD}õg›7ƒwÝ ^VuCñæÇω‡ð<=æfÄ9û\~´þ›q)^ñI]¤ç{ñqÇUðTòÖ|bÿVìIÇ~¯ô»I×?àõãàâÖOcÜ>ñbð ãU}~;+£§ð‹âÁ³ŽKèOt<ͼNõ¼‚çÒ‹–‚×ýÆøÓbŸïŸ†þŒ;_i~‡(?/ô‹Íð÷ऺ_æmÏOô>û°ênOú{Õù¿àKÅÏáÆIÿ~ïo~'f?\ ü„‡ð·èøtæ:3íq™þè¹wÂç ù|¼‘g[ÿ|»1>p <_ø}Ð÷kö1ÀWó> ó‘Wÿ1pÿNàÅ<· §êß1ô÷õ9»ñ{êþ*ôøº¯ö¤ë_„^÷¿{Þ]Ï>7¿[—«_Û û1ðÑFÄxæöàÃxÀ9lëŒî}3â!W>PwO¿™zþ¤ãݺ·û}ó|¼º‡uÒó®spíåð…ñFû$ßÕyç“þ¼·#Ù“yý£Èt´ ‘Ä18M^¬óÊ£?«NxÔueãÏG¢TÐÅ®s‹u/ͬK›ößË{s Ÿ#® úz°ÔQÚßpvõå<éqÛþä‡EÞxtÒ?×xì¯Õ/k#|‰•ÄÓŽ÷ÕeÀ}â$_öLä‡ÝðWà|G<Û ÿn+ü{ã%Ž9g ^ÔyèÅž·œ““Ÿ ûüœ }7ï[Ÿg|ÎzªŽ;òýj7Ö•¸Z÷¥‡þY÷½ï¸Ëzw/—_àkç^éÛ©ƒO:ªû%ßužqkÒ×=./Âa꛽·y¾8Ù:ǃ®~‚W¬oø_\Ñÿ…o"ÿŠ—B·Ã[ñnº;_ÖçŽ#î_ ¢Î¹ÏºŸÌWÊþ$tÿ#?Ç÷Šð“õÈG£ÿÑgªgÚç~¢»Y¯þ]¿ûs;øƒù·ø#ny>:û•àaëáo×óLº±¼ý\à?þûnðéòçãy?ôïºwí]Ûü³ª‡œvÿÅóÒ!ô§7o+ñpçÆ[î…_±8¾îIœÿùl>>Îg«K„Óo?/ž:3}Ë<†¾ï¼§÷©{(GGÂó?>ôiø&g&'º¦óY<Âz: ^èû|¾}(Ãæ^Y ξ»zé­ˆGueëyÔ×óNðTøÆ>‡äGú/Þc\ìÓê¿7é¾Éjìgqj7Ö½÷…{à–<—¾ûz5ø¿¼™çÈìÏãÀ©ö{#áhõÊ7Bç¿øÀ<f?(>=| ®àx=.ó§?ÕETvûk©ãæä'ŸÆ~©>Á/|^õ¿_èûŽ«/õVà/ëÙïëû^ý>:Ï<Úü=_úñ¢êBæë£î«Yìþ™øøhÚã6ßÏçN ½¦î]Ž<Âß¹qk¾!€ž'îÁí—C?–¿ªþ Oõg˜ÿ»:ò‚šóm^ô“Ã'ÒOØ =Ëþ[Žø(>ËSÖ§u¼¾™u°y×<­îÇŸVŸÀoõ…P‡ÇÐ_ðeº4¾å· áÿZO›¡_Â3>GžÔÃÁ—ŸÇ~ƒSÇ᳋?u?Þ4èñ¿¸øÍºÐ ïs߃}ÿ$òÓÙð©WÛ‹ý‹gÕ9×Aꔳ> >ÿnøò,üýSÓ'÷øö¯#jß„Î+Þ‹Gð‘óDâyÞ£(ð±Å;çYø˜ò~ñþãüçõo²Oäkº9Ý®¨>HpÎi×?ä5õu|Ð×|¢Ï`÷|üœG óÁ1Õ?t¡ã‘º2ðLÝc2ÿ=õäÕgjÚóT?Š8p!ð8>¶~9":o¿øw%|<¬ÎÁ„ÞˆGÁ3Æ™/íÞ:Ïç<&fœo„\çŠÌÛB/âÎ͈ók‘×ø¿|Ƚˆï«KåO| ^²àhu˜x¡üGn†àþž…àgò~ùãÁ÷è*Õo.ðNùé'=V}á¤Ç¿ƒÀåGá?Ž}Ÿ|B\Ö§+}ø¿×¿CýÎVðÏ‹áOâ{â*ÞWáKâ4üOùy|Pý~q%ö£:…êã<ëñÝz__^€ç ŸË{àU÷?ꟳ¼Á|Ëûê]å¡Wxeð¼wÅøUýì¤ã1ëÀ~{:ý§¡[Ñ_öc}Ó™þ5xØJìGø§ò隷ûªË2ot8øzÖõšãð7‡Ò™è²t?çÕý[wêoåø¹¼þý*|¸Ÿõq„ÓªòøÃûóóСà®õà uñ´çϺïeÐyìjüݽ CÏ:Œü{#ö|Šûž[‡à 
ú€?Â/—éUu?ë¤ÇÕçã®÷lÅ:ן¿Ï'?;7´¸Ü¸Ò!á}ù.ûÓEWB׬>9±ž«oaèšâÊÏÂO¼þº~šê¨ðŽÕÀ-BW·¶—•ú“ñÓ?qqOÇ8|^þHõyˆç„ëœ'£_Þ >©ß0ü\xzÚ¿¿ú¿E¼;}s%ø\LW ÛÒáä)¸_üðgòû¥Î£½ë>²}t1òRžò\Ÿ–>¼þÉZè€tüìr¬8ïÞü9_F–Ÿ¯¾A³Ž;œw©ûæÿýÅ|^~ãâ^ð£À%ÞC½÷^àâ›á“þ$pQé }ÿÚ§æùNì»»á£ÛÇ{Ỉ7ò†ýdþø§uÿÀ¸?oõ?v>ŸVß&ÿý¨óäýøïöÛÝð13¿ zÞÝŸ|8ÞX÷ômqx0èóUÏ1jq쇧ƒžßàvzMÕÇŽ:Wù/;¡KÊÃâà¥Ð£Í=îÄ›êþÚYçGÎõˆÇø üp6ô‘ýÀò?~—÷Ø,ßÅ;Ôsì†-W?ÆÐžC§öy¤W¯‡îâù¬?ï¥>E}‚ú­_¿Â;ó>9ëªúõ¾ëúúѺŸ/ø€xR}ìÇ]1püVø^üðåàô®'Áƒ+ÞÍ?çùRÏúìªSøÇàoð™~L«±þì‡<á||< çÿ[èî“Ø/p7¼ãÜÜQ|¾ý¦Ž^Õß°Î)Ï:_Û =øBøíÆ÷ß§‡®(Þ©û+¾7î¸ËºoŽÃ×Ïàòú~õÞüóÍð¿ ¾ïñÒz£kû{ùøïÃW‡_Âÿx8_ƒk¡Ç¬„® Ÿgùo9ü^ëöËà%ųñ»açg¾_ÜÁ÷ñIu +ñ9Ö‰ºwëOã ÓG«Ÿè°'QçôF]O¦çÀ3ð Ë× óTÿ¢÷]W‡·î‡ÏŽÏ«Ï‚#Ä üŸ‡ÿÞ l+tSóÏ^ \ÉÏÚ ýú“Ès~¿îž)^ÊÎíÀUðÿ¹ø>ùO¾Pº|â ô)8²î«õœG}ý>Џ»~¬: |Êûe]2p+üæð}¿ßÃá’{1žtGúbõ-vühlnò÷:Ï3èë·xõ¤Çâæñ´ï˼Ûº1®Õ72t¢ÒÁN{x~%Ýs%ô4ñÅ=ZyNŒîŒ¯g_¡µà››±®¼ÏNðJã¥Î¥êK‡±.Oú¸š§Š¯ôã·=îFž¤ÃÔù擞§õ¹ä§©ëðœð”ý„/šO}èÅyÏý­òÿ¬úÔ »ïcTÿÌiÏâ‹<ŽO[ÿK¡®‡>¼<Üþ«u;îï»úÞW÷3;>‡®ð«Ð…áç³îD¼ãM_8>1|ÁÏ÷àOçÙŒCÕI†¯[u·Ó>êFäüE¼ÁïVÂ>ÏÏ—Ê:ÞíÀÇp^¯¯²xÆÏÒï¨útŽ;Ÿwê¦iÏ÷Î Ë÷>Ç:9ûõ p²<†ò<ßFøã›;ŽC§|6êø¥ÎyͺN-YtzÉaè³ÛÁ¬wñ„¾i}¼^èwމþYzøüy«VèŸò´øè¹¬³ÕЇK=¥·ˆÆs-â¥}õœ^0èz=ÿ̾ºß¨ÿÿî/}#ø§ñÍx¼|ë¾¾UÎ;Y¿ÞãNijº_tÒqü´z;¿Vß>ü>[è ú…ÐéèU³Ðù‘xó½ðï®ä¬†®´yP\‡/œçø*ø¹}ËÑ'üIðçêÏ:iúéâÔfä5ùr/òÛFðºÂ ³ðI'ÇW÷°ã$¼$õMõtÞ[<¤—Êãêfn†Þ±§Áê­é²‡Á®†Þöiè•åÃO»ÿ´x¦âȸëÕÏ{ØŸ£Î÷¿‹øqÒñÎõøÜª;éyY¿:löK©º˜ØßðÓWÁ#äWûƒ¿Á/3¾ž[| z~²Î~ÏCOÆ_ì[qÓ|ª¯W¼‡ýTywÐyÓ¿†þ£/ þáyžW=võ%œv?c9tõ¿ W¼®sºã?w"~îžàÛ?—#d=¿u¸y þ¨º®QOW‚OòIÔu‰[æG¿1¸ ްÔ}VÄiÇëò}Ô¹Hã¡ß¦ø·}ûh'ö½Âxá¡t骟ÿübàã7ës8P|®ó¸öõI×çê<'ý|!Öí¤ó4ød3ð"^†oßtP÷$,ô} ׉Ãâ¢}Gg.¿xÚŸË=u¡cÁÉtþü&މpÉjè%ê¼õÑ\ }B|ä7ˆïu¿æ]÷Mè\÷ƒ?®‡OóÓÐ!¬óò±†=ÿÀ'|÷¯"Ï] Ýq;xÏ“7®Æ:¶oãyªÏÌ|œ~úƒùÚ‰ù­º‚Ið¥ð©ñ9ú0|ŸïŽ“øzø}üeèdt¾sá7ããê”,u~±~èQè†uwØó¼õ3?þ`žäIøð|Äqú-~rxãNàщ<3뺄|¿:Ñ^èyò½ù»qÕþ±¿/…{ù3ÏùÐ'ò\Ã’÷é$úÄ?àîOõœwÃG¯~aÃÐmå³qè%Ó¾?ðxêÅŸðœë†·wÂoÄ‹³Ÿiõ»™uظÁ~¯úAvž†sÅcyÜ:¹z/þ±ñH|•—Ä?û1ûÝïvêBÄ8¿î{›u_Ÿ‚óñ ëDýšùÞ ]Ãüó§nÄûÂñú»Ô¹ÈQß7î'¬þ¤'}}< ܺx„>|5ôCùÊ}¿ ]¤êVÇêOU÷nÉ‹¨w/\¸Wü¾>§qª{úfÝÇÙ ~9™t^!®>Yêú ïÇø*ð>)>9'\}wÇÝ7»|l-ü›íˆƒÏcü×âóÔø=uêéÊW> ~=ÿ>uôõtœ«áÜ 
~~#öηªºÕù{«{þ$ôK<ôfèÓÞg=p»xw/Gž§÷Ê—Õw5t„º§oÐó¯ÏñÜòU'Oæßó/¡cб¾Œ8V÷b/tý¾U7(ÞâíêÝù¨ø£þFÇÁ?ù%ÎÁ蟦Ž'ï­±~ŒËJàó•ÈïðÓnøAÆ _Q'Š3òœ¸t+ôøÄý x¦ür+öß”‡ëÜÕ´Ç)ñ_½ÕYùò¨ãaq….‹ÏT›“>îxÅÏŠêX³NúNà̺ÿbÔq¥sâ¿ü‰Oû»õñrØu!qa-póãÀCâðO#.ïÅ«ÄAý/¬Gu7ë¡·ˆ+þ¯ï¼3¿®†¾©E=¦ó°tç̪ÝÛþï›Á#ðOzyõ•ý¬?¿ú^u¥ø&}~õùþݾ£ãÚ_ð™yØŽý#y”ï†OÈ7Õ:t}dÄ­ûóßS_µ¼Ò~WRuW‘—Ë·“Çÿ>»þ}»Så÷LÁ›«¡óz>ëC¾6Þ|#uÓêg.Çú©>¨óÿþpÚ×7½„^Aׯû˧ý½ù½g"Žë‹àÇ™ÀñÞûNøØ•‡]ÿ×7ÿOàÆGÏÿÑ›àgõ@UGzÏVàPñ­êß]§¸ïW}Ö°û@ð;ýN}šïÉ{‡ò~§ÒøƒïÕ¥™Ïê/=ê¼ÙzxúÝÓðø‡uïÄIß§—ƒoê+î|HõE“gö/쟛JOtÿ°.¬+xêÅo?Ìãë^ÅqçÛð½÷ZèOòŒqåoªO¨º•YÇÉuOé$xûžœÇµõ§³ßւﮇ/^}X}}»—^¼Tç!ïÃΡo†Î%¯§¾ /Y_ÉW7ïÐ;ž-t_Po$Ñ­þDñ€Q×?ÅOùÞ2Õs©ãñÁþs.y3>ï /¹ï¤ú׎;.Ú¸z#âÒN¬7uÂòÜb_Ãïâƒ:ûq'|àíðëà.y¹î1÷ùÕ%Û_kÁëê7Þ9üðÏmO¢[VŸ¼ž‡þãþû`ð?ÿŸú’“>ïðUêÆ ~¶?ÄizŸç‡¾..ð‰Õg?móÇ/=¼ý×±Ïñ$üC\=¼tør9|Š›¡å=7ö»{xìwqê|ä÷QøM+ñü7ÿeû0õ¸:ÿ?é<Åø]Žü¢ÈsZÕaØ¿÷û¡/Ë7·™_ñÞÿ2p|Çœvê ¬;÷߈÷×_o·ŽêMðXãð0pÿ‘ÏRý›]§ó=p€y÷<Þ·ÆaÖõ~ø~è2¾OÉ{CƱ"ßü8ÞùM¼…_!Ÿz޼—ÿ·üŸëóЙéó?‹¼~5â­¸H¯_È:Ç7Õªx_‡ÏýóÐÔqÂYpƒóŽé'Ã÷Õ‡rØó _йeúpö]ß \A°Þœ“ùÔu_ïbÇÙOžKèFtxàÑiß7ÆM¾¨¾£ÎÛÄýÒÂÇc"_Ñ}Dþ3þx‡óãðÑñûëÏáÈ¡ƒ‹Kê#øxê¾ãÓÍŒ3ßK]1?ôbèÑæÍ}y·ÃGâ7äút^ ^§ƒòSåu%æOÅÓЇ6ãçùáöGÝ÷°Ôqbõçç.ö<®Ÿ?ßQÝ8]}©Æ]ϰ뼘ç>í:ˆãúì¨ûDY7Äׯóѳ¿7âSptùN“®‹›¯WÃà§£¾þà3óé÷ðMõ9unhÒß_<¶/éfêÏ'‘ïð+y¾t‹ðÛÊ™tßïºú½~wþ»ñsO)^ô,xÝѧó<¸(OÇs†¾mà¾ãqàëBÞ‘¿ŒSö =¿ŠÎJÿ£÷>wŸ©üÉIçwò~F¾¸ß¸û<÷­‡Î ^ NÒ— º:>Ü„ç„Ï gó•ð޽ð_à0øªÞ?xÙӣзO;.¥»ÑWë~¹·=¿Ê3uÏλ¾Þá9ë{2ÿùá çIxÈùrç'Ì{Õ›.vý.û,ôƽð»ù¡ò_“ßÏ¿ðžô+zÜy/ô ¼Ì¹#ùwr©óʽÐÓ­'¼JÝŠ} ß]ð_CG€ã?·«>;ô—7‹=^úùÚçGÝç§o¿ïùϹ|o¦Wœu¼¸¿g½ã—ø’ÿ޹Çâ#[‡á»À£»ñù…ÿ»ßïž}/Þˆ»t øz3ô6ñ¼8n½Ï›õüÖõ½ô3g=/Ôý"']߀cÄÏõämבä[¾%žœýÏÃ9Þb_Èãy¿2ë{õoýYø¼p| ?Ô} Á+ù›ô#yUüÞ?ɸØøëóÀΉù>÷‰{póvÄïê;6îë«êŠûúþIà\ýóÌ+ü ×¹Þ“þÞ ƒ®cYWðœqUwÃG¿sÚçS½šx&.ÒåöB¯­záùçþ*â¶ýv&|xòþbÇ©úÊ×ê à>þ ÝæfÄ ç èºâœfîǾ—·ù:ðÌíX7úÐí†ÿ°¾7 þÇO¬‹:ç=î¼ÝzõÞtÈ‹ñûòŽ÷_ Wœôõã½é#tl>“ºý~¼‡8µ¾ÙNèõÖ¯q‚»ìkyˆ^ ÿ«{u>àñÛÎë†o²8©êUÏŠ×øªºÆº/bÔñ üì{è2«á‹ÀüƒÈ3Ù§¡îSŽxgÝêBŸXŠøazϬ7„ ¶C÷òÞÎÿÊ?¡ ­Îzo/‡¯{>üAø~%ÖÛ0üÍŠoã¾½þŒc}™'ß«õbàAëŸïR÷⎺~oË«WÂW¯ñù¥XøÐAø$â«ý‚oóeÄ/qºúøL;ŸÄ§Î…Ÿà½á+y¤ÎÏû{¨‹©ûìG=®É§¿ŸHœ†ïœÏ³NíSyv9ðÖrøÕ×¼>œ(_Á?p,Ü|'ðFÕ} ;ûEø òñÁ'ñ1uTw"¯éSN¬þà =>Âs¥·ò±†ÿÑ1ê|ò ç»ºWhÒýסgÓ?Ô×ìFü§›g~þð£|Î#¿x~ëÑ=|`çÌð¼‡ñøqõÃuFgW7#n;×ësñ!uþòºxYºî°ç úAÓ 
^³¸ Î‘Íþï=ÎÃÑ_º¯V}CÇ=ÏÒ¹ÿ<â‚ýàü0üʯùqàA÷-Ü _éLðdq. ™¼ƒ§Ëg«áŸTŸÕˆw{¡k×=VxÔûîCŠ‹p$t!t'ãX}‚OºOcoþ¾ÐyOõû„ñBy …_½¿>ÛúäÉ…³‚'\ ~[ý]áÓà;|ru x†q-]`Ðñ•y­þÙ³¯þ9ø¾ñ£oµßå§³¡W¯ߥ‡âŸÎ}\ ~óoÁkoÌûÅøÕ¯%æí—¡7]< ÐÃü¼|r:Þ_‡žz#|1:ºŸ¯{5N»Î!ˆŸøÇZø¢·âçÄiù ^Ë~W÷W°ùûü(â×õàqxÛ^ŒŸûÒÏ…þ’}/Ìë¹Ðµñ~+Á7¯úÁœ§QçtP8m=pÿŽÀÕKþ ÏÿQø©Õׇn1íø.û‘ß ™ÿõEèøãн½¿ñ–?ùe_ĺ.<0éñ¤ê¤G=>[?¥ƒÍÇë“ôIå~÷^¡;×9ËYÇuö¬û°p _<^½lØ×‰N»óÀ_ß霛Á£Ö"‹ƒî¥RÆgØ íqà“3‘ç­¬gÝ‹8d>ëãQÿ÷ªw\@Ïþ*x5ëЯ†e<äëN(ßî]þõÐÏ®GÞ7ÞW#Ú—÷âý«þIçï>o=âÏÅЯåwóʧüAèÖ‰zkqˆ/ãûªnuÚuY¼·Þ÷¤ëì|?ã(¿Á'Õ÷-Ö}1ôzüÖøý*â’~ü¼Ê¾¥7彇Á‡–WíŽåˆOÂ_Þ |¸ú¨<þÏá?à·t›òëâóÍ¿ý*ߊïðò߯º CÚ7>‡.v6Ö¥sÓ‡Á#­£ëá'Á{xGÝ/ò¾û]ð«qÞ þ_ç¥g¿[×û¡³ü4x¯Ï±o랪q×qÎ…žÉwÄo7ù³ÀÍâ‰szø³¼j}«“Ù óÞÛwª?wàð ¡Ÿ¨C¹ã ÇêÇÙ_+Á›­ùÒ~Ý ?EP7#Ô½:ÁÕ‰äbè³æß„«ñª¿õüoèZÕq¡ï×KáÓˆ?Šq“Õ[ÓAøöý ïS-žý*ôuíׂ7 ìóœ£*ý0ÆÙz.0í<.I\hþàÆï•÷ÿÖ9‰iþ:G0ŸE|žu]—>*Þ‰7pSù_áûÔyÙA×Eäùu%t8êLè€ö‰ïéðHõ+ö?éô¼}3øÄw·~å3óG×þIà|ïõ(tj¸æbè·Cÿ)mÔãÖW±í£ð9—·Èó¥ONú{ÞÈ:ÙõwX?꿬ÿÃä#ñüüu~ÆwCŸº>ÒjègC·Þ ½þ¼yÖ¾X‰øOÏ_Ô!Ù·U?6븹ôÝÐ96§Ã9ÎùÓ¹Äãªßu?ÇçŠ/ÎÉxº[უ®cÂ_‡‘×ï…ž^ý±Þ÷ ¿áf¬gu°uïá¬ó y¸ú5 ?ìË[ŸÙ8ïݨsãó¿«÷P§oýß<*^ðQö‚Òë^è“®oZŸt(8Á8©oÙŽy©{צ}ú—8d\æù½ðÚ8Öß•ˆ'ô ¿‹üž÷>Ü|%.«ïñsúo…?¬îº¾\æeñ„ž¶>ø•xû·þqૈƅδøÂ¼ˆgyþŒ'ËWy®o¢ãˆ‹OÂÿæ‡^ŠhÆE½‡zïÐeñÉŸ„^º¼ ~\ß“WÅ ë¾Ý„® =W\³œ_r‘®)ÿUàD¸Û:y~î/ƒ¯U†Y×äçs±Îþ2pë8ø3^ë÷> Ü-¯Ò߬;ºƒq½ñ5ô¸Ð±<‡: xžÚ¸kýê+÷(ðÄçïè(â´º§º·~Òu¹qèî ò>êÏä™Ò¹F}=á‹ö œu5ð<¼­Þ^?¸ýZä9çÚó›ª{}Ûý'qÁ:À›oDŸ­Ûºj©ãºÃàÇYWš÷ÕýZéÓ…>Eï8  îušõõiý] ^H1ï?>+N8â/Cß?ºá™àÿžŸÞþÝà¥Îºwâ ü—aè}xþ¾¼Äùœ‡©#„þm\.„¿#ÂνÖ9º…—<Çåø~³ø;ÝÈŸãÐWäu|¿úM‚Œ:¯¶oŠ羪O=k©ï/çªá>›úyuºÎë¹×ÿº¼;ëÿÍà?x1œ¡Ò³ˆW[áãykáSÀé·B„ ùòÞ: ¾X÷û º¡þê|øšõ=ó¿?÷üåýÎÎûÇÐÓèüWC7¬>AG]Ï]ŒyÍ:“º§c±¯Ky®ú3L{|«~½'¿Ñe³ÏôNèê üüãÐwËZèzäÙÐýOq]þ¡÷»ïG<­{˜=ÿ9§_zÛI÷éaø ùÓ?¤x¤çc>$ÛxÂpáãÈwðè¹ø3ã¾ø€ï}¾ÇJøsÕ`±çêó=èqóR¬ ñ/ƒês÷ӟχ?å¹ìy.ýÆ:Ÿ>]Àº¹|•.p/øÂ£Ð?¼Þúeä7¼ÿÎb÷ðpüÄþRçF¨~è¡7ü(t2q¾™÷Ç)>^çß‚'ãûžïjø6ü;zŒõr7x2½Çþ•·ö¹q™^}=|.ºå_N»ëüiø¦â œcüí}ƒÌ8û ß|¿õ}°Õ_Þˆ÷[=¤w= ÞhÝå}gÃgû—àqUÇ2íü o–‡‚Ïʳ+á#É—x×åø¾~±Ðù3½¿oÅc¾×zü»s@ÇáŸ] }ô^ì#¼­pÇIŸ?8ûBìqÖ¾ºŸ×»Ö9D8ö^äïKÿX ÞÀG[tüy;ö|ižÖBwú,â§ülÿÓ[¿ 
þ¬î”¯p6øÎ½ðÎÄúâ—ŒÃWþyà^xI|”ÿà¾<I¯„Ô'èWr6|ŠÇá‹ÓÓªü¬Ç_<NSߤþÅÇù‚Â/UgS÷LOºO»>¡8ã=ÕÍ/þ^ß}x.tZ¼mzGñ°¥î+Êê^ÄÇÍØ§p%êÏBW»zÍÛᯈçóºýÊ/Λð1èûô‘º?wþßÕ£¥^Pq5|·§‹]·Ï/^áëü"x?YøwÜãÑŸ…¾bžª¯ü°¯ xD¼Çñ×Òa†ýýéþîýêÞÜa_§âþ§ÁW? ü"Ï™×͈[ž¿Ä¯ŒÓõÐu ÿ¼ïãfüý<ü%¯Öýγ¾OèäÕç5ô:xØ~vþÅçzž/C·çÿxn8ìù´ççöñ$çIÿ!t÷A„>¿zíµà+‘œC¡s~x»îËåÏM:ŽsO¾—È«úÏ­‡~c=á ðåÍÈÛòÑ?…n@®þ³wåm~)Üo|û£Î-F¼Ú ÝL\4îù‡>&ê‹¿˜7çÏD>ɾú¥ÛÏÿ»>c—·{_øTüäßÂoô¥•ðoÌ]þ%Ï:çÆÿZ|_}×ÃÏî†N$Nñááë©ê^¦·ÒÃ? Þm¿•oû®ÿ÷ÂÓo{^X }׸ҹÔ1˜Ÿê>êë–ΨO¡÷©¸;ëúù(âãZè·ÕWè¤ãFðIzÇùˆwÛ¡ßXÕ§ó´óyšîQçV#þÛgê{΄ƒèd—W©£þŸ±N­÷¥àM_…x.ðÜ*~Õ=ºüŽ·Ýç¡ÇÑa­:í_¾Ä}xu?|óê£2u¬u¿Õ¤¯+ú†u#ÎÖ=0Ãwó”â”ç¿|h=pž+ßÿ0üÞº}Ô÷ÃNìz»|íü~+ÞT¿aÇ]žŸ>ë{ëžÄaÆ›Îk9‡y;pÚJðû=y°¸ ?Z?—‚wížÀ£Äýºßþ]Kp±zŸ«nAÿ–«¡ëóë'ŠOðÛCߪóã¾oì¼ÉõS÷v ûx˜‡‘éùt¸î]Šõƒ?¨ç¥ë^ŸÝ~0¯“qçIuž|Òqä8tfñÓ8âÕúyÑY=‡ù¡â]òø~è™uŸeø«Á×ê^™“þsg“§†þm9/})ôŽâ}ðxº¢¸Rý=Æý¹è@? ¼+oËc·Ãg…Ëÿ6üãê;º˜~uðWÝÓ=èúHÝ;þyÕ‘Çh-üÔ:>èxë³ð!«^?øájìÇ«¡ Á3÷B×T¿~ñÚúÇßø+W·É·pñ~<?±~àÀ•ˆÃx¢ÏÃûõ];Џ§Îâr<\¸¼rÏçè£fy¯â‹³®Û˜7}Røwâ}íãùßï¾7_užiÖãúaŒÏÅÀ%p¤sIô<|F|£ª'ºú Þó“àü_ú¿÷O/ÅxãMôB}aÔO™WŸ§/úu6æÍ9>¾ãJøêuÎå¤ëÖ½y›,öq¨s¢·àˆç㮃¬D<…sèN£àïâ¿u ŸTŸÈÓÎãüwù®x6Ç;ßž=:‰Ï¯~_gܧ]uYÞçÅÿ•ðwÃgÛ žs3t˜ÕðäOûþføUï!ÿu_€Ž£ÞÞùAŽÛ¨Ïïb]ÀÃåÇF¼Á ô‘´îÅ õÅæK]…>Î# â¦ùsNB¾¤GXïx­¾AôEñ^P×`œªn pÒbðËê+Ò} Ÿ§~DܛւÉÆ¥êñú÷«GÄ»è8çO|5êùS¾Ú‰ý_çLb~á1uU§;êñAß<ù¨ðñi_¯ÎoãeÖ»:nó·£â•À­uNã¤ë|oÏ•÷ ,Ï)¿$üÙ‹ñüúëú½ºŸ0|;q¿~Ÿßºã¯ÃŸâcÝ ¾öÕðMÏnïàGº”uVºÙøÃüa5â«üŽÏˆÕ_ÖOÃCá7Ï_þ× Ï;Z=Á½ÐôgÂóø>ËáS~ÞŠ÷5¿"þ¨#R¯oì\ ~Ä“áXuÙuÿJè€OêûІýÏìŸ0 ¼‡[7t’á ãë+ágY7«áCX—pë™Ðµé%÷ß©:Q÷Åðpñ髈yŸ{õ5õ}‰ÃÑxˆ¸S÷/õñ´^áäê¿;èþAùG/Züt¾r/õæë«î˜uŸžÏáó®…~vkÒ?§ô­Ó>?u¾ê¤Þ­Ðì/8F}áóQ_·ƒ—GÛßø™}þUèôžƒÐëäáˆC¾Z;‡k#®ÿuì7÷Òù>ç%áùx5x±uÿ³ð!èákÁóRS§ªžòið=ç„ ÏX·‹}^¬¼˜.Xù6â—χ¯Ü#þâ×V½É¬û¿«¡3ŽB¼ù‹l~á8–. 
.¹/‚>ºøl#öãÃðÅÏ„?ù0xß ÎæÑ}œg—OñU}«Ô©«Ÿ_ŽÃ”o×Cw‡Kèò‡ø¥®äZøÚ/Oúz±o«ý¾qB¼…ÛÄùíЕîDâ‹VÝȰóºÎùðð :³ür6ÆÃ¸ñíøë÷ñ/ãV:QàøãÈ;g?ÍþyàžíàËð¸ïñܵ>B'ŸtÝþKÕãÓ¡¸É>‚Oýwø›%¾©àWÝŽýjúy>Ý?Φ»‡o!ŽÀÛÇ×ä‰ÃÐ!øÖ—}ççÿ6t¬[€‡ñë!ÏEÜß²ú"Ï>œ/Üw…?ëò½®ë‹sxœu%µKÁ?­»ŸvÿU?ìêÛ3èùÝ<¹Ç¨ÎóvÝ~©óPƒ7á ú<^¨/¬ñ‡/å“ság‰ÿ¿Œ8~#Öïà¥Þw¾’}gV¿Ë…ljõàsôQ¿oŸØ‡òSûgØã‹}(OÕ=SÓŽ£èûáCm…ô“U_G§—gêþÊAs+¡'•>7êû£öï¤ëÌòþVŒ3I}ÆAè9Ù7ïP}5G'Ô}¡+À¿êÿÄ[|M=¢8º¸Øsãñž#ëÓ˵ðÝÅ×Ò›"_Ò­G<æá»Ž²?€x£ïþUçƒÆÝ_á#›Ï¼g½Î×u|¿ë?=¸;ûGÑõë~âa×ó>"óãóÌúB¸ÊýÂ"ߨ›æWåýçü¿O&߈súwVߣСìcqE\ß Ý?ñ<ðíaÄ+øûzäqqœ>ýçÕ/ÐχŽ{+ðTžöK‘nÆþ¯{@f]_ ]ãnðaú.\+ÿV]ã´»çÝ „ŸYu¤ãþ>ø‘ø 7›w¸Þ_ïÇxÝ‹}©.OýÁ¿Å>þ2p¨õpøÐ}¹ÕWó¤çÍŸ„ÿÄøAø_ôoõ>ú˨ “7Ä+nÆ:¨óQó¿;‡åü÷åÐyý¾8½~6>J_< ”ÎóÓˆÕ¿aÚãóùG÷s×ÃGæGþ£â7êªÅç¬ó…/¬{ú¤üãüìqø´W½ï[÷uLû|×½ó“þ|ø#?Ý~¬>ªo;þ¨{jN»þV÷JLzž0ÿð¤¼(Oìÿïè½ôúœS}Ç&}ø ü!uA? ?ƼxŽÒÇ'Ò˧ ÿèNèr¥w½íy÷óðQ­¿¹?§¡ÛðãFÝgáoÓE}Ï—Ág¾ o¥¯,=m9t‘Í\÷¡×ÐÕÀw×Cýiø3å?ÈÓGý=ø¸á›àSðŸý‡õí0n~Žl>á ¸R^¶Î¶CIýšþPëaþs—ƒwáwø‰|*îé¯!ÿ< ž#îÃ'úR݉q? Þ(žíŸ­|7êyÉ9åŸ.Q^÷¬-õø‘ú½|©ŽŽãûèCAãËïÕï ¾vœXu)Á ᙣÀ¾×<ñAðíºWuÜã å{|¾{¬Å5uëü]ø[ý}àqè[Ùâó‹ ¸^sÞ'ûÁޝˆïpAÝ×5éø¨Î£Œ{ž¡ÿ«›Ùý‡ØçðÓÍð·îGÜU?#ŽŠÆÉùè›wn*ŸcÒçÿRÌcÝ34‹¸;íþ+>WõÁ±Îo„–÷³ð¬{ëo"ÏÒïŽÃ/ÔËÀ·ö›<,®»ïéVøÑôŠ£ðaÍ‹utüºîù‰üã9éJÒ7é â,þ'~Ê«p£<´Ï%îÊ—æ±Îï.²þð*>©ÏÝؼŸ |$ßàCæÓº²ÿ«ÿí¨ë®Y_§é3óâ]Çyò¥úNxï­NŽÿ«?þUø tЃˆÛð ý+ñ[á,q›.ôQàmyÆz¯ê~ÃAçáòÑÝôMg=š7ºEÕ¹ñqßö8h=û=xʹ6u|Fßw3|YëÒ¾àó«rä0ò|Ï–2ÿ{õ•‡.}ÚñÌ­ˆsð-ü-苯þ õAä»YÇ Î‹«+ ÇÐ7ÔË™x™¾ú™uOW½!õ®Ç ûC½õåø¼µð?ðd< ޱ.å/y\œù"Þ9x`ÕiM:îun†?ƒýYð_ñõL¬{>ªy¥SÐÔózO} Ägë]<ªûا=~¨cÕwžA7X ¿êvÄËíXïöïJèœôùA´:øÀxÑ=Ä5ú+¿qغø*ð ÝÆ|Ò¡œ×2Ξ>:ýï\ðpüƒÎp>p½ O¯ûˆçpÔõfëRp/¸8êóvc?‰ù?ÖÃïâ_8¿æ ]^⻉oüyr7tº(ñ|±ïK<¬Ö§ü:œ÷qžÀ¸[GÕ×rÔÿÎ?æŸ ýMÜ}<éúx›v_çAèdø›ç‡«ìg¼¼ü¯I×Eíq¬|™Ð…ázÐVÞ?±?èDpÙvàÀ¥Ða<¯ÏSÿy6ô0u·ôC>ß(xBöÕÊq¬óê']p.Þ¸ûVÞÓ÷H>⛋—|Mqí Ö…ïÿNÄ#zè_ÆzÃñšó¡KÉcÿ~ðÃÈöÓQøQâú<ê¼§8/ΉKÖ/|"î|º—w)üÛòÁGý{ó^: Ýî0â‘zØ»á_ñ‘ñ)z±:az­{Ÿ‡¨W[Ô³áC?Šy²^¾Ž®R:áQçY‘×èbÕ¯'ôP÷ ×y—IŸ§Çñžâ½ý#îCçZ‰õbñ+áEõ¯káoÕ¾§òEðoë˜ÿ‹UÝTèYö^N¨{+Bæ£;·_:âü÷žÍŸ_½§çPW¥^N¯êž‰aϧu¯sðñõÀ·úX‡{á›Ðuè ƒŽCGÄßÎoÓðuö¸ëôU¯:ëq·úÜLz¼à»ŠÇ§ÄŸ¬ÿµ^œÃX _G^…‡ìß/"ï™ÿ/bÂ'ô0¾Ýä¨ãûk¾ÿJñžæƒþEÏ¢\ü&ž®…>-¯þDz_WÞK~qÙó,Ç8Á½ãÐ5éØÞ‹oø0ü´‹øÇ·p.›Î_uÓŽ—­oüŒI¨{ƒ¯w¼¹þ£u²ù˾¡³ãE;¡ 
ʃ—·î?ä7äý­ø\]yù´ãü­î‘yðÞÙgŽåïÒÅà§åˆwðOžCäÇã÷xœºÑòçWç_çÛF}<¼Oùb“®ÏoŸ§Ä¥‹á3MÞ÷ýï|?ü¸é8öñÅÐïé“ô¸§;­ïÿ|”ß®¿Aé%óŸ¯~‘‚9W]÷`OúúZ ]oæSÊÏwC‡ÞjÞ­;ów~•:îºÏmÒã|kœ =.ƒàüÀÖÿ©î›ÿ]ýëfÄOçܯù|qíËðÏÍ]j¾8€7ËSÙ?ΣñC*L:Ú ½L½GõÁv¼µºÛõÈÓ¿Œõ±ú>°<²úd.ôñ½ø^]ÿjà¸èÕ ç—ɬïo¼çløð‹áK‹ËÆ—žc¿åyòªÃöqÝŒ8×ã»;áÛªƒÄë.†oº¸”þ*žˆƒžïAÄwû×ó>›u¿¸Îžô8fþë¾±ððû3t¼ÿ\ð½ês8ÿçûœ‡µNõé¾±ƒŽcäç{íü¿r~_âˆw¿Š8 ç/.ðùò²õÉgS÷óˆ§ÙŸÔ~6žç#•?0é¿ÏOð\âÅ­îcW±®<÷~èyÖ;|¹=ùpžÇsÓùÁ‡çw-üÂG¡‹[·ö»y±ßŸ,t_8ïÛ”—ð/õÖÃFøò«íï¯>Ä{ùΨsyGý{à õ5öÝs3æ×+µæÿ*tòª‡Ÿõ}}.>çóð%á:‡}O¿ÁKä;ï]ýkÂu®N½ÊvÌ^'ŽÚð"ÿÿÁ/χŸ«1u}ÔÔz/y>§‹~ü òÁ¨ÿ»üt3|/ë¡ê7çëáÙûˇžôŸ/Å+¼î(øö¿†¿u1xì(ÖUå§A÷ç_œvÞjß—½Ô}æ§¡—ÖýÕækþýú~–>5ì~±õºyÜüå}ÕogÚu<§îiu\÷† »n-ŸÃcÆm²Ô}eñê(|j¾£u÷Ö=Ó×ë<‹ù˜tü\÷℞ ÏàêåG¼a#x›úÛ­˜ÿkñßå z±}A¿’ÕYÊkWwâ|Vqu5øæ/–ÿ5îë\ÝG÷I¬ã…Àtë¡‹­G<³>î„ß&¾Ã'r>‚~y^÷HåyEçЫOÏb¿Ïoœ÷ß =X©x]}ÃwVg©²>‹—Ž:~É>,ðÓ/C_”‡JÏö<ðdÜq´:Xûõ |2ùÚ¹’õˆoâ¸kþ%tIëx1tu˜tq¯âÇiÇ‘âéµÐsÏO¼ÏEo¢—¯7ëyK=UÝ÷¸Ôó·ûp ÷‡‚OXïê¬é_…¯^,û$«k€#<ïQø}òÅñ§g„.…¿ß‰øC7…wö"¾ª+¨¾;‹}¨ÿÉú€ÒÇCǰÞ?‰qª¼ëÏ)^úì¥Àïöñ¯BÛ ¼Ä»>Hõíwœ[ýŒ#oÊ'U¯8î:ž ?ž¿þìzìsu@Ösà·CW·~è!ò*¼¥îh-ø3¿t7ò,_:(<¤îÈú¤?.‡>lÝT£i×çªÒ8â×°Ç;¸¢î3 ~ÆŸ²ß¾|G½O™õu,ߊ/ôøYÜÄ#è·ÖŸ<†ßØ—ÕOò¤?yòq£p•¼p+p½ýdþVG®³ïëÐb_?pàf¬—¯B_ЗÛ<®/¨u5èë£Î­ÿçïU½Ãfìëxyð¾ÇѺ&Ö¹8w9âÉõàó—C'Ö÷í(òÜzè·©÷Û7C·ÿ2|¸‡ïIO?ó¾ïå9å‘ì_Îw¼þÁ—¡ëº·€®ÍGZ šŽò8ôô:ç2éúkÆýqè©ÎÜ €?K¯Z ½Z\¡/Zå;Þ}ôýcüñ8éQÄ•½X[Ÿ.ÄïË;â„çtŽY§7ª¯»þ< Þ–nh^ð€ÍàŸêò>̺‡ç}ð À?tsq­ê¥Æý{ù¯¿ ]ÇúÛ \b?­Å:>|Üü= =èóØgt:ñAè¤wCúIð­ Á¿«ýÂå;…N}#të]xM.žqÂ>¯ºôˆ»Æs=ôÎâ}òü<GTĸï/ùýbèÆpÃgÁ—œS©óð±éx÷c,Ï÷|ÆéIÄ3¸ì8xPÝGο8íx´xü¤ãùÏC³éjpÆÏÃo¯ºa×É<—øí½­g8ùBî³ùs¾|y/tGuXüVã}5ò§:‚ÿrý¨ûM·Ãß6.þÜ a9p˜Ï¡ãÃq¡?Z?7Cóž+¡ó—8ëx©ÎéO»~nÙg_Ä>±¿èu¿êÛ®³ÑuÍ?¨úGœô8Q÷"¿ëï…TýiëuÞ¸çI¸ãNÄ™•ð9íoõøÆåÐO𵪳 ½ùɰë3[Ïø“k¡c‰ëÎãù÷O’gŒºOT瓇]ßøaäãþ%¼QýÒBõþæãö¬û‰7JovÿŸù»Xwæ]žÃ J§žõu·yï ô;óš÷=ÃÏ—º® GW_i_Ïð<Šÿýy俪C˜uü”u-Ö‹zÇÕX·‚gš7ñl¸®_ ®î·8íë$χñQè®Þ_ÿ[¸à»¡÷Á·UÿpÒ×yÕ;„¯U¼gÜ÷ÿëJà³ê{ú2ŸŠ>é9Æñ|¾ïzàñ۾ŠäiuÜg瀛ĥk¡;©SÛ¿¤îÙ€§=ÿÑ›ŽC¿0×GÁ{ÛáËÓÕÏ>tøØ>\ ËïÃÍxÔƒˆ3ô!ãéœÁƒàßuŸtøsú¥]_s=ðöÙðþ1žëÓÀgê¥íÿ/ˆ?Ž<,ž¨ŸÅð ûp0è|ŸþP¾ÀbÇá~ß|ª;º”ºÞ¬ç{y­êÅá·§Á—×CÅãW§‰ŸÕc¡ëÝu¿_èßνû>y˜î$~:A×È{Tó<¿î8xE}îzèÉÖÕZðߺ7+|º›¡g< }6ë 
mclust/data/cross.rda0000644000176200001440000001744013205037574014327 0ustar liggesusers
mclust/data/diabetes.rda0000644000176200001440000000274413205037574014757 0ustar liggesusers
mclust/data/banknote.txt.gz0000644000176200001440000000334013205037575015462 0ustar liggesusers
mclust/R/0000755000176200001440000000000013205037573011767 5ustar liggesusers
mclust/R/mclust.R0000644000176200001440000103657613175412052013426 0ustar liggesusers
Mclust <- function(data, G = NULL, modelNames
uùæ#ćkˆñ¡þ÷Áùýgh÷+ò@œ~ÏäÉÆÁû!‘燋Ø%pÖ ·0^FÝ7 WÉžÎÍ¢¯Ò8æ¯Ó¹KmŒ—éüÇ:.*yÓü"Ú«h¯A6°Î5ºà§‚v•âJ¥ʳû:üT »*äìŠJú[A¾sù>°„üÆ ò½j1‘÷&ÕµYuÚ |èž.Èû®œJÿ“ÆøR"וë>,×k…æ%òþ+Ñ=R—÷g+‘õ_Hå½\ÇóÒ8‘÷a9‘ûƒö×üÜ=—ëBym¢ÿ>üÇJÞÇ÷ðÜ6ÁÛ$Gˆ¿ç¥~.ßh^LçÆwyŸí÷|7‘ëB÷Hý <×%ø½ƒþrQ¾_\‡ ”Œÿc%ו¿7Ðÿ¡’ïwôž²¢äúÒó0f쿲Qgz_ÛO¥ÿ÷”ÜchGtåûò;J¾¯ßoóßW²þ#tÉïÄ­Òf…¾ýäÇcÖxmµÖ*oGj®oE­ÚúšQT_Í|öÿm|n®”âÒHµÙ#ÔkÍíÎÈ‹äS ôY¤Ø¤8¤¸¤x¤ø¤¤„¤ä¡ô[£¬Y¬Ù¬9¬¹¬y¬ù¬¬…¬1Ãf†Í ›63lfØÌ°™a3Ãf†Í ‡3f8Ìp˜á0Ãa†Ã ‡3\f¸Ìp™á2Ãe†Ë —.3\f¸Ìð˜á1Ãc†Ç 33|føÌð™á3Ãg†ÏŒ€3f̘0#`FÀŒ€3Bf„Ì™2#dFÈŒ!3Bf„ÌÈ3#ÏŒ<3òÌÈ3#ÏŒ<3òÌÈ3#OŒœ5:ªUK«¶V­ºZõ´êk5Ðj¨UM³4ÍÒ4KÓ,M³4ÍÒ4KÓ,M³4ÍÒ4[ÓlM³5ÍÖ4[ÓlM³5ÍÖ4[ÓlMs4ÍÑ4GÓMs4ÍÑ4GÓMs4ÍÑ4WÓ\Ms5ÍÕ4WÓz§BöŸÁ'O¸[~mclust/data/banknote.txt.gz0000644000176200001440000000334013205037575015462 0ustar liggesusers‹YKr7 Üçs‚Wƒá›Ê2«$p¥dYU‰”ržîŸ! €à˜x¡gK‚Ah4ôïÏ/ÏÏ·__Þ_ŸßÎ?¾>·ßÞ^¿=·Ÿ?žÏ¿·?>þÙ~yûòúñþ察^_Þ?ßÞ_¶ü#oà ~=`+[y¤ B5;¿»úš Óixþc<Û0n©zÛ¾ZJ»€¿GÛj‘z|§iWèöv•ýôUª-4;?Ûu=¦GÁáüœ”G§áQ]&íÎ¥Ùx¾óiÙ.óÑG ηO·•æñ´Ûgáü´«!è—q™Ø3³·º\ÒMgg¶l Êk‚<׎Ô|–õÞtz™-ó'å2ãg9o•>QÃYiÁBKeAi…\¶b¤› X¯ìkû©a¹òyÞNsý7»âQZ/àÉê‚°ˆu=µ‹ÓXŽtY!è\ÉfÈ‘t€2%®¥îk“"çåÇ¡!BLn’ƒ =Ìûƒ½8l?eiÂ̰ŽãÄk•#òï2¿ôÅTâ†ÉÐ𜚀y„¥AAÙbÃeOƒQ6Ðz9I~u˜a'Ú"Â"{óÏÏ÷çË÷¯/oO:™r‹1áŽèê‹u+ n§mr5jÃx¨Crí÷5™ªØ‹HíïVÏb“©þ…„«P]“wöÙñŠe1¾hÚ¦g$t%}Y¤Ï÷YBY}OÛa- `Ì~ CÌÈ~;n<%#×'ÅšiŒ–v àš«ù ) å(£UtžMÐâ¶r-V>.L6ªøÁ·œÈJY’È0@‚ë4mc½Þý{%l*úÔ5VsŠ7çVÌYÏ“ñHt蹋Lϼ:èæŽŸµ g„å™Öè=`ÂÈ÷RJ` G³±t(ˆBŒ ø5ðé¡ô.0£"êá:kÅ¡+ ^hiqçHAÕHÕQ âÄ[Kg›øIbÈXÊp_Hxô¯Š|¥4ÚÌSÌ‚à‹à%ù²`Ns–‚wæ Ò5©˜e$%kå‰Pz!‰LÌ”îatA#°u[Ó‰„ ¥Q/å¥6Ày±çG…Œ#JÝç·ÒÞb»slâñv&b/«©»‰çBºVH…älBÉ Fh é¬ÓzçOnL&zMÓ"^bó‰eLº§µ•kñ"¦·|±»U\âe´ç8ŽêÎ1ÓnD<¬t¯ŠÜ^é~–‘œ;¯n°§aÇÜKšx¨ã2[¥Ò‚YtÊxáÍIÇ^œï@”„$QƃÓÝYAñí¦22«ý5½…SmŒáÑlÌjÂiÖ<ÏÏ¥sHOˆÎ{ܹ[›Y˜ö _Ðän"÷@»Ah5WzQŠaßBø_®æ®™´Ÿ‚ZûqR“À¸Ã5(E¼“ÂÂ=cÁCÌÒûæ•b¸èÓÑuN ¦RæxÒÙY/Œgp™Šë,ª7I.[+ó$ b2éCŒÅ{su¾Ø¢E¬L 鞪qE¼dI§1[ü`uüÖvPÏ›£çAKš²½ D¨à®ú_u×´ ªà£ÿŸŸNicïÄŠci¿¡•:ù¬ä&cK•ñS~`§b‰€üa6á²±‘ ÓvÁ$ž“I[+U”ÿGC ×ü~|Úl¤!mclust/R/0000755000176200001440000000000013205037573011767 5ustar liggesusersmclust/R/mclust.R0000644000176200001440000103657613175412052013436 0ustar liggesusersMclust <- function(data, G = NULL, modelNames 
= NULL, prior = NULL, control = emControl(),
                   initialization = NULL, warn = mclust.options("warn"),
                   x = NULL, verbose = interactive(), ...)
{
  call <- match.call()
  data <- data.matrix(data)
  if(!is.null(x))
    if(!inherits(x, "mclustBIC"))
      stop("If provided, argument x must be an object of class 'mclustBIC'.")
  mc <- match.call(expand.dots = TRUE)
  mc[[1]] <- as.name("mclustBIC")
  mc[[2]] <- data
  Bic <- eval(mc, parent.frame())
  G <- attr(Bic, "G")
  modelNames <- attr(Bic, "modelNames")
  Sumry <- summary(Bic, data, G = G, modelNames = modelNames)
  if(length(Sumry) == 0) return()
  if(!(length(G) == 1))
    { bestG <- length(tabulate(Sumry$cl))
      if(warn)
        { if(bestG == max(G) & warn)
            warning("optimal number of clusters occurs at max choice")
          else if(bestG == min(G) & warn)
            warning("optimal number of clusters occurs at min choice")
        }
    }
  oldClass(Sumry) <- NULL
  Sumry$bic <- Sumry$bic[1]
  Sumry$hypvol <- if(is.null(attr(Bic, "Vinv")))
    as.double(NA) else 1/attr(Bic, "Vinv")
  # df <- (2*Sumry$loglik - Sumry$bic)/log(Sumry$n)
  df <- if(is.null(Sumry$modelName)) NULL else
    with(Sumry, nMclustParams(modelName, d, G,
                              noise = (!is.na(hypvol)),
                              equalPro = attr(Sumry, "control")$equalPro))
  ans <- c(list(call = call, data = data, BIC = Bic, df = df), Sumry)
  orderedNames <- c("call", "data", "modelName", "n", "d", "G",
                    "BIC", "bic", "loglik", "df", "hypvol",
                    "parameters", "z",
                    "classification", "uncertainty")
  structure(ans[orderedNames], class = "Mclust")
}

print.Mclust <- function(x, digits = getOption("digits"), ...)
{
  cat("\'", class(x)[1], "\' model object:\n", sep = "")
  G <- x$G
  noise <- !is.null(attr(x$BIC, "Vinv"))
  if(G == 0 & noise)
    { cat(" best model: single noise component\n") }
  else
    { M <- mclustModelNames(x$modelName)$type
      cat(" best model: ", M, " (", x$model, ") with ",
          G, " components\n",
          if(noise) " and a noise term\n", sep = "")
    }
  invisible()
}

summary.Mclust <- function(object, parameters = FALSE, classification = FALSE, ...)
{
  # collect info
  G <- object$G
  noise <- if(is.na(object$hypvol)) FALSE else object$hypvol
  pro <- object$parameters$pro
  if(is.null(pro)) pro <- 1
  names(pro) <- if(noise) c(seq_len(G), 0) else seq(G)
  mean <- object$parameters$mean
  if(object$d > 1)
    { sigma <- object$parameters$variance$sigma }
  else
    { sigma <- rep(object$parameters$variance$sigmasq, object$G)[1:object$G]
      names(sigma) <- names(mean)
    }
  if(is.null(object$density))
    title <- paste("Gaussian finite mixture model fitted by EM algorithm")
  else
    title <- paste("Density estimation via Gaussian finite mixture modeling")
  #
  obj <- list(title = title, n = object$n, d = object$d,
              G = G, modelName = object$modelName,
              loglik = object$loglik, df = object$df,
              bic = object$bic, icl = icl(object),
              pro = pro, mean = mean, variance = sigma,
              noise = noise,
              prior = attr(object$BIC, "prior"),
              classification = object$classification,
              printParameters = parameters,
              printClassification = classification)
  class(obj) <- "summary.Mclust"
  return(obj)
}

print.summary.Mclust <- function(x, digits = getOption("digits"), ...)
{
  cat(rep("-", nchar(x$title)), "\n", sep = "")
  cat(x$title, "\n")
  cat(rep("-", nchar(x$title)), "\n", sep = "")
  #
  if(x$G == 0)
    { cat("\nMclust model with only a noise component:\n\n") }
  else
    { cat("\nMclust ", x$modelName, " (",
          mclustModelNames(x$modelName)$type, ") model with ",
          x$G, ifelse(x$G > 1, " components", " component"),
          if(x$noise) "\nand a noise term", ":\n\n", sep = "")
    }
  #
  if(!is.null(x$prior))
    { cat("Prior: ")
      cat(x$prior$functionName, "(",
          paste(names(x$prior[-1]), x$prior[-1],
                sep = " = ", collapse = ", "),
          ")", sep = "")
      cat("\n\n")
    }
  #
  tab <- data.frame("log-likelihood" = x$loglik, "n" = x$n,
                    "df" = x$df, "BIC" = x$bic, "ICL" = x$icl,
                    row.names = "")
  print(tab, digits = digits)
  #
  cat("\nClustering table:")
  print(table(factor(x$classification,
                     levels = { l <- seq_len(x$G)
                                if(is.numeric(x$noise)) l <- c(l, 0)
                                l })),
        digits = digits)
  #
  if(x$printParameters)
    { cat("\nMixing probabilities:\n")
      print(x$pro, digits = digits)
      cat("\nMeans:\n")
      print(x$mean, digits = digits)
      cat("\nVariances:\n")
      if(x$d > 1)
        { for(g in 1:x$G)
            { cat("[,,", g, "]\n", sep = "")
              print(x$variance[,,g], digits = digits) }
        }
      else print(x$variance, digits = digits)
      if(x$noise)
        { cat("\nHypervolume of noise component:\n")
          cat(signif(x$noise, digits = digits), "\n") }
    }
  if(x$printClassification)
    { cat("\nClassification:\n")
      print(x$classification, digits = digits)
    }
  #
  invisible(x)
}

# old version, wrong df, data not needed, inefficient because
# compute dens on original scale
# logLik.Mclust <- function(object, data, ...)
# {
#   if(!missing(data))
#     object$data <- data.matrix(data)
#   par <- object$parameters
#   df <- with(object, (G-1) + G*d + nVarParams(modelName, d = d, G = G))
#
#   # l <- matrix(as.double(NA), object$n, object$G)
#   # for(k in seq(object$G))
#   #    { l[,k] <- par$pro[k] * dmvnorm(data, par$mean[,k],
#   #                                    par$variance$sigma[,,k]) }
#   # l <- sum(log(rowSums(l)))
#
#   l <- sum(log(do.call("dens", object)))
#   attr(l, "nobs") <- object$n
#   attr(l, "df") <- df
#   class(l) <- "logLik"
#   return(l)
# }

logLik.Mclust <- function(object, ...)
{
  par <- object$parameters
  # df <- with(object, (G-1) + G*d + nVarParams(modelName, d = d, G = G))
  df <- with(object, nMclustParams(modelName, d, G,
                                   noise = (!is.na(hypvol)),
                                   equalPro = attr(BIC, "control")$equalPro))
  l <- sum(do.call("dens", c(object, logarithm = TRUE)))
  attr(l, "nobs") <- object$n
  attr(l, "df") <- df
  class(l) <- "logLik"
  return(l)
}

predict.Mclust <- function(object, newdata, ...)
{
  if(!inherits(object, "Mclust"))
    stop("object not of class \"Mclust\"")
  if(missing(newdata))
    { newdata <- object$data }
  newdata <- as.matrix(newdata)
  if(ncol(object$data) != ncol(newdata))
    { stop("newdata must match ncol of object data") }
  object$data <- newdata
  prior <- object$parameters$pro
  noise <- (!is.na(object$hypvol))
  # old
  # z <- do.call("cdens", object)
  # z <- sweep(z, MARGIN = 1, FUN = "/", STATS = apply(z, 1, max))
  # z <- sweep(z, MARGIN = 2, FUN = "*", STATS = prior/sum(prior))
  # z <- sweep(z, MARGIN = 1, STATS = apply(z, 1, sum), FUN = "/")
  # new: more efficient and accurate
  z <- do.call("cdens", c(object, list(logarithm = TRUE)))
  if(noise) z <- cbind(z, log(object$parameters$Vinv))
  z <- sweep(z, MARGIN = 2, FUN = "+", STATS = log(prior/sum(prior)))
  z <- sweep(z, MARGIN = 1, FUN = "-", STATS = apply(z, 1, logsumexp))
  z <- exp(z)
  #
  cl <- c(seq(object$G), if(noise) 0)
  colnames(z) <- cl
  cl <- cl[apply(z, 1, which.max)]
  out <- list(classification = cl, z = z)
  return(out)
}

mclustBIC <- function(data, G = NULL, modelNames = NULL, prior =
NULL, control = emControl(),
                      initialization = list(hcPairs = NULL,
                                            subset = NULL,
                                            noise = NULL),
                      Vinv = NULL, warn = mclust.options("warn"),
                      x = NULL, verbose = interactive(), ...)
{
  dimData <- dim(data)
  oneD <- (is.null(dimData) || length(dimData[dimData > 1]) == 1)
  if(!oneD && length(dimData) != 2)
    stop("data must be a vector or a matrix")
  if(oneD)
    { data <- drop(as.matrix(data))
      n <- length(data)
      d <- 1
    }
  else
    { data <- as.matrix(data)
      n <- nrow(data)
      d <- ncol(data)
    }
  if(is.null(x))
    {
      if(is.null(modelNames))
        { if(d == 1)
            { modelNames <- c("E", "V") }
          else
            { modelNames <- mclust.options("emModelNames")
              if(n <= d)
                { # select only spherical and diagonal models
                  m <- match(modelNames,
                             c("EII", "VII", "EEI", "VEI", "EVI", "VVI"),
                             nomatch = 0)
                  modelNames <- modelNames[m]
                }
            }
        }
      if(!is.null(prior))
        { # remove models not available with prior
          modelNames <- setdiff(modelNames, c("EVE","VEE","VVE","EVV"))
        }
      if(is.null(G))
        { G <- if (is.null(initialization$noise)) 1:9 else 0:9 }
      else
        { G <- sort(as.integer(unique(G))) }
      if(is.null(initialization$noise))
        { if (any(G > n)) G <- G[G <= n] }
      else
        { noise <- initialization$noise
          # TODO: remove after check
          # if(!is.logical(noise))
          #   { if(any(match(noise, 1:n, nomatch = 0) == 0))
          #       stop("numeric noise must correspond to row indexes of data")
          #     noise <- as.logical(match(1:n, noise, nomatch = 0))
          #   }
          # initialization$noise <- noise
          # nnoise <- sum(as.numeric(noise))
          if(is.logical(noise)) noise <- which(noise)
          if(any(match(noise, 1:n, nomatch = 0) == 0))
            stop("numeric or logical vector for noise must correspond to row indexes of data")
          initialization$noise <- noise
          nnoise <- length(noise)
          if(any(G > (n-nnoise))) G <- G[G <= n-nnoise]
        }
      if(!is.null(initialization$subset))
        { subset <- initialization$subset
          if(is.logical(subset)) subset <- which(subset)
          initialization$subset <- subset
          if(any(G > n)) G <- G[G <= n]
        }
      Gall <- G
      Mall <- modelNames
    }
  else
    {
      if(!missing(prior) || !missing(control) ||
         !missing(initialization) || !missing(Vinv))
stop("only G and modelNames may be specified as arguments when x is supplied")
      prior <- attr(x,"prior")
      control <- attr(x,"control")
      initialization <- attr(x,"initialization")
      Vinv <- attr(x,"Vinv")
      warn <- attr(x,"warn")
      Glabels <- dimnames(x)[[1]]
      Mlabels <- dimnames(x)[[2]]
      if(is.null(G)) G <- Glabels
      if(is.null(modelNames)) modelNames <- Mlabels
      Gmatch <- match(as.character(G), Glabels, nomatch = 0)
      Mmatch <- match(modelNames, Mlabels, nomatch = 0)
      if(all(Gmatch) && all(Mmatch))
        { out <- x[as.character(G),modelNames,drop=FALSE]
          mostattributes(out) <- attributes(x)
          attr(out, "dim") <- c(length(G), length(modelNames))
          attr(out, "dimnames") <- list(G, modelNames)
          attr(out, "G") <- as.numeric(G)
          attr(out, "modelNames") <- modelNames
          attr(out, "returnCodes") <-
            attr(x, "returnCodes")[as.character(G),modelNames,drop=FALSE]
          return(out)
        }
      Gall <- sort(as.numeric(unique(c(as.character(G), Glabels))))
      Mall <- unique(c(modelNames, Mlabels))
    }
  if(any(as.numeric(G) < 0))
    { if(is.null(initialization$noise))
        { stop("G must be positive") }
      else
        { stop("G must be nonnegative") }
    }
  if(d == 1 && any(nchar(modelNames) > 1))
    { Emodel <- any(sapply(modelNames, function(x)
        charmatch("E", x, nomatch = 0)[1]) == 1)
      Vmodel <- any(sapply(modelNames, function(x)
        charmatch("V", x, nomatch = 0)[1]) == 1)
      modelNames <- c("E", "V")[c(Emodel, Vmodel)]
    }
  # set subset (if not provided) for initialization when data size is
  # larger than the value specified in mclust.options()
  if(n > .mclust$subset & is.null(initialization$subset))
    { initialization$subset <- sample(seq.int(n), size = .mclust$subset,
                                      replace = FALSE)
    }
  l <- length(Gall)
  m <- length(Mall)
  if(verbose)
    { cat("fitting ...\n")
      flush.console()
      pbar <- txtProgressBar(min = 0, max = l*m+1, style = 3)
      on.exit(close(pbar))
      ipbar <- 0
    }
  EMPTY <- -.Machine$double.xmax
  BIC <- RET <- matrix(EMPTY, nrow = l, ncol = m,
                       dimnames = list(as.character(Gall),
                                       as.character(Mall)))
  if(!is.null(x))
    { BIC[dimnames(x)[[1]],dimnames(x)[[2]]] <- x
RET[dimnames(x)[[1]],dimnames(x)[[2]]] <- attr(x, "returnCodes")
      BIC <- BIC[as.character(G),modelNames,drop=FALSE]
      RET <- RET[as.character(G),modelNames,drop=FALSE]
    }
  G <- as.numeric(G)
  Glabels <- as.character(G)
  Gout <- G
  if(is.null(initialization$noise))
    { ## standard case ----
      if (G[1] == 1)
        { for(mdl in modelNames[BIC["1",] == EMPTY])
            { out <- mvn(modelName = mdl, data = data, prior = prior)
              BIC["1", mdl] <- bic(modelName = mdl, loglik = out$loglik,
                                   n = n, d = d, G = 1, equalPro = FALSE)
              RET["1", mdl] <- attr(out, "returnCode")
              if(verbose)
                { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
            }
          if (l == 1)
            { BIC[BIC == EMPTY] <- NA
              if(verbose)
                { ipbar <- l*m+1; setTxtProgressBar(pbar, ipbar) }
              return(structure(BIC, G = G, modelNames = modelNames,
                               prior = prior, control = control,
                               initialization = initialization,
                               warn = warn, n = n, d = d, oneD = oneD,
                               returnCodes = RET, class = "mclustBIC"))
            }
          G <- G[-1]
          Glabels <- Glabels[-1]
        }
      if (is.null(initialization$subset))
        { ## all data in initial hierarchical clustering phase (no subset) ----
          if (is.null(initialization$hcPairs))
            { if (d != 1)
                { if (n > d)
                    { hcPairs <- hc(data = data,
                                    modelName = mclust.options("hcModelNames")[1])
                    }
                  else
                    { hcPairs <- hc(data = data, modelName = "EII") }
                }
              else
                { hcPairs <- NULL
                  # hcPairs <- hc(data = data, modelName = "E")
                }
            }
          else hcPairs <- initialization$hcPairs
          if (d > 1 || !is.null(hcPairs))
            clss <- hclass(hcPairs, G)
          for (g in Glabels)
            { if (d > 1 || !is.null(hcPairs))
                { cl <- clss[,g] }
              else
                { cl <- qclass( data, as.numeric(g)) }
              if(verbose)
                { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
              z <- unmap(cl, groups = 1:max(cl))
              if(any(apply( z, 2, max) == 0) & warn)
                { # missing groups
                  if(warn) warning("there are missing groups")
                  small <- sqrt(.Machine$double.neg.eps)
                  z[z < small] <- small
                  z <- t(apply( z, 1, function(x) x/sum(x)))
                }
              for(modelName in na.omit(modelNames[BIC[g,] == EMPTY]))
                { out <- me(modelName = modelName, data = data, z = z,
                            prior = prior, control = control, warn
= warn)
                  BIC[g, modelName] <- bic(modelName = modelName,
                                           loglik = out$loglik, n = n, d = d,
                                           G = as.numeric(g),
                                           equalPro = control$equalPro)
                  RET[g, modelName] <- attr(out, "returnCode")
                  if(verbose)
                    { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
                }
            }
        }
      else
        { ## initial hierarchical clustering phase on a subset ----
          subset <- initialization$subset
          # TODO: remove after check
          # if (is.logical(subset)) subset <- which(subset)
          if (is.null(initialization$hcPairs))
            { if (d != 1)
                { if (n > d)
                    { hcPairs <- hc(modelName = mclust.options("hcModelNames")[1],
                                    data = data[subset,])
                    }
                  else
                    { hcPairs <- hc(modelName = "EII", data = data[subset,]) }
                }
              else
                { hcPairs <- NULL
                  # hcPairs <- hc(modelName = "E", data = data[subset])
                }
            }
          else hcPairs <- initialization$hcPairs
          if (d > 1 || !is.null(hcPairs))
            clss <- hclass(hcPairs, G)
          for (g in Glabels)
            { if (d > 1 || !is.null(hcPairs))
                { cl <- clss[, g] }
              else
                { cl <- qclass(data[subset], as.numeric(g)) }
              if(verbose)
                { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
              z <- unmap(cl, groups = 1:max(cl))
              if(any(apply( z, 2, max) == 0) & warn)
                { # missing groups
                  if(warn) warning("there are missing groups")
                  small <- sqrt(.Machine$double.neg.eps)
                  z[z < small] <- small
                  z <- t(apply( z, 1, function(x) x/sum(x)))
                }
              for (modelName in modelNames[!is.na(BIC[g,])])
                { ms <- mstep(modelName = modelName, z = z,
                              data = as.matrix(data)[initialization$subset,],
                              prior = prior, control = control, warn = warn)
                  #
                  # ctrl <- control
                  # ctrl$itmax[1] <- 1
                  # ms <- me(modelName = modelName, data = as.matrix(data)[
                  #          initialization$subset, ], z = z,
                  #          prior = prior, control = ctrl)
                  #
                  es <- do.call("estep", c(list(data = data, warn = warn), ms))
                  out <- me(modelName = modelName, data = data, z = es$z,
                            prior = prior, control = control, warn = warn)
                  BIC[g, modelName] <- bic(modelName = modelName,
                                           loglik = out$loglik, n = n, d = d,
                                           G = as.numeric(g),
                                           equalPro = control$equalPro)
                  RET[g, modelName] <- attr(out, "returnCode")
                  if(verbose)
                    { ipbar <- ipbar+1;
setTxtProgressBar(pbar, ipbar) }
                }
            }
        }
    }
  else
    { ## noise case ----
      noise <- initialization$noise
      if (is.null(Vinv) || Vinv <= 0)
        Vinv <- hypvol(data, reciprocal = TRUE)
      if (is.null(initialization$subset))
        { ## all data in initial hierarchical clustering phase (no subset) ----
          if(nnoise == n)
            stop("All observations cannot be initialised as noise!")
          if (!G[1])
            { hood <- n * log(Vinv)
              BIC["0",] <- 2 * hood - log(n)
              if (l == 1)
                { return(structure(BIC, G = G, modelNames = modelNames,
                                   prior = prior, control = control,
                                   initialization = list(hcPairs = hcPairs,
                                                         noise = initialization$noise),
                                   warn = warn, n = n, d = d, oneD = oneD,
                                   returnCodes = RET, class = "mclustBIC"))
                }
              G <- G[-1]
              Glabels <- Glabels[-1]
            }
          if (is.null(initialization$hcPairs))
            { if (d != 1)
                { if (n > d)
                    { hcPairs <- hc(modelName = mclust.options("hcModelNames")[1],
                                    data = data[-noise, ])
                    }
                  else
                    { hcPairs <- hc(modelName = "EII", data = data[-noise, ]) }
                }
              else
                { hcPairs <- NULL
                  # hcPairs <- hc(modelName = "E", data = data[-noise])
                }
            }
          else hcPairs <- initialization$hcPairs
          if (d > 1 || !is.null(hcPairs))
            clss <- hclass(hcPairs, G)
          if(verbose)
            { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
          z <- matrix(0, n, max(G) + 1)
          for (g in Glabels)
            { z[] <- 0
              k <- as.numeric(g)
              if(d > 1 || !is.null(hcPairs))
                { cl <- clss[,g] }
              else
                { cl <- qclass(data[!noise], k = k) }
              z[-noise,1:k] <- unmap(cl, groups = 1:max(cl))
              if(any(apply(z[-noise,1:k,drop=FALSE], 2, max) == 0) & warn)
                { # missing groups
                  if(warn) warning("there are missing groups")
                  z[-noise,1:k] <- pmax(z[-noise,1:k],
                                        sqrt(.Machine$double.neg.eps))
                  z[-noise,1:k] <- t(apply(z[-noise,1:k,drop=FALSE], 1,
                                           function(z) z/sum(z)))
                }
              z[noise, k+1] <- 1
              K <- 1:(k+1)
              for (modelName in na.omit(modelNames[BIC[g,] == EMPTY]))
                { out <- me(modelName = modelName, data = data, z = z[, K],
                            prior = prior, Vinv = Vinv,
                            control = control, warn = warn)
                  BIC[g, modelName] <- bic(modelName = modelName,
                                           loglik = out$loglik, n = n, d = d,
                                           G = k, noise = TRUE,
                                           equalPro = control$equalPro)
                  RET[g, modelName] <- attr(out, "returnCode")
                  if(verbose)
                    { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
                }
            }
        }
      else
        { ## initial hierarchical clustering phase on a subset ----
          subset <- initialization$subset
          subset <- setdiff(subset, noise) # remove noise obs from subset
          initialization$subset <- subset
          if(length(subset) == 0)
            stop("No observations in the initial subset after removing the noise!")
          if (!G[1])
            { hood <- n * log(Vinv)
              BIC["0",] <- 2 * hood - log(n)
              if (l == 1)
                { return(structure(BIC, G = G, modelNames = modelNames,
                                   prior = prior, control = control,
                                   initialization = list(hcPairs = hcPairs,
                                                         subset = initialization$subset),
                                   warn = warn, n = n, d = d, oneD = oneD,
                                   returnCodes = RET, class = "mclustBIC"))
                }
              G <- G[-1]
              Glabels <- Glabels[-1]
            }
          if (is.null(initialization$hcPairs))
            { if (d != 1)
                { if (n > d)
                    { hcPairs <- hc(modelName = mclust.options("hcModelNames")[1],
                                    data = data[subset,])
                    }
                  else
                    { hcPairs <- hc(modelName = "EII", data = data[subset,]) }
                }
              else
                { hcPairs <- NULL
                  # hcPairs <- hc(modelName = "E", data = data[subset])
                }
            }
          else hcPairs <- initialization$hcPairs
          if (d > 1 || !is.null(hcPairs))
            clss <- hclass(hcPairs, G)
          if(verbose)
            { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
          for (g in Glabels)
            { k <- as.numeric(g)
              if (d > 1 || !is.null(hcPairs))
                { cl <- clss[, g] }
              else
                { cl <- qclass(data[subset], k = k) }
              z <- unmap(cl, groups = 1:max(cl))
              if(any(apply(z, 2, max) == 0) & warn)
                { # missing groups
                  if(warn) warning("there are missing groups")
                  small <- sqrt(.Machine$double.neg.eps)
                  z[z < small] <- small
                  z <- t(apply( z, 1,
function(x) x/sum(x)))
                }
              for (modelName in na.omit(modelNames[BIC[g,] == EMPTY]))
                { ms <- mstep(modelName = modelName, z = z,
                              data = as.matrix(data)[subset,],
                              prior = prior, control = control, warn = warn)
                  es <- do.call("estep", c(list(data = data, warn = warn), ms))
                  if(is.na(es$loglik))
                    { BIC[g, modelName] <- NA
                      RET[g, modelName] <- attr(es, "returnCode")
                    }
                  else
                    { es$z <- cbind(es$z, 0)
                      es$z[noise,] <- matrix(c(rep(0,k),1), byrow = TRUE,
                                             nrow = length(noise), ncol = k+1)
                      out <- me(modelName = modelName, data = data, z = es$z,
                                prior = prior, Vinv = Vinv,
                                control = control, warn = warn)
                      BIC[g, modelName] <- bic(modelName = modelName,
                                               loglik = out$loglik,
                                               n = n, d = d, G = k,
                                               noise = TRUE,
                                               equalPro = control$equalPro)
                      RET[g, modelName] <- attr(out, "returnCode")
                    }
                  if(verbose)
                    { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) }
                }
            }
        }
    }
  if(verbose)
    { ipbar <- l*m+1; setTxtProgressBar(pbar, ipbar) }
  if(!is.null(prior) & any(is.na(BIC)))
    warning("The presence of BIC values equal to NA is likely due to one or more of the mixture proportions being estimated as zero, so that the model estimated reduces to one with a smaller number of components.")
  structure(BIC, G = Gout, modelNames = modelNames,
            prior = prior, Vinv = Vinv, control = control,
            initialization = list(hcPairs = hcPairs,
                                  subset = initialization$subset,
                                  noise = initialization$noise),
            warn = warn, n = n, d = d, oneD = oneD,
            criterion = "BIC", returnCodes = RET,
            class = "mclustBIC")
}

print.mclustBIC <- function(x, pick = 3, ...)
{
  subset <- !is.null(attr(x, "subset"))
  oldClass(x) <- attr(x, "args") <- NULL
  attr(x, "criterion") <- NULL
  attr(x, "control") <- attr(x, "initialization") <- NULL
  attr(x, "oneD") <- attr(x, "warn") <- attr(x, "Vinv") <- NULL
  attr(x, "prior") <- attr(x, "G") <- attr(x, "modelNames") <- NULL
  ret <- attr(x, "returnCodes") == -3
  n <- attr(x, "n")
  d <- attr(x, "d")
  attr(x, "returnCodes") <- attr(x, "n") <- attr(x, "d") <- NULL
  cat("Bayesian Information Criterion (BIC):\n")
  NextMethod("print")
  cat("\n")
  cat("Top", pick, "models based on the BIC criterion:\n")
  print(pickBIC(x, pick), ...)
  invisible()
}

summary.mclustBIC <- function(object, data, G, modelNames, ...)
{
  mc <- match.call(expand.dots = FALSE)
  if(missing(data))
    { if(!missing(G))
        object <- object[rownames(object) %in% G,,drop=FALSE]
      if(!missing(modelNames))
        object <- object[,colnames(object) %in% modelNames,drop=FALSE]
      ans <- pickBIC(object, ...)
      class(ans) <- "summary.mclustBIC"
    }
  else
    { if(is.null(attr(object,"initialization")$noise))
        { mc[[1]] <- as.name("summaryMclustBIC") }
      else
        { mc[[1]] <- as.name("summaryMclustBICn") }
      warn <- attr(object, "warn")
      ans <- eval(mc, parent.frame())
      if(length(ans) == 0) return(ans)
      Glabels <- dimnames(object)[[1]]
      if(length(Glabels) != 1 && (!missing(G) && length(G) > 1))
        { Grange <- range(as.numeric(Glabels))
          if(match(ans$G, Grange, nomatch = 0) & warn)
            warning("best model occurs at the min or max of number of components considered!")
        }
    }
  ans
}

summaryMclustBIC <- function (object, data, G = NULL, modelNames = NULL, ...)
{
  dimData <- dim(data)
  oneD <- (is.null(dimData) || length(dimData[dimData > 1]) == 1)
  if (!oneD && length(dimData) != 2)
    stop("data must be a vector or a matrix")
  if (oneD)
    { data <- drop(as.matrix(data))
      n <- length(data)
      d <- 1
    }
  else
    { data <- as.matrix(data)
      n <- nrow(data)
      d <- ncol(data)
    }
  initialization <- attr(object, "initialization")
  hcPairs <- initialization$hcPairs
  subset <- initialization$subset
  prior <- attr(object, "prior")
  control <- attr(object, "control")
  warn <- attr(object, "warn")
  oldClass(object) <- NULL
  attr(object, "prior") <- attr(object, "warn") <- NULL
  attr(object, "modelNames") <- attr(object, "oneD") <- NULL
  attr(object, "initialization") <- attr(object, "control") <- NULL
  d <- if (is.null(dim(data))) 1 else ncol(data)
  if(is.null(G))
    G <- dimnames(object)[[1]]
  if(is.null(modelNames))
    modelNames <- dimnames(object)[[2]]
  bestBICs <- pickBIC(object[as.character(G), modelNames, drop = FALSE], k = 3)
  if(all(is.na(bestBICs)))
    { return(structure(list(), bestBICvalues = bestBICs,
                       prior = prior, control = control,
                       initialization = initialization,
                       class = "summary.mclustBIC"))
    }
  temp <- unlist(strsplit(names(bestBICs)[1], ","))
  bestModel <- temp[1]
  G <- as.numeric(temp[2])
  if(G == 1)
    { out <- mvn(modelName = bestModel, data = data, prior = prior)
      ans <- c(list(bic = bestBICs,
                    z = unmap(rep(1,n)),
                    classification = rep(1, n),
                    uncertainty = rep(0, n)),
               out)
      orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik",
                        "parameters", "z",
                        "classification", "uncertainty")
      return(structure(ans[orderedNames], bestBICvalues = bestBICs,
                       prior = prior, control = control,
                       initialization = initialization,
                       class = "summary.mclustBIC"))
    }
  if(is.null(subset))
    { if(d > 1 || !is.null(hcPairs))
        { z <- unmap(hclass(hcPairs, G)) }
      else
        { z <- unmap(qclass(data, G), groups = 1:G) }
      out <- me(modelName = bestModel, data = data, z = z,
                prior = prior, control = control, warn = warn)
      if(sum((out$parameters$pro - colMeans(out$z))^2) >
         sqrt(.Machine$double.eps))
        { # perform extra M-step and update parameters
          ms <- mstep(modelName = bestModel, data = data, z = out$z,
                      prior = prior, warn = warn)
          if(attr(ms, "returnCode") == 0)
            out$parameters <- ms$parameters
        }
    }
  else
    { if(d > 1 || !is.null(hcPairs))
        { z <- unmap(hclass(hcPairs, G)) }
      else
        { z <- unmap(qclass(data[subset], G)) }
      ms <- mstep(modelName = bestModel, prior = prior, z = z,
                  data = as.matrix(data)[subset,],
                  control = control, warn = warn)
      es <- do.call("estep", c(list(data = data), ms))
      out <- me(modelName = bestModel, data = data, z = es$z,
                prior = prior, control = control, warn = warn)
      # perform extra M-step and update parameters
      ms <- mstep(modelName = bestModel, data = data, z = out$z,
                  prior = prior, warn = warn)
      if(attr(ms, "returnCode") == 0)
        out$parameters <- ms$parameters
    }
  obsNames <- if (is.null(dim(data)))
    names(data) else dimnames(data)[[1]]
  classification <- map(out$z, warn = warn)
  uncertainty <- 1 - apply(out$z, 1, max)
  names(classification) <- names(uncertainty) <- obsNames
  ans <- c(list(bic = bic(bestModel, out$loglik, out$n, out$d, out$G,
                          noise = FALSE, equalPro = control$equalPro),
                # bic = as.vector(bestBICs[1]),
                classification = classification,
                uncertainty = uncertainty),
           out)
  orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik",
                    "parameters", "z",
                    "classification", "uncertainty")
  structure(ans[orderedNames], bestBICvalues = bestBICs,
            prior = prior, control = control,
            initialization = initialization,
            class = "summary.mclustBIC")
}

summaryMclustBICn <- function(object, data, G = NULL, modelNames = NULL, ...)
{
  dimData <- dim(data)
  oneD <- is.null(dimData) || length(dimData[dimData > 1]) == 1
  if(!oneD && length(dimData) != 2)
    stop("data must be a vector or a matrix")
  if(oneD)
    { data <- drop(as.matrix(data))
      n <- length(data)
      d <- 1
    }
  else
    { data <- as.matrix(data)
      n <- nrow(data)
      d <- ncol(data)
    }
  initialization <- attr(object, "initialization")
  hcPairs <- initialization$hcPairs
  subset <- initialization$subset
  noise <- initialization$noise
  # TODO: remove after check
  # if(!is.logical(noise))
  #   noise <- as.logical(match(1:n, noise, nomatch = 0))
  if(is.logical(noise)) noise <- which(noise)
  prior <- attr(object, "prior")
  control <- attr(object, "control")
  warn <- attr(object, "warn")
  Vinv <- attr(object, "Vinv")
  oldClass(object) <- NULL
  attr(object, "control") <- attr(object, "initialization") <- NULL
  attr(object, "prior") <- attr(object, "Vinv") <- NULL
  attr(object, "warn") <- NULL
  ##
  if (is.null(G))
    G <- dimnames(object)[[1]]
  if (is.null(modelNames))
    modelNames <- dimnames(object)[[2]]
  bestBICs <- pickBIC(object[as.character(G), modelNames, drop = FALSE], k = 3)
  if(all(is.na(bestBICs)))
    { return(structure(list(), bestBICvalues = bestBICs,
                       prior = prior, control = control,
                       initialization = initialization,
                       class = "summary.mclustBIC"))
    }
  temp <- unlist(strsplit(names(bestBICs)[1], ","))
  bestModel <- temp[1]
  G <- as.numeric(temp[2])
  if(G == 0)
    { ans <- list(bic = bestBICs[1],
                  z = unmap(rep(0,n)),
                  classification = rep(0, n),
                  uncertainty = rep(0, n),
                  n = n, d = ncol(data),
                  modelName = bestModel, G = 0,
                  loglik = n * log(Vinv),
                  Vinv = Vinv, parameters = NULL)
      orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik",
                        "Vinv", "parameters", "z",
                        "classification", "uncertainty")
      return(structure(ans[orderedNames], bestBICvalues = bestBICs,
                       prior = prior, control = control,
                       initialization = initialization,
                       class = "summary.mclustBIC"))
    }
  G1 <- G + 1
  if(is.null(subset))
    { z <- matrix(0, n, G1)
      if(d > 1 || !is.null(hcPairs))
        { z[-noise, 1:G] <- unmap(hclass(hcPairs, G)) }
      else
        {
z[-noise, 1:G] <- unmap(qclass(data[-noise], G)) }
      z[noise, G1] <- 1
      out <- me(modelName = bestModel, data = data, z = z,
                prior = prior, Vinv = Vinv,
                control = control, warn = warn)
    }
  else
    { subset <- setdiff(subset, noise) # set subset among those obs not noise
      if(d > 1 || !is.null(hcPairs))
        { z <- unmap(hclass(hcPairs, G)) }
      else
        { z <- unmap(qclass(data[subset], G)) }
      ms <- mstep(modelName = bestModel, data = as.matrix(data)[subset,],
                  z = z, prior = prior, control = control, warn = warn)
      es <- do.call("estep", c(list(data = data, warn = warn), ms))
      es$z <- cbind(es$z, 0)
      es$z[noise,] <- matrix(c(rep(0,G),1), byrow = TRUE,
                             nrow = length(noise), ncol = G+1)
      out <- me(modelName = bestModel, data = data, z = es$z,
                prior = prior, Vinv = Vinv,
                control = control, warn = warn)
    }
  obsNames <- if(is.null(dim(data)))
    names(data) else dimnames(data)[[1]]
  classification <- map(out$z, warn = warn)
  classification[classification == G1] <- 0
  uncertainty <- 1 - apply(out$z, 1, max)
  names(classification) <- names(uncertainty) <- obsNames
  ans <- c(list(bic = as.vector(bestBICs[1]),
                classification = classification,
                uncertainty = uncertainty,
                Vinv = Vinv),
           out)
  orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik",
                    "parameters", "Vinv", "z",
                    "classification", "uncertainty")
  structure(ans[orderedNames], bestBICvalues = bestBICs,
            prior = prior, control = control,
            initialization = initialization,
            class = "summary.mclustBIC")
}

print.summary.mclustBIC <- function(x, digits = getOption("digits"), ...)
{
  if("classification" %in% names(x))
    { bic <- attr(x,"bestBICvalues")
      l <- length(bic)
      if(l == 1)
        { cat("BIC value:\n")
          print(bic, digits = digits)
        }
      else
        { cat("Best BIC values:\n")
          bic <- drop(as.matrix(bic))
          bic <- rbind(BIC = bic, "BIC diff" = bic - max(bic))
          print(bic, digits = digits)
        }
      cat("\nClassification table for model (", colnames(bic)[1], "):",
          sep = "")
      print(table(x$classification), digits = digits, ...)
}
  else
    { cat("Best BIC values:\n")
      x <- if(length(x) == 0)
        attr(x,"bestBICvalues") else drop(as.matrix(x))
      x <- rbind(BIC = x, "BIC diff" = x - max(x))
      print(x, digits = digits)
    }
  invisible()
}

pickBIC <- function(x, k = 3, ...)
{
  if(!is.matrix(x))
    { warning("sorry, the pickBIC function cannot be applied to the provided argument!")
      return()
    }
  Glabels <- dimnames(x)[[1]]
  modelNames <- dimnames(x)[[2]]
  mis <- is.na(x)
  if(all(mis))
    { warning("none of the selected models could be fitted")
      return(rep(NA,k))
    }
  x[mis] <- - .Machine$double.xmax
  x <- data.frame(as.vector(x),
                  Glabels[as.vector(row(x))],
                  modelNames[as.vector(col(x))])
  # x <- x[rev(order(x[,1])),]
  # order by including first simpler models if ties are present
  x <- x[order(-x[, 1], x[,2], x[,3]),]
  namesx <- apply(x[,-1,drop = FALSE], 1, function(z)
    paste(as.character(z[2]), as.character(z[1]), sep = ","))
  k <- min(k, nrow(x))
  x <- x[1:k,1]
  x[x == - .Machine$double.xmax] <- NA
  namesx <- namesx[1:k]
  namesx[is.na(x)] <- " "
  names(x) <- namesx
  x
}

mclustModel <- function(data, BICvalues, G=NULL, modelNames=NULL, ...)
{
  mc <- match.call(expand.dots = FALSE)
  if (is.null(attr(BICvalues,"initialization")$noise))
    { mc[[1]] <- as.name("summaryMclustBIC") }
  else
    { mc[[1]] <- as.name("summaryMclustBICn") }
  nm <- names(mc)
  mc[1:3] <- mc[c(1,3,2)]
  nm[1:3] <- nm[c(1,3,2)]
  nm[nm == "BICvalues"] <- "object"
  names(mc) <- nm
  ans <- eval(mc, parent.frame())
  ans$classification <- ans$uncertainty <- NULL
  attr( ans, "bestBICvalues") <- NULL
  attr( ans, "prior") <- NULL
  attr( ans, "control") <- NULL
  attr( ans, "initialization") <- NULL
  oldClass(ans) <- "mclustModel"
  ans
}

mclustModelNames <- function(model)
{
  type <- switch(EXPR = as.character(model),
                 "E"   = "univariate, equal variance",
                 "V"   = "univariate, unequal variance",
                 "EII" = "spherical, equal volume",
                 "VII" = "spherical, varying volume",
                 "EEI" = "diagonal, equal volume and shape",
                 "VEI" = "diagonal, equal shape",
                 "EVI" = "diagonal, equal volume, varying shape",
                 "VVI" = "diagonal, varying volume and shape",
                 "EEE" = "ellipsoidal, equal volume, shape and orientation",
                 "EVE" = "ellipsoidal, equal volume and orientation",
                 "VEE" = "ellipsoidal, equal shape and orientation",
                 "VVE" = "ellipsoidal, equal orientation",
                 "EEV" = "ellipsoidal, equal volume and shape",
                 "VEV" = "ellipsoidal, equal shape",
                 "EVV" = "ellipsoidal, equal volume",
                 "VVV" = "ellipsoidal, varying volume, shape, and orientation",
                 "X"   = "univariate normal",
                 "XII" = "spherical multivariate normal",
                 "XXI" = "diagonal multivariate normal",
                 "XXX" = "ellipsoidal multivariate normal",
                 warning("invalid model"))
  return(list(model = model, type = type))
}

defaultPrior <- function(data, G, modelName, ...)
{
  aux <- list(...)
if(is.null(aux$shrinkage))
    { shrinkage <- 0.01 }
  else if(is.na(aux$shrinkage) || !aux$shrinkage)
    { shrinkage <- 0 }
  else if(aux$shrinkage < 0)
    { stop("negative value given for shrinkage") }
  else
    { shrinkage <- aux$shrinkage }
  if(is.null(aux$mean))
    { mean <- if (is.null(dim(data))) mean(data) else colMeans(data) }
  else if(any(is.na(aux$mean)))
    { if(shrinkage)
        stop("positive shrinkage with no prior mean specified")
      mean <- if (is.null(dim(data))) mean(data) else colMeans(data)
    }
  else
    { if(!shrinkage)
        stop("prior mean specified but not shrinkage")
      mean <- aux$mean
    }
  switch(EXPR = modelName,
         E = , V = , X = {
           dof <- 3
           if(is.null(aux$scale))
             { scale <- var(data)/G^2 }
           else
             { scale <- aux$scale }
           list(shrinkage = shrinkage, mean = mean,
                dof = dof, scale = scale)
         },
         ##
         EII = , VII = , XII = ,
         EEI = , EVI = , VEI = , VVI = , XXI = {
           n <- nrow(data)
           p <- ncol(data)
           dof <- p + 2
           if(is.null(aux$scale))
             { fac <- (1/G)^(2/p)
               scale <- (fac * sum(apply(data, 2, var)))/p
             }
           else
             { scale <- aux$scale }
           list(shrinkage = shrinkage, mean = mean,
                dof = dof, scale = scale)
         },
         ##
         EEE = , EVE = , VEE = , VVE = ,
         EEV = , VEV = , EVV = , VVV = , XXX = {
           n <- nrow(data)
           p <- ncol(data)
           dof <- p + 2
           if(is.null(aux$scale))
             { fac <- (1/G)^(2/p)
               if(n > p)
                 { scale <- fac * var(data) }
               else
                 { scale <- fac * diag(apply(data, 2, var)) }
             }
           else
             { scale <- aux$scale }
           list(shrinkage = shrinkage, mean = mean,
                dof = dof, scale = scale)
         },
         stop("no default prior for this model"))
}

emControl <- function(eps = .Machine$double.eps,
                      tol = c(1.0e-05, sqrt(.Machine$double.eps)),
                      itmax = c(.Machine$integer.max, .Machine$integer.max),
                      equalPro = FALSE)
{
  if(any(eps < 0)) stop("eps is negative")
  if(any(eps >= 1)) stop("eps is not less than 1")
  if(any(tol < 0)) stop("tol is negative")
  if(any(tol >= 1)) stop("tol is not less than 1")
  if(any(itmax < 0)) stop("itmax is negative")
  if(length(tol) == 1) tol <- rep(tol, 2)
  if(length(itmax) == 1) itmax <- c(itmax, .Machine$integer.max)
  i <- is.infinite(itmax)
  if(any(i))
itmax[i] <- .Machine$integer.max
  list(eps = eps, tol = tol, itmax = itmax, equalPro = equalPro)
}

priorControl <- function(functionName = "defaultPrior", ...)
{
  c(list(functionName = functionName), list(...))
}

cdensEEE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  if(is.null(dimdat) || length(dimdat) > 2)
    stop("data must be a matrix or a vector")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  mu <- as.matrix(parameters$mean)
  G <- ncol(mu)
  if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) ||
     any(is.null(parameters[c("pro", "mean", "variance")]))) {
    WARNING <- "parameters are missing"
    if(warn) warning(WARNING)
    z <- matrix(as.double(NA), n, G)
    dimnames(z) <- list(dimnames(data)[[1]], NULL)
    return(structure(z, logarithm = logarithm, modelName = "EEE",
                     WARNING = WARNING, returnCode = 9))
  }
  if(is.null(parameters$variance$cholSigma))
    stop("variance parameters are missing")
  temp <- .Fortran("eseee",
                   as.logical(1),
                   as.double(data),
                   as.double(mu),
                   as.double(parameters$variance$cholSigma),
                   as.double(-1),
                   as.integer(n),
                   as.integer(p),
                   as.integer(G),
                   as.double(-1),
                   double(p),
                   double(1),
                   double(n * G),
                   PACKAGE = "mclust")[10:12]
  lapackCholInfo <- temp[[1]][1]
  loglik <- temp[[2]]
  z <- matrix(temp[[3]], n, G)
  WARNING <- NULL
  if(lapackCholInfo) {
    if(lapackCholInfo > 0) {
      WARNING <- "sigma is not positive definite"
      if(warn) warning(WARNING)
    } else {
      WARNING <- "input error for LAPACK DPOTRF"
      if(warn) warning(WARNING)
    }
    z[] <- NA
    ret <- -9
  } else if(loglik > signif(.Machine$double.xmax, 6)) {
    WARNING <- "singular covariance"
    if(warn) warning(WARNING)
    z[] <- NA
    ret <- -1
  } else {
    if(!logarithm) z <- exp(z)
    ret <- 0
  }
  dimnames(z) <- list(dimnames(data)[[1]], NULL)
  structure(z, logarithm = logarithm, modelName = "EEE",
            WARNING = WARNING, returnCode = ret)
}

emEEE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...)
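## Illustrative usage sketch (not run; `X` and `par` are hypothetical stand-ins
## for a numeric data matrix and a parameter list as returned by mstepEEE()):
##   es  <- estepEEE(X, parameters = par)   # one E-step at `par`
##   fit <- meEEE(X, z = es$z)              # EM iterations from those memberships
## emEEE() below simply chains these two calls.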
{ z <- estepEEE(data, parameters = parameters, warn = warn)$z meEEE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEEE <- function(data, parameters, warn = NULL, ...) { if (is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) > 2) stop("data must be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EEE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$cholSigma)) stop("variance parameters are missing") temp <- .Fortran("eseee", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholSigma), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(1), double(n * K), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, K) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" warning(WARNING) ret <- -4 } else { WARNING <- "input error for LAPACK DPOTRF" warning(WARNING) ret <- -5 } z[] <- loglik <- NA } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- 
list(dimnames(data)[[1]], NULL)
  structure(list(modelName = "EEE", n = n, d = p, G = G, z = z,
                 parameters = parameters, loglik = loglik),
            WARNING = WARNING, returnCode = ret)
}

meEEE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be in the form of a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("data and z should have the same row dimension")
  K <- dimz[2]
  if(!is.null(Vinv)) {
    G <- K - 1
    if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE)
  } else G <- K
  if(all(is.na(z))) {
    WARNING <- "z is missing"
    if(warn) warning(WARNING)
    variance <- list(modelName = "EEE", d = p, G = G,
                     Sigma = matrix(as.double(NA), p, p),
                     cholSigma = matrix(as.double(NA), p, p))
    parameters <- list(pro = rep(NA, G), mean = matrix(as.double(NA), p, G),
                       variance = variance, Vinv = Vinv)
    return(structure(list(modelName = "EEE", prior = prior, n = n, d = p, G = G,
                          z = z, parameters = parameters, control = control,
                          loglik = NA),
                     WARNING = WARNING, returnCode = 9))
  }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  storage.mode(z) <- "double"
  if(is.null(prior)) {
    temp <- .Fortran("meeee",
                     as.logical(control$equalPro),
                     as.double(data),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(if(is.null(Vinv)) -1 else Vinv),
                     z,
                     as.integer(control$itmax[1]),
                     as.double(control$tol[1]),
                     as.double(control$eps),
                     double(p * G),
                     double(p * p),
                     double(K),
                     double(p),
                     PACKAGE = "mclust")[7:13]
  } else {
    priorParams <- do.call(prior$functionName,
                           c(list(data = data, G = G, modelName = "EEE"),
                             prior[names(prior) != "functionName"]))
    temp <- .Fortran("meeeep",
                     as.logical(control$equalPro),
                     as.double(data),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(if(is.null(Vinv)) -1 else Vinv),
                     as.double(priorParams$shrinkage),
as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(p * p), double(K), double(p), PACKAGE = "mclust")[c(11:17, 10)] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) cholSigma <- matrix(temp[[6]], p, p) pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) Sigma <- matrix( NA, p, p) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- logprior <- NA sigma <- array(NA, c(p, p, G)) Sigma <- matrix(as.double(NA), p, p) ret <- if(control$equalPro) -2 else -3 } else { Sigma <- unchol(cholSigma, upper = TRUE) sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- Sigma if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- dimnames(cholSigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEE", d = p, G = G, sigma = sigma, Sigma = Sigma, cholSigma = cholSigma) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EEE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), 
info = info, WARNING = WARNING, returnCode = ret)
}

mstepEEE <- function(data, z, prior = NULL, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be in the form of a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  ##
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("row dimension of z should equal data length")
  G <- dimz[2]
  if(all(is.na(z))) {
    WARNING <- "z is missing"
    if(warn) warning(WARNING)
    variance <- list(modelName = "EEE", d = p, G = G,
                     sigma = array(NA, c(p, p, G)),
                     Sigma = matrix(as.double(NA), p, p),
                     cholSigma = matrix(as.double(NA), p, p))
    parameters <- list(pro = rep(NA, G), mean = matrix(as.double(NA), p, G),
                       variance = variance)
    return(structure(list(modelName = "EEE", prior = prior, n = n, d = p, G = G,
                          z = z, parameters = parameters, loglik = NA),
                     WARNING = WARNING, returnCode = 9))
  }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  if(is.null(prior)) {
    temp <- .Fortran("mseee",
                     as.double(data),
                     as.double(z),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     double(p),
                     double(p * G),
                     double(p * p),
                     double(G),
                     PACKAGE = "mclust")[7:9]
  } else {
    priorParams <- do.call(prior$functionName,
                           c(list(data = data, G = G, modelName = "EEE"),
                             prior[names(prior) != "functionName"]))
    temp <- .Fortran("mseeep",
                     as.double(data),
                     as.double(z),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(priorParams$shrinkage),
                     as.double(priorParams$mean),
                     as.double(if(any(priorParams$scale != 0))
                       chol(priorParams$scale) else priorParams$scale),
                     as.double(priorParams$dof),
                     double(p),
                     double(p * G),
                     double(p * p),
                     double(G),
                     PACKAGE = "mclust")[11:13]
  }
  mu <- matrix(temp[[1]], p, G)
  dimnames(mu) <- list(NULL, as.character(1:G))
  cholSigma <- matrix(temp[[2]], p, p)
  pro <- temp[[3]]
  sigma <- array(0, c(p, p, G))
  Sigma <- unchol(cholSigma, upper = TRUE)
  for(k in 1:G) sigma[, , k] <- Sigma
  WARNING <- NULL
if(any(mu > signif(.Machine$double.xmax, 6))) {
    WARNING <- "cannot compute M-step"
    if(warn) warning(WARNING)
    mu[] <- sigma[] <- Sigma[] <- cholSigma[] <- NA
    ret <- -1
  } else ret <- 0
  dimnames(z) <- list(dimnames(data)[[1]], NULL)
  dimnames(mu) <- list(dimnames(data)[[2]], NULL)
  dimnames(Sigma) <- dimnames(cholSigma) <-
    list(dimnames(data)[[2]], dimnames(data)[[2]])
  dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL)
  variance <- list(modelName = "EEE", d = p, G = G,
                   sigma = sigma, Sigma = Sigma, cholSigma = cholSigma)
  parameters <- list(pro = pro, mean = mu, variance = variance)
  structure(list(modelName = "EEE", prior = prior, n = n, d = p, G = G,
                 z = z, parameters = parameters),
            WARNING = WARNING, returnCode = ret)
}

simEEE <- function(parameters, n, seed = NULL, ...)
{
  if(!is.null(seed)) set.seed(seed)
  mu <- as.matrix(parameters$mean)
  d <- nrow(mu)
  G <- ncol(mu)
  if(any(is.na(parameters[c("mean", "variance")])) ||
     any(is.null(parameters[c("mean", "variance")]))) {
    warning("parameters are missing")
    return(structure(matrix(as.double(NA), n, d + 1), modelName = "EEE"))
  }
  pro <- parameters$pro
  if(is.null(pro)) pro <- rep(1/G, G)
  clabels <- sample(1:G, size = n, replace = TRUE, prob = pro)
  ctabel <- tabulate(clabels, nbins = G)
  x <- matrix(0, n, d)
  if(is.null(cholSigma <- parameters$variance$cholSigma)) {
    if(is.null(Sigma <- parameters$variance$Sigma)) {
      stop("variance parameters must include either Sigma or cholSigma")
    }
    cholSigma <- chol(Sigma)
  }
  for(k in 1:G) {
    m <- ctabel[k]
    x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma,
                               MARGIN = 2, STATS = mu[, k], FUN = "+")
  }
  dimnames(x) <- list(NULL, paste0("x", 1:d))
  structure(cbind(group = clabels, x), modelName = "EEE")
}

cdensEEI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...)
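## Illustrative sketch (not run): cdensEEI() evaluates the n x G matrix of
## component densities for the EEI model, whose common covariance is diagonal,
## Sigma = scale * diag(shape). With hypothetical `X` and `par`:
##   dens <- cdensEEI(X, parameters = par, logarithm = TRUE)
## Requesting log-densities avoids underflow for points far from all means.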
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EEI", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("eseei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EEI", WARNING = WARNING, returnCode = ret) } cdensEII <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
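## Note: the EII model is spherical, Sigma_k = sigmasq * I for every component,
## so only parameters$variance$sigmasq is needed besides the means.
## Illustrative call (not run), with hypothetical `X` and `par`:
##   dens <- cdensEII(X, parameters = par)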
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EII", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EII", WARNING = WARNING, returnCode = 9)) } temp <- .Fortran("eseii", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EII", WARNING = WARNING, returnCode = ret) } emEEI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEEI(data, parameters = parameters, warn = warn)$z meEEI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEEI <- function(data, parameters, warn = NULL, ...) 
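## The E-step computes the responsibilities
##   z[i,k] = pro[k] * dens(x_i | k) / sum_j pro[j] * dens(x_i | j)
## with an extra column proportional to pro[K] * Vinv when a noise term is present.
## Illustrative call (not run), with hypothetical `X` and `par`:
##   es <- estepEEI(X, parameters = par)
##   es$z        # n x G (or n x (G+1) with noise) membership probabilities
##   es$loglik   # observed-data log-likelihood at `par`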
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EEI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("eseei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EEI", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEEI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
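## Illustrative sketch (not run): meEEI() runs EM from an initial membership
## matrix z, typically built with unmap() from a hard classification.
## Hypothetical example, assuming `X` is a numeric matrix:
##   cl  <- kmeans(X, centers = 2)$cluster
##   fit <- meEEI(X, z = unmap(cl))
##   fit$parameters$variance$scale   # common volume
##   fit$parameters$variance$shape   # common diagonal shape vector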
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) > 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEI", d = p, G = G, scale = NA, shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meeei", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEI"), prior[names(prior) != "functionName"])) temp <- .Fortran("meeeip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu 
<- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[6]] shape <- temp[[7]] pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) Sigma <- matrix(as.double(NA), p, p) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) Sigma <- matrix(as.double(NA), p, p) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) Sigma <- diag(scale * shape) for(k in 1:G) sigma[, , k] <- Sigma if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEI", d = p, G = G, sigma = sigma, Sigma = Sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEEI <- function(data, z, prior = NULL, warn = NULL, ...) 
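## The EEI M-step is available in closed form: component means are the
## z-weighted averages of the data, and the common diagonal covariance is
## reassembled as scale * diag(shape). Illustrative call (not run),
## with hypothetical `X` and membership matrix `z`:
##   par <- mstepEEI(X, z = z)$parameters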
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be in the form of a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("row dimension of z should equal data length")
  G <- dimz[2]
  if(all(is.na(z))) {
    WARNING <- "z is missing"
    if(warn) warning(WARNING)
    variance <- list(modelName = "EEI", d = p, G = G,
                     scale = NA, shape = rep(NA, p))
    parameters <- list(pro = rep(NA, G), mean = matrix(as.double(NA), p, G),
                       variance = variance)
    return(structure(list(modelName = "EEI", prior = prior, n = n, d = p, G = G,
                          z = z, parameters = parameters),
                     WARNING = WARNING, returnCode = 9))
  }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  if(is.null(prior)) {
    temp <- .Fortran("mseei",
                     as.double(data),
                     as.double(z),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     double(p * G),
                     double(1),
                     double(p),
                     double(G),
                     PACKAGE = "mclust")[6:9]
  } else {
    storage.mode(z) <- "double"
    priorParams <- do.call(prior$functionName,
                           c(list(data = data, G = G, modelName = "EEI"),
                             prior[names(prior) != "functionName"]))
    temp <- .Fortran("mseeip",
                     as.double(data),
                     as.double(z),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(priorParams$shrinkage),
                     as.double(priorParams$mean),
                     as.double(priorParams$scale),
                     as.double(priorParams$dof),
                     double(p * G),
                     double(1),
                     double(p),
                     double(G),
                     PACKAGE = "mclust")[10:13]
  }
  mu <- matrix(temp[[1]], p, G)
  dimnames(mu) <- list(NULL, as.character(1:G))
  scale <- temp[[2]]
  shape <- temp[[3]]
  pro <- temp[[4]]
  WARNING <- NULL
  if(any(c(shape, scale) > signif(.Machine$double.xmax, 6)) ||
     any(!c(scale, shape))) {
    WARNING <- "cannot compute M-step"
    if(warn) warning(WARNING)
    mu[] <- pro[] <- scale <- shape[] <- NA
    sigma <- array(NA, c(p, p, G))
    Sigma <- matrix(as.double(NA), p, p)
    ret <- -1
  } else {
    sigma <- array(0, c(p, p, G))
    Sigma <- diag(scale * shape)
    for(k in 1:G) sigma[, , k] <- Sigma
    ret <- 0
  }
dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEI", d = p, G = G, sigma = sigma, Sigma = Sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEEI <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EEI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) shape <- parameters$variance$shape if(length(shape) != d) stop("shape incompatible with mean") cholSigma <- diag(sqrt(parameters$variance$scale * shape)) for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EEI") } cdensE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
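## Illustrative sketch (not run): the univariate "E" model has a single common
## variance sigmasq, so each column of the returned matrix is, in effect,
##   dnorm(x, mean = mu[k], sd = sqrt(sigmasq))
## Hypothetical call, with `x` a numeric vector and `par` a parameter list:
##   dens <- cdensE(x, parameters = par)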
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) mu <- drop(parameters$mean) G <- length(mu) if(any(is.na(unlist(parameters[c("mean", "variance")]))) || any(is.null(parameters[c("mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "E", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(length(sigmasq) > 1) if(warn) warning("more than one sigma-squared given") if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "E", WARNING = WARNING, returnCode = 9)) } temp <- .Fortran("es1e", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(names(data),NULL) structure(z, logarithm = logarithm, modelName = "E", WARNING = WARNING, returnCode = ret) } emE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepE(data, parameters = parameters, warn = warn)$z meE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepE <- function(data, parameters, warn = NULL, ...) 
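## Illustrative sketch (not run): estepE() is the univariate analogue of the
## multivariate E-steps above. With hypothetical `x` (numeric vector) and `par`:
##   es <- estepE(x, parameters = par)
##   es$z        # memberships; an extra column is added when a noise term is present
##   es$loglik   # observed-data log-likelihood at `par`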
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- drop(parameters$mean) G <- length(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "E", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(length(sigmasq) > 1) if(warn) warning("more than one sigma-squared specified") if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "E", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("es1e", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data),NULL) structure(list(modelName = "E", n = n, d = 1, G = G, z = z, parameters = parameters, 
loglik = loglik),
            WARNING = WARNING, returnCode = ret)
}

cdensEEV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  if(is.null(dimdat) || length(dimdat) != 2)
    stop("data must be a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  mu <- as.matrix(parameters$mean)
  G <- ncol(mu)
  if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) ||
     any(is.null(parameters[c("pro", "mean", "variance")]))) {
    WARNING <- "parameters are missing"
    if(warn) warning(WARNING)
    z <- matrix(as.double(NA), n, G)
    dimnames(z) <- list(dimnames(data)[[1]], NULL)
    return(structure(z, logarithm = logarithm, modelName = "EEV",
                     WARNING = WARNING, returnCode = 9))
  }
  if(is.null(parameters$variance$scale) ||
     is.null(parameters$variance$shape) ||
     is.null(parameters$variance$orientation))
    stop("variance parameters are missing")
  temp <- .Fortran("eseev",
                   as.double(data),
                   as.double(mu),
                   as.double(parameters$variance$scale),
                   as.double(parameters$variance$shape),
                   as.double(aperm(parameters$variance$orientation, c(2, 1, 3))),
                   as.double(-1),
                   as.integer(n),
                   as.integer(p),
                   as.integer(G),
                   as.double(-1),
                   double(p),
                   double(p),
                   double(1),
                   double(n * G),
                   PACKAGE = "mclust")[13:14]
  loglik <- temp[[1]]
  z <- matrix(temp[[2]], n, G)
  WARNING <- NULL
  if(loglik > signif(.Machine$double.xmax, 6)) {
    WARNING <- "singular covariance"
    if(warn) warning(WARNING)
    z[] <- NA
    ret <- -1
  } else {
    if(!logarithm) z <- exp(z)
    ret <- 0
  }
  dimnames(z) <- list(dimnames(data)[[1]], NULL)
  structure(z, logarithm = logarithm, modelName = "EEV",
            WARNING = WARNING, returnCode = ret)
}

emEEV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...)
{
  z <- estepEEV(data, parameters = parameters, warn = warn)$z
  meEEV(data, z = z, prior = prior, control = control,
        Vinv = parameters$Vinv, warn = warn)
}

estepEEV <- function(data, parameters, warn = NULL, ...)
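## In the EEV model the component covariances share volume and shape but not
## orientation: Sigma_k = scale * O_k %*% diag(shape) %*% t(O_k). The E-step
## therefore needs scale, shape and orientation in parameters$variance.
## Illustrative call (not run), with hypothetical `X` and `par`:
##   es <- estepEEV(X, parameters = par)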
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EEV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("eseev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(p), double(1), double(n * K), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EEV", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEEV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
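## Illustrative sketch (not run): meEEV() iterates M- and E-steps until the
## relative log-likelihood change drops below control$tol[1] or control$itmax[1]
## iterations are reached. Hypothetical usage, with `X` and classification `cl`:
##   fit <- meEEV(X, z = unmap(cl), control = emControl(tol = 1e-6))
##   attr(fit, "returnCode")   # 0 = converged, 1 = iteration limit reached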
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be in the form of a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("data and z should have the same row dimension")
  K <- dimz[2]
  if(!is.null(Vinv)) {
    G <- K - 1
    if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE)
  } else G <- K
  if(all(is.na(z))) {
    WARNING <- "z is missing"
    if(warn) warning(WARNING)
    variance <- list(modelName = "EEV", d = p, G = G, scale = NA,
                     shape = rep(NA, p), orientation = array(NA, c(p, p, G)))
    parameters <- list(pro = rep(NA, G), mean = matrix(as.double(NA), p, G),
                       variance = variance, Vinv = Vinv)
    return(structure(list(modelName = "EEV", prior = prior, n = n, d = p, G = G,
                          z = z, parameters = parameters, control = control,
                          loglik = NA),
                     WARNING = WARNING, returnCode = 9))
  }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p))
  storage.mode(z) <- "double"
  if(is.null(prior)) {
    temp <- .Fortran("meeev",
                     as.logical(control$equalPro),
                     as.double(data),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(if(is.null(Vinv)) -1 else Vinv),
                     z,
                     as.integer(control$itmax[1]),
                     as.double(control$tol[1]),
                     as.double(control$eps),
                     as.integer(lwork),
                     double(p * G),
                     double(1),
                     double(p),
                     double(p * p * G),
                     double(K),
                     double(lwork),
                     double(p),
                     PACKAGE = "mclust")[7:16]
  } else {
    priorParams <- do.call(prior$functionName,
                           c(list(data = data, G = G, modelName = "EEV"),
                             prior[names(prior) != "functionName"]))
    temp <- .Fortran("meeevp",
                     as.logical(control$equalPro),
                     as.double(data),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(if(is.null(Vinv)) -1 else Vinv),
                     as.double(priorParams$shrinkage),
                     as.double(priorParams$mean),
                     as.double(if(any(priorParams$scale != 0))
                       chol(priorParams$scale) else priorParams$scale),
                     as.double(priorParams$dof),
                     z,
as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[11:20] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] lapackSVDinfo <- temp[[5]] mu <- matrix(temp[[6]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[7]] shape <- temp[[8]] O <- aperm(array(temp[[9]], c(p, p, G)),c(2,1,3)) pro <- temp[[10]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DGESVD" } z[] <- O[] <- shape[] <- NA scale <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) shape[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "a z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- scale * shapeO(shape, O, transpose = FALSE) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "EEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EEV", prior = prior, n = n, d = p, G = G, 
z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEEV <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEV", d = p, G = G, scale = NA, shape = rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } # shape <- sqrt(rev(sort(shape/exp(sum(log(shape))/p)))) if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop( "improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), G) if(is.null(prior)) { temp <- .Fortran("mseev", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(lwork), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[7:12] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEV"), prior[names(prior) != "functionName"])) temp <- .Fortran("mseevp", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), double(lwork), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[11:16] } lapackSVDinfo <- temp[[1]] mu 
<- matrix(temp[[2]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[3]] shape <- temp[[4]] O <- aperm( array(temp[[5]], c(p, p, G)), c(2,1,3)) pro <- temp[[6]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" if(warn) warning(WARNING) ret <- -4 } else { WARNING <- "input error for LAPACK DGESVD" if(warn) warning(WARNING) ret <- -5 } O[] <- shape[] <- scale <- NA sigma <- array(NA, c(p, p, G)) } else if(any(c(abs(scale), shape) > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- O[] <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- scale * shapeO(shape, O, transpose = FALSE) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEEV <- function(parameters, n, seed = NULL, ...) 
{ if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EEV")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) shape <- parameters$variance$shape if(length(shape) != d) stop("shape incompatible with mean") sss <- sqrt(parameters$variance$scale * shape) for(k in 1:G) { m <- ctabel[k] cholSigma <- t(parameters$variance$orientation[, , k]) * sss x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EEV") } emEII <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEII(data, parameters = parameters, warn = warn)$z meEII(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEII <- function(data, parameters, warn = NULL, ...) 
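## Usage sketch for simEEV() (hypothetical parameter values, made up purely
## for illustration; assumes the package is loaded): simulate n points from a
## 2-component, 2-dimensional EEV mixture. Column 1 of the result holds the
## group labels, the remaining columns the simulated coordinates.
#   par <- list(pro = c(0.5, 0.5),
#               mean = cbind(c(0, 0), c(3, 3)),
#               variance = list(modelName = "EEV", d = 2, G = 2,
#                               scale = 1, shape = c(2, 0.5),
#                               orientation = array(diag(2), c(2, 2, 2))))
#   x <- simEEV(parameters = par, n = 200, seed = 1)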
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) p <- ncol(data) n <- nrow(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("eseii", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EII", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING 
= WARNING, returnCode = ret) } meEII <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) > 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] # number of groups if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EII", d = p, G = G, sigmasq = NA) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meeii", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(K), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EII"), prior[names(prior) != "functionName"])) temp <- .Fortran("meeiip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), 
double(K), PACKAGE = "mclust")[c(11:17, 10)] } mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] sigmasq <- temp[[6]] Sigma <- diag(rep(sigmasq, p)) pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6) || sigmasq <= max(control$eps,0)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- Sigma if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EII", d = p, G = G, sigma = sigma, Sigma = Sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance = variance, Vinv=Vinv) structure(list(modelName = "EII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEII <- function(data, z, prior = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EII", d = p, G = G, sigmasq = NA) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mseii", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(1), double(G), PACKAGE = "mclust")[6:8] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EII"), prior[names(prior) !="functionName"])) temp <- .Fortran("mseiip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(1), double(G), PACKAGE = "mclust")[10:12] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) sigmasq <- temp[[2]] pro <- temp[[3]] sigma <- array(0, c(p, p, G)) Sigma <- diag(rep(sigmasq, p)) for(k in 1:G) sigma[, , k] <- Sigma WARNING <- NULL if(sigmasq > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- 
list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EII", d = p, G = G, sigma = sigma, Sigma = Sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance = variance) structure(list(modelName = "EII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEII <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d), modelName = "EII")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) sigmasq <- parameters$variance$sigmasq cholSigma <- diag(rep(sqrt(sigmasq), d)) for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EII") } meE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
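## Usage sketch: one EM iteration for the EII model can be written by
## alternating mstepEII() and estepEII() (hypothetical data matrix 'x' and
## initial posterior membership matrix 'z', e.g. from unmap() applied to a
## starting classification):
#   ms <- mstepEII(x, z = z)
#   es <- estepEII(x, parameters = ms$parameters)
#   z  <- es$z   # updated membership probabilities; repeat until convergence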
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be 1 dimensional") data <- as.vector(data) n <- length(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal length of data") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if (Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "E", d = 1, G = G, sigmasq = NA) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="E", prior=prior, n=n, d=1, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("me1e", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(1), double(K), PACKAGE = "mclust")[6:12] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "E"), prior[names(prior) != "functionName"])) temp <- .Fortran("me1ep", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(1), double(K), PACKAGE = "mclust")[c(10:16, 9)] } mu <- temp[[5]] names(mu) <- as.character(1:G) z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] sigmasq <- temp[[6]] pro <- temp[[7]] ## log post <- temp[[8]] WARNING <- NULL if(loglik > 
signif(.Machine$double.xmax, 6) || sigmasq <= max(control$eps,0)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq <- z[] <- loglik <- logprior <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA ret <- if(control$equalPro) -2 else -3 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 info <- c(iterations = its, error = err) dimnames(z) <- list(names(data), NULL) variance <- list(modelName = "E", d = 1, G = G, sigmasq = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "E", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepE <- function(data, z, prior = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") # number of groups G <- dimz[2] ## if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName="E", d=1, G=G, sigmasq=NA) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance) return(structure(list(modelName="E", prior=prior, n=n, d=1, G=G, z = z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("ms1e", as.double(data), as.double(z), as.integer(n), as.integer(G), double(G), double(1), double(G), PACKAGE = "mclust")[5:7] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "E"), prior[names(prior) != "functionName"])) storage.mode(z) <- "double" temp <- .Fortran("ms1ep", as.double(data), z, as.integer(n), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(G), double(1), double(G), PACKAGE = "mclust")[9:11] } mu <- temp[[1]] names(mu) <- as.character(1:G) sigmasq <- temp[[2]] pro <- temp[[3]] WARNING <- NULL if(sigmasq > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) pro[] <- mu[] <- sigmasq <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data), NULL) variance <- list(modelName = "E", d = 1, G = G, sigmasq = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "E", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simE <- function(parameters, n, seed = NULL, ...) 
{ if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, 2), modelName = "E")) } if(!is.null(seed)) set.seed(seed) mu <- parameters$mean G <- length(mu) pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- rep(0, n) sd <- sqrt(parameters$variance$sigmasq) for(k in 1:G) { x[clabels == k] <- mu[k] + rnorm(ctabel[k], sd = sd) } structure(cbind(group = clabels, "1" = x), modelName = "E") } cdensEVI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- parameters$mean G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EVI", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esevi", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, 
modelName = "EVI", WARNING = WARNING, returnCode = ret) } emEVI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEVI(data, parameters = parameters, warn = warn)$z meEVI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEVI <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EVI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esevi", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EVI", n = n, d = 
p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEVI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) > 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if (Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVI", d = p, G = G, scale = NA, shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meevi", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p * G), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVI"), prior[names(prior) != "functionName"])) temp <- .Fortran("meevip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, 
as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p * G), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- matrix(temp[[7]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { if(warn) warning("z column sum fell below threshold") WARNING <- "z column sum fell below threshold" } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(scale * shape, 2, diag), c(p, p, G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEVI <- function(data, z, prior = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVI", d = p, G = G, scale = NA, shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("msevi", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(1), double(p * G), double(G), PACKAGE = "mclust")[6:9] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVI"), prior[names( prior) != "functionName"])) temp <- .Fortran("msevip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(1), double(p * G), double(G), PACKAGE = "mclust")[10:13] } mu <- matrix(temp[[1]], p, G) scale <- temp[[2]] shape <- matrix(temp[[3]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[4]] WARNING <- NULL if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any(!c( scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- array(apply(scale * shape, 2, diag), c(p, p, G)) ret <- 0 } dimnames(z) <- 
list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEVI <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EVI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) shape <- as.matrix(parameters$variance$shape) if(!all(dim(shape) == dim(mu))) stop("shape incompatible with mean") sss <- sqrt(parameters$variance$scale * shape) for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(sss[, k]), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EVI") } mclust1Dplot <- function(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "density", "errors", "uncertainty"), symbols = NULL, colors = NULL, ngrid = length(data), xlab = NULL, xlim = NULL, CEX = 1, main = FALSE, ...) 
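## Example call for mclust1Dplot() (hypothetical one-dimensional data 'x' and
## a fitted model 'fit', e.g. from Mclust(x, modelNames = "E")):
#   mclust1Dplot(x, parameters = fit$parameters, z = fit$z,
#                what = "classification")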
{
  grid1 <- function(n, range = c(0, 1), edge = TRUE)
  {
    if(any(n < 0 | round(n) != n)) stop("n must be a nonnegative integer")
    G <- rep(0, n)
    if(edge) {
      G <- seq(from = min(range), to = max(range), by = abs(diff(range))/(n - 1))
    } else {
      lj <- abs(diff(range))
      incr <- lj/(2 * n)
      G <- seq(from = min(range) + incr, to = max(range) - incr, by = 2 * incr)
    }
    G
  }
  densNuncer <- function(data, parameters)
  {
    cden <- cdensV(data = data, parameters = parameters)
    if(parameters$variance$G != 1) {
      z <- sweep(cden, MARGIN = 2, FUN = "*", STATS = parameters$pro)
      den <- apply(z, 1, sum)
      z <- sweep(z, MARGIN = 1, FUN = "/", STATS = den)
      data.frame(density = den, uncertainty = 1 - apply(z, 1, max))
    } else {
      data.frame(density = cden, uncertainty = rep(NA, length(cden)))
    }
  }
  main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main)
  if(is.null(xlab)) xlab <- " "
  p <- ncol(as.matrix(data))
  if(p != 1) stop("for one-dimensional data only")
  data <- as.vector(data)
  n <- length(data)
  if(is.null(classification) && !is.null(z)) classification <- map(z)
  if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max)
  if(!is.null(parameters)) {
    mu <- parameters$mean
    L <- ncol(mu)
    sigmasq <- parameters$variance$sigmasq
    haveParams <- !is.null(mu) && !is.null(sigmasq) && !any(is.na(mu)) && !any(is.na(sigmasq))
  } else haveParams <- FALSE
  if(is.null(xlim)) xlim <- range(data)
  if(haveParams) {
    G <- length(mu)
    if((l <- length(sigmasq)) == 1) {
      sigmasq <- rep(sigmasq, G)
    } else if(l != G) {
      haveParams <- FALSE
      warning("mu and sigma are incompatible")
    }
  }
  if(!is.null(truth)) {
    if(is.null(classification)) {
      classification <- truth
      truth <- NULL
    } else {
      if(length(unique(truth)) != length(unique(classification)))
        truth <- NULL
      else truth <- as.character(truth)
    }
  }
  if(!is.null(classification)) {
    classification <- as.character(classification)
    U <- sort(unique(classification))
    L <- length(U)
    if(is.null(symbols)) {
      symbols <- rep("|", L)
    } else if(length(symbols) == 1) {
symbols <- rep(symbols, L) } else if(length(symbols) < L) { warning("more symbols needed to show classification") symbols <- rep("|", L) } if(is.null(colors)) { colors <- mclust.options("classPlotColors")[1:L] } else if(length(colors) == 1) { colors <- rep(colors, L) } else if(length(colors) < L) { warning("more colors needed to show classification") colors <- rep("black", L) } } if (length(what) > 1) what <- what[1] choices <- c("classification", "density", "errors", "uncertainty") m <- charmatch(what, choices, nomatch = 0) if (m) { type <- choices[m] bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "errors" && (is.null(classification) || is.null(truth))) if (bad) warning("insufficient input for specified plot") } else { bad <- !m warning("what improperly specified") } if (bad) what <- "bad" M <- L switch(EXPR = what, "classification" = { plot(data, seq(from = 0, to = M, length = n), type = "n", xlab = xlab, ylab = "", xlim = xlim, ylim = grDevices::extendrange(r = c(0,M), f = 0.1), yaxt = "n", main = "", ...) axis(side = 2, at = 0:M, labels = c("", sort(unique(classification)))) if(main) title("Classification") for(k in 1:L) { I <- classification == U[k] points(data[I], rep(0, length(data[I])), pch = symbols[k], cex = CEX) points(data[I], rep(k, length(data[I])), pch = symbols[k], col = colors[k], cex = CEX) } }, "errors" = { ERRORS <- classError(classification, truth)$misclassified plot(data, seq(from = 0, to = M, length = n), type = "n", xlab = xlab, ylab = "", xlim = xlim, yaxt = "n", main = "", ...) 
axis(side = 2, at = 0:M, labels = c("", unique(classification)))
    if(main) title("Classification Errors")
    good <- rep(TRUE, length(classification))
    good[ERRORS] <- FALSE
    sym <- "|"
    for(k in 1:L) {
      K <- classification == U[k]
      I <- K & good
      if(any(I)) {
        l <- sum(as.numeric(I))
        points(data[I], rep(0, l), pch = sym, col = colors[k], cex = CEX)
      }
      I <- K & !good
      if(any(I)) {
        l <- sum(as.numeric(I))
        points(data[I], rep(k, l), pch = sym, col = colors[k], cex = CEX)
      }
    }
  },
  "uncertainty" = {
    u <- (uncertainty - min(uncertainty))/(max(uncertainty) - min(uncertainty) + sqrt(.Machine$double.eps))
    b <- bubble(u, cex = CEX*c(0.3, 2), alpha = c(0.3, 1))
    cl <- sapply(classification, function(cl) which(cl == U))
    plot(data, uncertainty, type = "h", xlab = xlab, ylab = "Uncertainty", xlim = xlim, ylim = c(0,1), main = "", col = mapply(adjustcolor, col = colors[cl], alpha.f = b$alpha), ...)
    rug(data, lwd = 1, col = adjustcolor(par("fg"), alpha.f = 0.8))
    if(main) title("Uncertainty")
  },
  "density" = {
    if(is.null(parameters$pro) && parameters$variance$G != 1) stop("mixing proportions missing")
    x <- grid1(n = ngrid, range = xlim, edge = TRUE)
    lx <- length(x)
    Z <- densNuncer(data = x, parameters = parameters)
    plot(x, Z$density, xlab = xlab, ylab = "density", xlim = xlim, type = "l", main = "", ...)
    if(main) title("Density")
  },
  {
    plot(data, rep(0, n), type = "n", xlab = "", ylab = "", xlim = xlim, main = "", ...)
points(data, rep(0, n), pch = "|", cex = CEX) if(main) title("Point Plot") # return(invisible()) } ) invisible() } mclust2Dplot <- function(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "uncertainty", "errors"), addEllipses = TRUE, symbols = NULL, colors = NULL, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, scale = FALSE, CEX = 1, PCH = ".", main = FALSE, swapAxes = FALSE, ...) { if(dim(data)[2] != 2) stop("data must be two dimensional") if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if(!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !any(is.na(mu)) && !any(is.na(sigma)) } else haveParams <- FALSE main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) if(is.null(xlim)) xlim <- range(data[, 1]) if(is.null(ylim)) ylim <- range(data[, 2]) if(scale) { par(pty = "s") d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2.) 
} else {
      xlim <- c(xlim[1] + d/2, xlim[2] - d/2)
    }
  }
  dnames <- dimnames(data)[[2]]
  if(is.null(xlab)) {
    xlab <- if(is.null(dnames)) "" else dnames[1]
  }
  if(is.null(ylab)) {
    ylab <- if(is.null(dnames)) "" else dnames[2]
  }
  if(haveParams) {
    G <- ncol(mu)
    dimpar <- dim(sigma)
    if(length(dimpar) != 3) {
      haveParams <- FALSE
      warning("covariance must be a 3D array")
    } else if(G != dimpar[3]) {
      haveParams <- FALSE
      warning("means and variance parameters are incompatible")
    }
    mu <- array(mu, c(2, G))
    sigma <- array(sigma, c(2, 2, G))
  }
  if(swapAxes) {
    if(haveParams) {
      # drop = FALSE keeps matrix/array dimensions when G = 1
      mu <- mu[2:1, , drop = FALSE]
      sigma <- sigma[2:1, 2:1, , drop = FALSE]
    }
    data <- data[, 2:1]
  }
  if(!is.null(truth)) {
    if(is.null(classification)) {
      classification <- truth
      truth <- NULL
    } else {
      if(length(unique(truth)) != length(unique(classification)))
        truth <- NULL
      else truth <- as.character(truth)
    }
  }
  if(charmatch("classification", what, nomatch = 0) && is.null(classification) && !is.null(z)) {
    classification <- map(z)
  }
  if(!is.null(classification)) {
    classification <- as.character(classification)
    U <- sort(unique(classification))
    L <- length(U)
    noise <- (U[1] == "0")
    if(is.null(symbols)) {
      if(L <= length(mclust.options("classPlotSymbols"))) {
        symbols <- mclust.options("classPlotSymbols")[1:L]
        if(noise) {
          symbols <- c(16, symbols)[1:L]
        }
      } else if(L <= 9) {
        symbols <- as.character(1:9)
      } else if(L <= 26) {
        symbols <- LETTERS
      }
    }
    if(is.null(colors)) {
      if(L <= length(mclust.options("classPlotColors"))) {
        colors <- mclust.options("classPlotColors")[1:L]
        if(noise) {
          colors <- unique(c("black", colors))[1:L]
        }
      }
    } else if(length(colors) == 1) colors <- rep(colors, L)
    if(length(symbols) < L) {
      warning("more symbols needed to show classification")
      symbols <- rep(16, L)
    }
    if(length(colors) < L) {
      warning("more colors needed to show classification")
      colors <- rep("black", L)
    }
  }
  if(length(what) > 1) what <- what[1]
  choices <- c("classification", "errors", "uncertainty")
  m <- charmatch(what, choices, nomatch = 0)
  if(m) {
    what <- choices[m]
    bad <- what ==
"classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "errors" && (is.null(classification) || is.null(truth))) if(bad) warning("insufficient input for specified plot") } else { bad <- !m warning("what improperly specified") } if(bad) what <- "bad" switch(EXPR = what, "classification" = { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) title("Classification") for(k in 1:L) { I <- classification == U[k] points(data[I, 1], data[I, 2], pch = symbols[k], col = colors[k], cex = if(U[k] == "0") CEX/2 else CEX) } }, "errors" = { ERRORS <- classError(classification, truth)$misclassified plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) title("Classification Errors") CLASSES <- unique(as.character(truth)) symOpen <- c(2, 0, 1, 5) symFill <- c(17, 15, 16, 18) good <- rep(TRUE,length(classification)) good[ERRORS] <- FALSE if(L > 4) { points(data[good, 1], data[good, 2], pch = 1, col = colors, cex = CEX) points(data[!good, 1], data[!good, 2], pch = 16, cex = CEX) } else { for(k in 1:L) { K <- truth == CLASSES[k] points(data[K, 1], data[K, 2], pch = symOpen[k], col = colors[k], cex = CEX) if(any(I <- (K & !good))) { points(data[I, 1], data[I, 2], pch = symFill[k], cex = CEX) } } } }, "uncertainty" = { u <- (uncertainty - min(uncertainty))/ (max(uncertainty) - min(uncertainty) + sqrt(.Machine$double.eps)) b <- bubble(u, cex = CEX*c(0.3, 2), alpha = c(0.3, 0.9)) cl <- sapply(classification, function(cl) which(cl == U)) plot(data[, 1], data[, 2], pch = 19, xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", cex = b$cex, col = mapply(adjustcolor, col = colors[cl], alpha.f = b$alpha), ...) if(main) title("Uncertainty") }, { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) 
if(main) title("Point Plot") points(data[, 1], data[, 2], pch = PCH, cex = CEX) } ) if(haveParams && addEllipses) { ## plot ellipsoids for(k in 1:G) mvn2plot(mu = mu[,k], sigma = sigma[,,k], k = 15) } invisible() } mvn2plot <- function(mu, sigma, k = 15, alone = FALSE, col = rep("grey30",3), pch = 8, lty = c(1,2), lwd = c(1,1)) { p <- length(mu) if (p != 2) stop("only two-dimensional case is available") if (any(unique(dim(sigma)) != p)) stop("mu and sigma are incompatible") ev <- eigen(sigma, symmetric = TRUE) s <- sqrt(rev(sort(ev$values))) V <- t(ev$vectors[, rev(order(ev$values))]) theta <- (0:k) * (pi/(2 * k)) x <- s[1] * cos(theta) y <- s[2] * sin(theta) xy <- cbind(c(x, -x, -x, x), c(y, y, -y, -y)) xy <- xy %*% V xy <- sweep(xy, MARGIN = 2, STATS = mu, FUN = "+") if(alone) { xymin <- apply(xy, 2, FUN = "min") xymax <- apply(xy, 2, FUN = "max") r <- ceiling(max(xymax - xymin)/2) xymid <- (xymin + xymax)/2 plot(xy[, 1], xy[, 2], type = "n", xlab = "x", ylab = "y", xlim = c(-r, r) + xymid[1], ylim = c(-r, r) + xymid[2]) } l <- length(x) i <- 1:l for(k in 1:4) { lines(xy[i,], col = col[1], lty = lty[1], lwd = lwd[1]) i <- i + l } x <- s[1] y <- s[2] xy <- cbind(c(x, -x, 0, 0), c(0, 0, y, -y)) xy <- xy %*% V xy <- sweep(xy, MARGIN = 2, STATS = mu, FUN = "+") lines(xy[1:2,], col = col[2], lty = lty[2], lwd = lwd[2]) lines(xy[3:4,], col = col[2], lty = lty[2], lwd = lwd[2]) points(mu[1], mu[2], col = col[3], pch = pch) invisible() } # new version plot.Mclust <- function(x, what = c("BIC", "classification", "uncertainty", "density"), dimens = NULL, xlab = NULL, ylab = NULL, ylim = NULL, addEllipses = TRUE, main = TRUE, ...) { object <- x # Argh. 
Really want to use object anyway if(!inherits(object, "Mclust")) stop("object not of class \"Mclust\"") data <- object$data p <- ncol(data) if(p == 1) colnames(data) <- deparse(x$call$data) if(is.null(dimens)) dimens <- seq(p) else dimens <- dimens[dimens <= p] d <- length(dimens) main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) #################################################################### what <- match.arg(what, several.ok = TRUE) oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) plot.Mclust.bic <- function(...) plot.mclustBIC(object$BIC, xlab = xlab, ylim = ylim, ...) plot.Mclust.classification <- function(...) { if(p == 1) { mclust1Dplot(data = data, # parameters = object$parameters, what = "classification", classification = object$classification, z = object$z, xlab = if(is.null(xlab)) colnames(data)[dimens] else xlab, main = main, ...) } if(p == 2) { mclust2Dplot(data = data, what = "classification", classification = object$classification, parameters = if(addEllipses) object$parameters else NULL, xlab = if(is.null(xlab)) colnames(data)[1] else xlab, ylab = if(is.null(ylab)) colnames(data)[2] else ylab, main = main, ...) } if(p > 2) { if(d == 2) { coordProj(data = data, what = "classification", parameters = object$parameters, classification = object$classification, addEllipses = addEllipses, dimens = dimens, main = main, ...) } else { on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(c(0.3,0.3/2),each=2), oma = c(4, 4, 4, 4)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(data[,c(j,i)],type="n",xlab="",ylab="",axes=FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), labels = colnames(data[,dimens])[i], cex=1.5, adj=0.5) box() } else { coordProj(data = data, what = "classification", parameters = object$parameters, classification = object$classification, addEllipses = addEllipses, dimens = dimens[c(j,i)], main = FALSE, xaxt = "n", yaxt = "n", ...) 
} if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } } } } plot.Mclust.uncertainty <- function(...) { if(p == 1) { mclust1Dplot(data = data, parameters = object$parameters, z = object$z, what = "uncertainty", xlab = if(is.null(xlab)) colnames(data)[dimens] else xlab, main = main, ...) } if(p == 2) { mclust2Dplot(data = data, what = "uncertainty", parameters = object$parameters, uncertainty = object$uncertainty, classification = object$classification, xlab = if(is.null(xlab)) colnames(data)[1] else xlab, ylab = if(is.null(ylab)) colnames(data)[2] else ylab, addEllipses = addEllipses, main = main, ...) } if(p > 2) { if(d == 2) { coordProj(data = data, what = "uncertainty", parameters = object$parameters, # z = object$z, uncertainty = object$uncertainty, classification = object$classification, dimens = dimens, main = main, addEllipses = addEllipses, ...) } else { on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(c(0.3,0.3/2),each=2), oma = c(4, 4, 4, 4)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(0,0,type="n",xlab="",ylab="",axes=FALSE) text(0,0, colnames(data[,dimens])[i], cex=1.5, adj=0.5) box() } else { coordProj(data = data, what = "uncertainty", parameters = object$parameters, uncertainty = object$uncertainty, classification = object$classification, dimens = dimens[c(j,i)], main = FALSE, addEllipses = addEllipses, xaxt = "n", yaxt = "n", ...) } if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } } } } plot.Mclust.density <- function(...) { if(p == 1) { mclust1Dplot(data = data, parameters = object$parameters, z = object$z, what = "density", xlab = if(is.null(xlab)) colnames(data)[dimens] else xlab, main = main, ...) 
} if(p == 2) { surfacePlot(data = data, parameters = object$parameters, what = "density", nlevels = 11, transformation = "log", xlab = if(is.null(xlab)) colnames(data)[1] else xlab, ylab = if(is.null(ylab)) colnames(data)[2] else ylab, main = main, ...) } if(p > 2) { objdens <- object objdens$varname <- colnames(data) objdens$range <- if(objdens$d > 1) apply(data, 2, range) else range(data) plotDensityMclustd(objdens, nlevels = 11, ...) } } if(interactive() & length(what) > 1) { title <- "Model-based clustering plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "BIC") plot.Mclust.bic(...) if(what[choice] == "classification") plot.Mclust.classification(...) if(what[choice] == "uncertainty") plot.Mclust.uncertainty(...) if(what[choice] == "density") plot.Mclust.density(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "BIC")) plot.Mclust.bic(...) if(any(what == "classification")) plot.Mclust.classification(...) if(any(what == "uncertainty")) plot.Mclust.uncertainty(...) if(any(what == "density")) plot.Mclust.density(...) } invisible() } plot.mclustBIC <- function(x, G = NULL, modelNames = NULL, symbols = NULL, colors = NULL, xlab = NULL, ylab = "BIC", ylim = NULL, legendArgs = list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01), ...) 
{
  if(is.null(xlab)) xlab <- "Number of components"
  fill <- FALSE
  subset <- !is.null(attr(x, "initialization")$subset)
  noise <- !is.null(attr(x, "initialization")$noise)
  ret <- attr(x, "returnCodes") == -3
  legendArgsDefault <- list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01)
  legendArgs <- append(as.list(legendArgs), legendArgsDefault)
  legendArgs <- legendArgs[!duplicated(names(legendArgs))]
  n <- ncol(x)
  dnx <- dimnames(x)
  x <- matrix(as.vector(x), ncol = n)
  dimnames(x) <- dnx
  if(is.null(modelNames)) modelNames <- dimnames(x)[[2]]
  if(is.null(G)) G <- as.numeric(dimnames(x)[[1]])
  if(is.null(symbols)) {
    colNames <- dimnames(x)[[2]]
    m <- length(modelNames)
    if(is.null(colNames)) {
      symbols <- if(m > 9) LETTERS[1:m] else as.character(1:m)
      names(symbols) <- modelNames
    } else {
      symbols <- mclust.options("bicPlotSymbols")[modelNames]
    }
  }
  if(is.null(colors)) {
    colNames <- dimnames(x)[[2]]
    m <- length(modelNames)
    if(is.null(colNames)) {
      colors <- 1:m
      names(colors) <- modelNames
    } else {
      colors <- mclust.options("bicPlotColors")[modelNames]
    }
  }
  x <- x[, modelNames, drop = FALSE]
  if(is.null(ylim)) ylim <- range(as.vector(x[!is.na(x)]))
  matplot(as.numeric(dnx[[1]]), x, type = "b", xaxt = "n", xlim = range(G), ylim = ylim, pch = symbols, col = colors, lty = 1, xlab = xlab, ylab = ylab, main = "")
  axis(side = 1, at = as.numeric(dnx[[1]]))
  if(!is.null(legendArgs)) {
    do.call("legend", c(list(legend = modelNames, col = colors, pch = symbols), legendArgs))
  }
  invisible(symbols)
}

sigma2decomp <- function(sigma, G = NULL, tol = sqrt(.Machine$double.eps), ...)
{
  dimSigma <- dim(sigma)
  if(is.null(dimSigma)) stop("sigma improperly specified")
  d <- dimSigma[1]
  if(dimSigma[2] != d) stop("sigma improperly specified")
  l <- length(dimSigma)
  if(l < 2 || l > 3) stop("sigma improperly specified")
  if(is.null(G)) {
    if(l == 2) {
      G <- 1
      sigma <- array(sigma, c(dimSigma, 1))
    } else {
      G <- dimSigma[3]
    }
  } else {
    if(l == 3 && G != dimSigma[3]) stop("sigma and G are incompatible")
    if(l == 2 && G != 1) sigma <- array(sigma, c(d, d, G))
  }
  # angle between subspaces
  subspace <- function(A, B)
  {
    for(k in 1:ncol(A)) {
      B <- B - A[, k, drop = FALSE] %*% (t(A[, k, drop = FALSE]) %*% B)
    }
    norm(B, type = "2")
  }
  # check equality of values
  uniq <- function(x) { abs(max(x) - min(x)) < tol }
  decomp <- list(d = d, G = G, scale = rep(0, G), shape = matrix(0, d, G), orientation = array(0, c(d, d, G)))
  for(k in 1:G) {
    ev <- eigen(sigma[,,k], symmetric = TRUE)
    temp <- log(ev$values); temp[!is.finite(temp)] <- 0
    logScale <- sum(temp)/d
    decomp$scale[k] <- exp(logScale)
    decomp$shape[,k] <- exp(temp - logScale)
    decomp$orientation[,,k] <- ev$vectors
  }
  scaleName <- "V"
  shapeName <- "V"
  orientName <- "V"
  # check scale/volume
  if(uniq(decomp$scale)) {
    decomp$scale <- decomp$scale[1]
    scaleName <- "E"
  }
  # check shape
  if(all(apply(decomp$shape, 1, uniq))) {
    decomp$shape <- decomp$shape[, 1]
    if(all(uniq(decomp$shape))) {
      shapeName <- "I"
      decomp$shape <- rep(1, d)
    } else {
      shapeName <- "E"
    }
  }
  # check orientation
  D <- decomp$orientation
  eqOrientation <- all(apply(D, 3, function(d) any(apply(d, 2, function(x) cor(D[,,1], x)^2) > (1-tol))))
  if(eqOrientation) {
    decomp$orientation <- decomp$orientation[,,1]
    orientName <- "E"
    if(sum(abs(svd(decomp$orientation)$v) - diag(d)) < tol) {
      orientName <- "I"
      # decomp$orientation <- NULL
    }
  }
  decomp$modelName <- paste0(scaleName, shapeName, orientName)
  decomp$sigma <- sigma
  orderedNames <- c("sigma", "d", "modelName", "G", "scale", "shape", "orientation")
  return(decomp[orderedNames])
}

decomp2sigma <- function(d, G, scale, shape, orientation = NULL, ...)
{
  nod <- missing(d)
  noG <- missing(G)
  lenScale <- length(scale)
  if(lenScale != 1) {
    if(!noG && G != lenScale) stop("scale incompatible with G")
    G <- lenScale
  }
  shape <- as.matrix(shape)
  p <- nrow(shape)
  if(!nod && p != d) stop("shape incompatible with d")
  d <- p
  g <- ncol(shape)
  if(g != 1) {
    if(!is.null(G) && g != G) stop("shape incompatible with scale")
    if(!noG && g != G) stop("shape incompatible with G")
    G <- g
  }
  if(is.null(orientation)) {
    orientName <- "I"
    if(is.null(G)) {
      G <- if(noG) 1 else G
    }
    orientation <- array(diag(d), c(d, d, G))
  } else {
    dimO <- dim(orientation)
    l <- length(dimO)
    if(is.null(dimO) || l < 2 || l > 3 || dimO[1] != dimO[2]) stop("orientation improperly specified")
    if(dimO[1] != d) stop("orientation incompatible with shape")
    if(l == 3) {
      orientName <- "V"
      if(is.null(G)) {
        if(!noG && dimO[3] != G) stop("orientation incompatible with G")
        G <- dimO[3]
      } else if(G != dimO[3]) stop("orientation incompatible with scale and/or shape")
    } else {
      orientName <- "E"
      if(is.null(G)) {
        G <- if(noG) 1 else G
      }
      orientation <- array(orientation, c(d, d, G))
    }
  }
  if(G == 1) {
    scaleName <- shapeName <- "X"
  } else {
    scaleName <- if(lenScale == 1) "E" else "V"
    shapeName <- if(g == 1) "E" else "V"
    scale <- rep(scale, G)
    shape <- matrix(shape, nrow = d, ncol = G)
  }
  sigma <- array(0, c(d, d, G))
  for(k in 1:G) {
    sigma[,,k] <- crossprod(t(orientation[,,k]) * sqrt(scale[k] * shape[,k]))
  }
  structure(sigma, modelName = paste0(scaleName, shapeName, orientName))
}

grid1 <- function(n, range = c(0, 1), edge = TRUE)
{
  if(any(n < 0 | round(n) != n)) stop("n must be a nonnegative integer")
  G <- rep(0, n)
  if(edge) {
    G <- seq(from = min(range), to = max(range), by = abs(diff(range))/(n - 1))
  } else {
    lj <- abs(diff(range))
    incr <- lj/(2 * n)
    G <- seq(from = min(range) + incr, to = max(range) - incr, by = 2 * incr)
  }
  G
}

grid2 <- function(x,
y) { lx <- length(x) ly <- length(y) xy <- matrix(0, nrow = lx * ly, ncol = 2) l <- 0 for (j in 1:ly) { for (i in 1:lx) { l <- l + 1 xy[l,] <- c(x[i], y[j]) } } xy } hypvol <- function (data, reciprocal = FALSE) { dimdat <- dim(data) oneD <- (is.null(dimdat) || length(dimdat[dimdat > 1]) == 1) if (oneD) { n <- length(as.vector(data)) ans <- if (reciprocal) 1/diff(range(data)) else diff(range(data)) return(ans) } if (length(dimdat) != 2) stop("data must be a vector or a matrix") data <- as.matrix(data) sumlogdifcol <- function(x) sum(log(apply(x, 2, function(colm) diff(range(colm))))) bdvolog <- sumlogdifcol(data) pcvolog <- sumlogdifcol(princomp(data)$scores) volog <- min(bdvolog, pcvolog) if(reciprocal) { minlog <- log(.Machine$double.xmin) if (-volog < minlog) { warning("hypervolume smaller than smallest machine representable positive number") ans <- 0 } else ans <- exp(-volog) } else { maxlog <- log(.Machine$double.xmax) if (volog > maxlog) { warning("hypervolume greater than largest machine representable number") ans <- Inf } else ans <- exp(volog) } return(ans) } "[.mclustBIC" <- function (x, i, j, drop = FALSE) { ATTR <- attributes(x)[c("G", "modelNames", "prior", "control", "initialization", "Vinv", "warn", "n", "d", "oneD", "returnCodes", "class")] oldClass(x) <- NULL x <- NextMethod("[") if (is.null(dim(x))) return(x) ATTR$G <- as.numeric(dimnames(x)[[1]]) ATTR$modelNames <- dimnames(x)[[2]] ATTR$returnCodes <- ATTR$returnCodes[dimnames(x)[[1]],dimnames(x)[[2]], drop=FALSE] do.call("structure", c(list(.Data = x), ATTR)) } bic <- function(modelName, loglik, n, d, G, noise = FALSE, equalPro = FALSE, ...) 
{ nparams <- nMclustParams(modelName = modelName, d = d, G = G, noise = noise, equalPro = equalPro) 2 * loglik - nparams * log(n) } checkModelName <- function(modelName) { switch(EXPR = modelName, "X" = , "E" = , "V" = , "XII" = , "XXI" = , "XXX" = , "EII" = , "VII" = , "EEI" = , "VEI" = , "EVI" = , "VVI" = , "EEE" = , "EVE" = , "VEE" = , "VVE" = , "EEV" = , "VEV" = , "EVV" = , "VVV" = TRUE, stop("invalid model name")) } em <- function(modelName, data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { checkModelName(modelName) funcName <- paste("em", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } estep <- function(modelName, data, parameters, warn = NULL, ...) { checkModelName(modelName) funcName <- paste("estep", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } ############################################################################# mclustVariance <- function(modelName, d=NULL, G=2) { x <- -1 if (nchar(modelName) == 1) { if (!is.null(d) && d != 1) stop("modelName and d are incompatible") varList <- switch(EXPR = modelName, "X" = list(sigmasq = x), "E" = list(sigmasq = x), "V" = list(sigmasq = rep(x,G)), stop("modelName not recognized")) } else { if (nchar(modelName) != 3) stop("modelName is misspecified") if (is.null(d)) d <- 3 varList <- switch(EXPR = modelName, "XII" = list(sigmasq = x), "EII" = list(sigmasq = x, scale = x, shape = rep(x,d)), "VII" = list(sigmasq = rep(x,G), scale = rep(x,G), shape = rep(x,d)), "XXI" = list(scale = x, shape = rep(x,d)), "EEI" = list(scale = x, shape = rep(x,d)), "EVI" = list(scale = x, shape = matrix(x,d,G)), "VEI" = list(scale = rep(x,G), shape = rep(x,d)), "VVI" = list(scale = rep(x,G), shape = matrix(x,d,G)), "XXX" = { M <- matrix(x,d,d); M[row(M) > col(M)] <- 0; list(cholSigma = M) }, "EEE" = { M <- matrix(x,d,d); M[row(M) > 
col(M)] <- 0; list(cholSigma = M) }, "VEE" = list(scale = rep(x,G), shape = rep(x,d), orientation = matrix(x,d,d)), "VVE" = list(scale = rep(x,G), shape = matrix(x,d,G), orientation = matrix(x,d,d)), "EVV" = list(scale = x, shape = matrix(x,d,G), orientation = array(x,c(d,d,G))), "EVE" = list(scale = x, shape = matrix(x,d,G), orientation = matrix(x,d,d)), "EEV" = list(scale = x, shape = rep(x,d), orientation = array(x,c(d,d,G))), "VEV" = list(scale = x, shape = matrix(x,d,G), orientation = array(x,c(d,d,G))), "VVV" = { A <- array(x,c(d,d,G)); I <- row(A[,,1]) > col(A[,,1]) for (k in 1:G) A[,,k][I] <- 0 list(cholsigma = A)}, stop("modelName not recognized")) } c(modelName = modelName, d = d, G = G, varList) } me <- function(modelName, data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { checkModelName(modelName) funcName <- paste("me", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } mstep <- function(modelName, data, z, prior = NULL, warn = NULL, ...) { checkModelName(modelName) funcName <- paste("mstep", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } mvn <- function(modelName, data, prior = NULL, warn = NULL, ...) 
{ modelName <- switch(EXPR = modelName, "E" = "X", "V" = "X", "X" = "X", "Spherical" = "XII", "EII" = "XII", "VII" = "XII", "XII" = "XII", "Diagonal" = "XXI", "EEI" = "XXI", "VEI" = "XXI", "EVI" = "XXI", "VVI" = "XXI", "XXI" = "XXI", "Ellipsoidal" = "XXX", "EEE" = "XXX", "VEE" = "XXX", "EVE" = "XXX", "EVV" = "XXX", "VVE" = "XXX", "EEV" = "XXX", "VEV" = "XXX", "VVV" = "XXX", "XXX" = "XXX", stop("invalid model name")) funcName <- paste("mvn", modelName, sep = "") mc <- match.call() mc[[1]] <- as.name(funcName) mc[[2]] <- NULL out <- eval(mc, parent.frame()) varnames <- colnames(as.matrix(data)) if(!all(is.null(varnames))) { rownames(out$parameters$mean) <- varnames dimnames(out$parameters$variance$Sigma) <- list(varnames, varnames) dimnames(out$parameters$variance$sigma) <- list(varnames, varnames, NULL) } return(out) } nVarParams <- function(modelName, d, G, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) # checkModelName(modelName) switch(EXPR = modelName, "E" = 1, "V" = G, "EII" = 1, "VII" = G, "EEI" = d, "VEI" = G + (d-1), "EVI" = 1 + G * (d-1), "VVI" = G * d, "EEE" = d*(d+1)/2, "EVE" = 1 + G*(d-1) + d*(d-1)/2, "VEE" = G + (d-1) + d*(d-1)/2, "VVE" = G + G * (d-1) + d*(d-1)/2, "EEV" = 1 + (d-1) + G * d*(d-1)/2, "VEV" = G + (d-1) + G * d*(d-1)/2, "EVV" = 1 - G + G * d*(d+1)/2, "VVV" = G * d*(d+1)/2, stop("invalid model name")) } nMclustParams <- function(modelName, d, G, noise = FALSE, equalPro = FALSE, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) checkModelName(modelName) if(G == 0) { ## one noise cluster case if(!noise) stop("undefined model") nparams <- 1 } else { nparams <- nVarParams(modelName, d = d, G = G) + G*d if(!equalPro) nparams <- nparams + (G - 1) if(noise) nparams <- nparams + 2 } return(nparams) } sim <- function(modelName, parameters, n, seed = NULL, ...) 
{ modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) checkModelName(modelName) funcName <- paste("sim", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } cdensVEI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VEI", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esvei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],dimnames(mu)[[2]]) structure(z, logarithm = logarithm, modelName = "VEI", WARNING = WARNING, returnCode = ret) } emVEI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) 
{ z <- estepVEI(data, parameters = parameters, warn = warn)$z meVEI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVEI <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VEI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esvei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VEI", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVEI <- function(data, z, prior = NULL, control = 
emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) > 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEI", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevei", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), double(p * G), double(G), double(p), double(K), double(G), double(p), double(p * G), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEI"), prior[names(prior) != "functionName"])) temp <- .Fortran("meveip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), double(p * G), double(G), double(p), double(K), double(G), double(p), 
double(p * G), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]][1] inner <- temp[[2]][2] err <- temp[[3]][1] inerr <- temp[[3]][2] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- temp[[7]] dimnames(mu) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(scale[k] * shape) if(inner >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner ret <- 2 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) attr(info, "inner") <- c(iterations = inner, error = inerr) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VEI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVEI <- function(data, z, prior = NULL, warn = NULL, control = NULL,...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEI", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$ itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] if(is.null(prior)) { temp <- .Fortran("msvei", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(G), double(G), double(p), double(p * G), PACKAGE = "mclust")[6:11] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEI"), prior[names( prior) != "functionName"])) temp <- .Fortran("msveip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(G), double(G), double(p), double(p * G), PACKAGE = "mclust")[10:15] } inner <- temp[[1]] inerr <- temp[[2]] mu <- matrix(temp[[3]], p, G) scale <- temp[[4]] shape <- temp[[5]] dimnames(mu) <- list(NULL, as.character(1:G)) pro <- 
temp[[6]] WARNING <- NULL if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any(! c(scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- shape <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { ret <- 0 sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(scale[k] * shape) if(inner >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner } } info <- c(iterations = inner, error = inerr) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VEI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } simVEI <- function(parameters, n, seed = NULL, ...) 
{ if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VEI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if(length(rtshape) != d) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if(length(rtscale) != G) stop("scale incompatible with mean") for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(rtscale[k] * rtshape), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VEI") } cdensV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) mu <- drop(parameters$mean) G <- length(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "V", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "V", WARNING = WARNING, returnCode = 9)) } if (length(sigmasq) == 1) sigmasq <- rep(sigmasq,G) temp <- .Fortran("es1v", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(names(data),NULL) structure(z, logarithm = logarithm, modelName = "V", WARNING = WARNING, returnCode = ret) } emV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepV(data, parameters = parameters, warn = warn)$z meV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepV <- function(data, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- drop(parameters$mean) G <- length(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "V", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "V", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("es1v", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data),NULL) structure(list(modelName = "V", n = n, d = 1, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensVEV <- 
function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VEV", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("esvev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(p), double(p), double(1), double(n * G), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VEV", WARNING = WARNING, returnCode = ret) } emVEV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVEV(data, parameters = parameters, warn = warn)$z meVEV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVEV <- function(data, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VEV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("esvev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(p), double(1), double(n * K), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VEV", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVEV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEV", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevev", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), as.integer(lwork), double(p * G), double(G), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[7:16] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEV"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevevp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), z, 
as.integer(control$itmax), as.double(control$tol), as.double(control$eps), as.integer(lwork), double(p * G), double(G), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[11:20] } z <- temp[[1]] its <- temp[[2]][1] inner <- temp[[2]][2] err <- temp[[3]][1] inerr <- temp[[3]][2] loglik <- temp[[4]] lapackSVDinfo <- temp[[5]] mu <- matrix(temp[[6]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[7]] shape <- temp[[8]] O <- aperm( array(temp[[9]], c(p, p, G)), c(2,1,3)) pro <- temp[[10]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- shapeO(shape, O, transpose = FALSE) sigma <- sweep(sigma, MARGIN = 3, STATS = scale, FUN = "*") if(inner >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner ret <- 2 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- structure(c(iterations = its, error = err), inner = c( iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) 
dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "VEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVEV <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEV", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } # shape <- sqrt(rev(sort(shape/exp(sum(log(shape))/p)))) if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop( "improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$ 
itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) if(is.null(prior)) { temp <- .Fortran("msvev", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(lwork), as.integer(lwork), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEV"), prior[names(prior) != "functionName"])) temp <- .Fortran("msvevp", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), double(lwork), as.integer(lwork), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[11:18] } lapackSVDinfo <- temp[[1]] inner <- temp[[2]] inerr <- temp[[3]] mu <- matrix(temp[[4]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[5]] shape <- temp[[6]] O <- aperm(array(temp[[7]], c(p, p, G)),c(2,1,3)) pro <- temp[[8]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DGESVD" if(warn) warning(WARNING) } O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any( !c(scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- sweep(shapeO(shape, O, transpose = FALSE), MARGIN = 3, STATS = scale, FUN = "*") if(inner >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner } 
ret <- if(inner < 0) 2 else 0 } info <- c(iteration = inner, error = inerr) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } simVEV <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VEV")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if(length(rtshape) != d) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if(length(rtscale) != G) stop("scale incompatible with mean") for(k in 1:G) { m <- ctabel[k] sss <- rtscale[k] * rtshape cholSigma <- t(parameters$variance$orientation[, , k]) * sss x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VEV") } cdensVII <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VII", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VII", WARNING = WARNING, returnCode = 9)) } temp <- .Fortran("esvii", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VII", WARNING = WARNING, returnCode = ret) } emVII <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVII(data, parameters = parameters, warn = warn)$z meVII(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVII <- function(data, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("esvii", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VII", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, 
returnCode = ret) } meVII <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) > 2) stop("data must be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VII", d=p, G=G, sigmasq=rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevii", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(K), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VII"), prior[names(prior) != "functionName"])) storage.mode(z) <- "double" temp <- .Fortran("meviip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), 
double(K), PACKAGE = "mclust")[c(11:17, 10)] } mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] sigmasq <- temp[[6]] pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6) || any(sigmasq <= max(control$eps, 0))) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(rep(sigmasq[k], p)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VII", d = p, G = G, sigma = sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) }
mstepVII <- function(data, z, prior = NULL, warn = NULL, ...)
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal number of observations") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VII", d=p, G=G, sigmasq=rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("msvii", as.double(data), z, as.integer(n), as.integer(p), as.integer(G), double(p * G), double(G), double(G), PACKAGE = "mclust")[6:8] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VII"), prior[names(prior) != "functionName"])) temp <- .Fortran("msviip", as.double(data), z, as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(G), double(G), PACKAGE = "mclust")[10:12] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) sigmasq <- temp[[2]] pro <- temp[[3]] sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(rep(sigmasq[k], p)) WARNING <- NULL if(any(sigmasq > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VII", d = p, 
G = G, sigma = sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simVII <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d), modelName = "VII")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) sigmasq <- parameters$variance$sigmasq for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(rep(sqrt(sigmasq[k]), d)), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VII") } meV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal length of data") K <- dimz[2] if(!is.null(Vinv)) { G <- K - 1 if (Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "V", d=1, G=G, sigmasq = rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="V", prior=prior, n=n, d=1, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("me1v", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if(is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(G), double(K), PACKAGE = "mclust")[6:12] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "V"), prior[names(prior) != "functionName"])) temp <- .Fortran("me1vp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(G), double(K), PACKAGE = "mclust")[c(10:16, 9)] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- temp[[5]] names(mu) <- as.character(1:G) sigmasq <- temp[[6]] pro <- temp[[7]]
## logpost <- temp[[8]]
WARNING <- NULL if(loglik > 
signif(.Machine$double.xmax, 6) || any(sigmasq <= max(control$eps, 0))) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq[] <- z[] <- loglik <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq[] <- z[] <- loglik <- NA ret <- if(control$equalPro) -2 else -3 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 info <- c(iterations = its, error = err) dimnames(z) <- list(names(data),NULL) variance = list(modelName = "V", d = 1, G = G, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "V", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepV <- function(data, z, prior = NULL, warn = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(!oneD) stop("data must be one-dimensional")
  data <- as.vector(data)
  n <- length(data)
  ##
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n) stop("row dimension of z should equal data length")
  # number of groups
  G <- dimz[2]
  ##
  if(all(is.na(z))) {
    WARNING <- "z is missing"
    if(warn) warning(WARNING)
    variance <- list(modelName = "V", d=1, G=G, sigmasq=rep(NA,G))
    parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance)
    return(structure(list(modelName="V", prior=prior, n=n, d=1, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9))
  }
  if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z")
  if(is.null(prior)) {
    temp <- .Fortran("ms1v", as.double(data), as.double(z), as.integer(n), as.integer(G), double(G), double(G), double(G), PACKAGE = "mclust")[5:7]
  } else {
    priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "V"), prior[names(prior) != "functionName"]))
    storage.mode(z) <- "double"
    temp <- .Fortran("ms1vp", as.double(data), z, as.integer(n), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(G), double(G), double(G), PACKAGE = "mclust")[9:11]
  }
  mu <- temp[[1]]
  names(mu) <- as.character(1:G)
  sigmasq <- temp[[2]]
  pro <- temp[[3]]
  WARNING <- NULL
  if(any(sigmasq > signif(.Machine$double.xmax, 6))) {
    WARNING <- "cannot compute M-step"
    if(warn) warning(WARNING)
    mu[] <- pro[] <- sigmasq[] <- z[] <- NA
    ret <- -1
  } else ret <- 0
  dimnames(z) <- list(names(data), NULL)
  variance <- list(modelName = "V", d = 1, G = G, sigmasq = sigmasq, scale = sigmasq)
  parameters <- list(pro=pro, mean=mu, variance=variance)
  structure(list(modelName = "V", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret)
}
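## Illustrative sketch (not part of the package): the M-step routines above
## delegate the weighted estimates to compiled Fortran ("ms1v" for the
## univariate "V" model). Under the assumption of no prior and no noise
## component, the same quantities can be written in a few lines of base R.
## The function name `mstepV_sketch` is hypothetical and for exposition only;
## the compiled code additionally handles priors and numerical safeguards.

```r
# Plain-R sketch of the univariate "V" M-step: given data x and an
# n-by-G responsibility matrix z, estimate mixing proportions, group
# means, and group-specific variances by z-weighted averages.
mstepV_sketch <- function(x, z)
{
  nk <- colSums(z)                        # effective group sizes
  pro <- nk / length(x)                   # mixing proportions
  mu  <- as.vector(crossprod(z, x)) / nk  # weighted means
  sigmasq <- vapply(seq_len(ncol(z)), function(k)
    sum(z[, k] * (x - mu[k])^2) / nk[k], numeric(1))  # weighted variances
  list(pro = pro, mean = mu, sigmasq = sigmasq)
}
```

## With hard (0/1) responsibilities this reduces to per-group sample means
## and maximum-likelihood variances.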
simV <- function(parameters, n, seed = NULL, ...) { if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, 2), modelName = "V")) } if(!is.null(seed)) set.seed(seed) mu <- parameters$mean G <- length(mu) pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- rep(0, n) sd <- sqrt(parameters$variance$sigmasq) for(k in 1:G) { x[clabels == k] <- mu[k] + rnorm(ctabel[k], sd = sd[k]) } structure(cbind(group = clabels, "1" = x), modelName = "V") } cdensVVI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VVI", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esvvi", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- 
list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VVI", WARNING = WARNING, returnCode = ret) } emVVI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVVI(data, parameters = parameters, warn = warn)$z meVVI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVVI <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if (is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VVI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esvvi", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- 
list(dimnames(data)[[1]],NULL) structure(list(modelName = "VVI", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVVI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) > 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVI", d = p, G = G, scale = rep(NA,G), shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevvi", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(p * G), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVI"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevvip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage),
as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(p * G), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- matrix(temp[[7]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(sweep(shape, MARGIN = 2, STATS = scale, FUN = "*"), 2, diag), c(p, p, G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVVI <- function(data, z, prior = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal number of observations") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVI", d = p, G = G, scale = rep(NA,G), shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("msvvi", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(G), double(p * G), double(G), PACKAGE = "mclust")[6:9] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVI"), prior[names(prior) != "functionName"])) temp <- .Fortran("msvvip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(G), double(p * G), double(G), PACKAGE = "mclust")[10:13] } mu <- matrix(temp[[1]], p, G) scale <- temp[[2]] shape <- matrix(temp[[3]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[4]] WARNING <- NULL if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any(! 
c(scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- array(apply(sweep(shape, MARGIN = 2, STATS = scale, FUN = "*"), 2, diag), c(p, p, G)) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVI", d = p, G = G, sigma = sigma, sigmasq = scale, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simVVI <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VVI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if(!all(dim(rtshape) == dim(mu))) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if(length(rtscale) != G) stop("scale incompatible with mean") for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(rtscale[k] * rtshape[, k]), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VVI") } cdensVVV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...)
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VVV", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$cholsigma)) stop("variance parameters are missing") temp <- .Fortran("esvvv", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholsigma), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(p), double(1), double(n * G), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, G) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DPOTRF" if(warn) warning(WARNING) } z[] <- NA ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VVV", WARNING = WARNING, returnCode = ret) } emVVV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVVV(data, parameters = parameters, warn = warn)$z meVVV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVVV <- function(data, parameters, warn = NULL, ...)
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VVV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$cholsigma)) stop("variance parameters are missing") temp <- .Fortran("esvvv", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholsigma), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(1), double(n * K), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, K) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DPOTRF" if(warn) warning(WARNING) } z[] <- loglik <- NA ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VVV", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVVV <- function(data, z, prior = 
NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVV", d = p, G = G, sigma = array(NA, c(p,p,G)), cholsigma = array(NA, c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VVV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevvv", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(p * p * G), double(K), double(p), double(p*p), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVV"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevvvp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$scale) else priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]),
as.double(control$eps), double(p * G), double(p * p * G), double(K), double(p), double(p*p), PACKAGE = "mclust")[c(11:17, 10)] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) cholsigma <- array(temp[[6]], c(p, p, G)) pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(cholsigma, 3, unchol, upper = TRUE), c(p,p,G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = abs(err)) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(cholsigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVV", d = p, G = G, sigma = sigma, cholsigma = cholsigma) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVVV <- function(data, z, prior = NULL, warn = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be a matrix or a vector")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("row dimension of z should equal data length")
  G <- dimz[2]
  if(all(is.na(z)))
    { WARNING <- "z is missing"
      if(warn) warning(WARNING)
      variance <- list(modelName = "VVV", d = p, G = G,
                       sigma = array(NA, c(p,p,G)),
                       cholsigma = array(NA, c(p,p,G)))
      parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G),
                         variance=variance)
      return(structure(list(modelName="VVV", prior=prior, n=n, d=p, G=G,
                            z=z, parameters=parameters, loglik=NA),
                       WARNING = WARNING, returnCode = 9))
  }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  if(is.null(prior))
    { temp <- .Fortran("msvvv",
                       as.double(data),
                       as.double(z),
                       as.integer(n),
                       as.integer(p),
                       as.integer(G),
                       double(p),
                       double(p * G),
                       double(p * p * G),
                       double(G),
                       double(p * p),
                       PACKAGE = "mclust")[7:9]
  } else {
    priorParams <- do.call(prior$functionName,
                           c(list(data = data, G = G, modelName = "VVV"),
                             prior[names(prior) != "functionName"]))
    temp <- .Fortran("msvvvp",
                     as.double(data),
                     as.double(z),
                     as.integer(n),
                     as.integer(p),
                     as.integer(G),
                     as.double(priorParams$shrinkage),
                     as.double(priorParams$mean),
                     as.double(if(any(priorParams$scale != 0))
                       chol(priorParams$scale) else priorParams$scale),
                     as.double(priorParams$dof),
                     double(p),
                     double(p * G),
                     double(p * p * G),
                     double(G),
                     double(p * p),
                     PACKAGE = "mclust")[11:13]
  }
  mu <- matrix(temp[[1]], p, G)
  dimnames(mu) <- list(NULL, as.character(1:G))
  cholsigma <- array(temp[[2]], c(p, p, G))
  pro <- temp[[3]]
  WARNING <- NULL
  if(any(c(mu, cholsigma) > signif(.Machine$double.xmax, 6)))
    { WARNING <- "cannot compute M-step"
      if(warn) warning(WARNING)
      # 'sigma' must be created here before being filled with NAs
      mu[] <- cholsigma[] <- NA
      sigma <- array(NA, c(p, p, G))
      ret <- -1
  } else {
    sigma <- array(apply(cholsigma, 3, unchol, upper =
TRUE), c(p,p,G))
    ret <- 0
  }
  dimnames(z) <- list(dimnames(data)[[1]], NULL)
  dimnames(mu) <- list(dimnames(data)[[2]], NULL)
  dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL)
  dimnames(cholsigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL)
  variance <- list(modelName = "VVV", d = p, G = G,
                   sigma = sigma, cholsigma = cholsigma)
  parameters <- list(pro=pro, mean=mu, variance=variance)
  structure(list(modelName = "VVV", prior = prior, n = n, d = p, G = G,
                 z = z, parameters = parameters),
            WARNING = WARNING, returnCode = ret)
}

simVVV <- function(parameters, n, seed = NULL, ...)
{
  if(!is.null(seed)) set.seed(seed)
  mu <- as.matrix(parameters$mean)
  d <- nrow(mu)
  G <- ncol(mu)
  if(any(is.na(parameters[c("mean", "variance")])) ||
     any(is.null(parameters[c("mean", "variance")])))
    { warning("parameters are missing")
      return(structure(matrix(as.double(NA), n, d + 1), modelName = "VVV"))
  }
  pro <- parameters$pro
  if(is.null(pro)) pro <- rep(1/G, G)
  clabels <- sample(1:G, size = n, replace = TRUE, prob = pro)
  ctable <- tabulate(clabels, nbins = G)
  x <- matrix(0, n, d)
  if(is.null(cholsigma <- parameters$variance$cholsigma))
    { if(is.null(sigma <- parameters$variance$sigma))
        { stop("variance parameters must include either sigma or cholsigma") }
      cholsigma <- apply(sigma, 3, chol)
      for(k in 1:ncol(cholsigma))
        sigma[, , k] <- cholsigma[, k]
      cholsigma <- sigma
  }
  if(dim(cholsigma)[3] != G)
    stop("variance incompatible with mean")
  for(k in 1:G)
    { m <- ctable[k]
      x[clabels == k,] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*%
                                  cholsigma[,,k],
                                MARGIN = 2, STATS = mu[,k], FUN = "+")
  }
  dimnames(x) <- list(NULL, paste0("x", 1:d))
  structure(cbind(group = clabels, x), modelName = "VVV")
}

# single component univariate case
mvnX <- function(data, prior = NULL, warn = NULL, ...)
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one dimensional") data <- as.vector(data) n <- length(data) if(is.null(prior)) { temp <- .Fortran("mvn1d", as.double(data), as.integer(n), double(1), double(1), double(1), PACKAGE = "mclust")[3:5] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "X"), prior[names(prior) != "functionName"])) temp <- .Fortran("mvn1p", as.double(data), as.integer(n), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(1), double(1), double(1), PACKAGE = "mclust")[c(7:9, 6)] logpost <- temp[[4]] } mu <- temp[[1]] sigmasq <- temp[[2]] loglik <- temp[[3]] ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance = list(modelName= "X", d = 1, G = 1, sigmasq = sigmasq) parameters <- list(pro = 1, mean = mu, variance = variance) structure(list(modelName = "X", prior = prior, n = n, d = 1, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensX <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensE") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "X" return(z) } emX <- function(data, prior = NULL, warn = NULL, ...) { mvnX(data, prior = prior, warn = warn, ...) } meX <- emX # single component multivariate case with diagonal common variance mvnXII <- function(data, prior = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD) stop("for multidimensional data only") if(length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(is.null(prior)) { temp <- .Fortran("mvnxii", as.double(data), as.integer(n), as.integer(p), double(p), double(1), double(1), PACKAGE = "mclust")[4:6] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "XII"), prior[names(prior) != "functionName"])) temp <- .Fortran("mnxiip", as.double(data), as.integer(n), as.integer(p), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p), double(1), double(1), PACKAGE = "mclust")[c(8:10, 7)] logpost <- temp[[4]] } mu <- temp[[1]] sigmasq <- temp[[2]] loglik <- temp[[3]] Sigma <- sigmasq * diag(p) ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance <- list(modelName = "XII", d = p, G = 1, sigmasq = sigmasq, Sigma = Sigma, sigma = array(Sigma, c(p, p, 1)), scale = sigmasq) parameters <- list(pro = 1, mean = matrix(mu, ncol = 1), variance = variance) structure(list(modelName = "XII", prior = prior, n = n, d = p, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensXII <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensEII") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "XII" return(z) } emXII <- function(data, prior = NULL, warn = NULL, ...) { mvnXII(data, prior = prior, warn = warn, ...) } meXII <- emXII # single component multivariate case with diagonal different variances mvnXXI <- function(data, prior = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD) stop("for multidimensional data only") if(length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(is.null(prior)) { temp <- .Fortran("mvnxxi", as.double(data), as.integer(n), as.integer(p), double(p), double(1), double(p), double(1), PACKAGE = "mclust")[4:7] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "XXI"), prior[names(prior) != "functionName"])) temp <- .Fortran("mnxxip", as.double(data), as.integer(n), as.integer(p), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p), double(1), double(p), double(1), PACKAGE = "mclust")[c(8:11, 7)] logpost <- temp[[5]] } mu <- temp[[1]] scale <- temp[[2]] shape <- temp[[3]] loglik <- temp[[4]] Sigma <- diag(scale * shape) ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance <- list(modelName = "XXI", d = p, G = 1, Sigma = Sigma, sigma = array(Sigma, c(p, p, 1)), scale = scale, shape = shape) parameters <- list(pro = 1, mean = matrix(mu, ncol = 1), variance = variance) structure(list(modelName = "XXI", prior = prior, n = n, d = p, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensXXI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensEEI") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "XXI" return(z) } emXXI <- function(data, prior = NULL, warn = NULL, ...) { mvnXXI(data, prior = prior, warn = warn, ...) 
} meXXI <- emXXI # single component multivariate case with full covariance matrix mvnXXX <- function(data, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD) stop("for multidimensional data only") if(length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(is.null(prior)) { temp <- .Fortran("mvnxxx", as.double(data), as.integer(n), as.integer(p), double(p), double(p * p), double(1), PACKAGE = "mclust")[c(4:6)] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "XXX"), prior[names(prior) != "functionName"])) temp <- .Fortran("mnxxxp", as.double(data), as.integer(n), as.integer(p), double(p), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$scale) else priorParams$scale), as.double(priorParams$dof), double(p), double(p * p), double(1), PACKAGE = "mclust")[c(9:11, 8)] logpost <- temp[[4]] } mu <- temp[[1]] cholSigma <- matrix(temp[[2]], p, p) Sigma <- unchol(cholSigma, upper = TRUE) loglik <- temp[[3]] ## Sigma = t(cholSigma) %*% cholSigma ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance <- list(modelName = "XXX", d = p, G = 1, Sigma = Sigma, cholSigma = cholSigma, cholsigma = cholSigma, sigma = array(Sigma, c(p, p, 1))) parameters <- list(pro = 1, mean = matrix(mu, ncol = 1), variance = variance) structure(list(modelName = "XXX", prior = prior, n = n, d = p, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensXXX <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
{
  call <- match.call()
  mc <- match.call(expand.dots = FALSE)
  mc[[1]] <- as.name("cdensEEE")
  z <- eval(mc, parent.frame())
  attr(z, "modelName") <- "XXX"
  return(z)
}

emXXX <- function(data, prior = NULL, warn = NULL, ...)
{
  mvnXXX(data, prior = prior, warn = warn, ...)
}

meXXX <- emXXX

######################################################
##                                                  ##
##  Dimension reduction for model-based             ##
##  clustering and classification                   ##
##                                                  ##
##  Author: Luca Scrucca                            ##
######################################################

# GMMDR dimension reduction -----------------------------------------------

MclustDR <- function(object, normalized = TRUE, Sigma,
                     lambda = 0.5, tol = sqrt(.Machine$double.eps))
{
  # Dimension reduction for model-based clustering and classification
  #
  # object = an object of class 'Mclust' or 'MclustDA'
  # data = the data used to produce object.
  # normalized = normalize direction coefs to have unit norm

  call <- match.call()
  if(!any(class(object) %in% c("Mclust", "MclustDA")))
    stop("object must be of class 'Mclust' or 'MclustDA'")
  x <- data.matrix(object$data)
  p <- ncol(x)
  n <- nrow(x)
  #-----------------------------------------------------------------
  # overall parameters
  mu <- colMeans(x)
  if(missing(Sigma))
    Sigma <- var(x)*(n-1)/n
  # within-cluster parameters based on fitted mixture model
  if(inherits(object, "Mclust"))
    { type <- "Mclust"
      G <- object$G
      modelName <- object$modelName
      y <- object$classification
      cl2mc <- seq(G)
      class <- as.factor(y)
      par <- object$parameters
      f <- par$pro
      if(is.null(f)) f <- 1
      if(!is.na(object$hypvol)) f <- f[-length(f)]
      # within-group means
      mu.G <- matrix(par$mean,p,G)
      # within-group covars
      if(p == 1)
        { Sigma.G <- array(par$variance$sigmasq, c(p,p,G)) }
      else
        { Sigma.G <- par$variance$sigma }
  }
  else if(inherits(object, "MclustDA"))
    { type <- object$type
      modelName <- sapply(object$models, function(m) m$modelName)
      class <- object$class
      class <- factor(class, levels =
names(object$models))
      y <- rep(NA, length(class))
      for(i in 1:nlevels(class))
        { y[class == levels(class)[i]] <-
            paste(levels(class)[i],
                  object$models[[i]]$classification, sep = ":")
      }
      y <- as.numeric(factor(y))
      cl2mc <- rep(seq(length(object$models)),
                   sapply(object$models, function(m) m$G))
      m <- sapply(object$models, function(mod) mod$n)
      ncomp <- sapply(object$models, function(mod) mod$G)
      G <- sum(ncomp)
      f <- vector(length = G)
      mu.G <- matrix(as.double(NA), nrow = p, ncol = G)
      Sigma.G <- array(NA, dim = c(p,p,G))
      for(i in 1:length(object$models))
        { ii <- seq(c(0,cumsum(ncomp))[i]+1, c(0,cumsum(ncomp))[i+1])
          par <- object$models[[i]]$parameters
          if(is.null(par$pro)) par$pro <- 1
          f[ii] <- par$pro * m[i]/sum(m)
          # within-group means
          mu.G[,ii] <- par$mean
          # within-group covars
          if(p == 1)
            { Sigma.G[,,ii] <- array(par$variance$sigmasq, c(p,p,1)) }
          else
            { Sigma.G[,,ii] <- par$variance$sigma }
      }
  }
  #-----------------------------------------------------------------
  SVD <- svd(Sigma)
  pos <- (SVD$d > max(tol*SVD$d[1], 0)) # in case of not full rank covar matrix
  if(all(pos))
    { inv.Sigma <- SVD$v %*% (1/SVD$d * t(SVD$u))
      inv.sqrt.Sigma <- SVD$v %*% (1/sqrt(SVD$d) * t(SVD$u))
  } else {
    inv.Sigma <- SVD$v[,pos,drop=FALSE] %*%
      (1/SVD$d[pos] * t(SVD$u[,pos,drop=FALSE]))
    inv.sqrt.Sigma <- SVD$v[,pos,drop=FALSE] %*%
      (1/sqrt(SVD$d[pos]) * t(SVD$u[,pos,drop=FALSE]))
  }
  #-----------------------------------------------------------------
  # pooled within-group covariance
  S <- matrix(0, p, p)
  for(j in 1:G)
    S <- S + f[j]*Sigma.G[,,j]
  #-----------------------------------------------------------------
  # kernel matrix
  M.I <- crossprod(t(sweep(mu.G, 1, FUN="-", STATS=mu))*sqrt(f))
  M.II <- matrix(0, p, p)
  for(j in 1:G)
    M.II <- M.II + f[j]*crossprod(inv.sqrt.Sigma%*%(Sigma.G[,,j]-S))
  # convex combination of M_I and M_II
  M <- 2*lambda*crossprod(inv.sqrt.Sigma %*% M.I) + 2*(1-lambda)*M.II
  # regularize the M_II
  # M <- M.I + lambda*M.II
  # M <- crossprod(inv.sqrt.Sigma %*% M.I) +
  #      (1-lambda)*M.II + lambda/p *
diag(p) # SVD <- eigen.decomp(M, Sigma) l <- SVD$l; l <- (l+abs(l))/2 numdir <- min(p, sum(l > sqrt(.Machine$double.eps))) basis <- as.matrix(SVD$v)[,seq(numdir),drop=FALSE] sdx <- diag(Sigma) std.basis <- as.matrix(apply(basis, 2, function(x) x*sdx)) if(normalized) { basis <- as.matrix(apply(basis, 2, normalize)) std.basis <- as.matrix(apply(std.basis, 2, normalize)) } dimnames(basis) <- list(colnames(x), paste("Dir", 1:ncol(basis), sep="")) dimnames(std.basis) <- dimnames(basis) Z <- scale(x, scale = FALSE) %*% basis # out = list(call = call, type = type, x = x, Sigma = Sigma, class = class, mixcomp = y, class2mixcomp = cl2mc, G = G, modelName = modelName, mu = mu.G, sigma = Sigma.G, pro = f, M = M, M.I = M.I, M.II = M.II, lambda = lambda, evalues = l, raw.evectors = as.matrix(SVD$v), basis = basis, std.basis = std.basis, numdir = numdir, dir = Z) class(out) = "MclustDR" return(out) } print.MclustDR <- function(x, ...) { cat(paste("\'", class(x), "\' object for ", x$type, " mixture model:\n\n", sep = "")) tab <- rbind(x$basis[,seq(x$numdir),drop=FALSE], "-----------" = rep(NA, x$numdir), Eigenvalues = x$evalues[seq(x$numdir)]) print(tab, na.print = "", ...) invisible() } summary.MclustDR <- function(object, numdir, std = FALSE, ...) { if(missing(numdir)) numdir <- object$numdir dim <- seq(numdir) if(object$type == "Mclust") { n <- as.vector(table(object$class)) G <- object$G } else { n <- as.vector(table(object$class)) G <- as.vector(table(object$class2mixcomp)) } obj <- list(type = object$type, modelName = object$modelName, classes = levels(object$class), n = n, G = G, basis = object$basis[,seq(dim),drop=FALSE], std = std, std.basis = object$std.basis[,seq(dim),drop=FALSE], evalues = object$evalues[seq(dim)], evalues.cumperc = with(object, { evalues <- evalues[seq(numdir)] cumsum(evalues)/sum(evalues)*100 }) ) class(obj) <- "summary.MclustDR" return(obj) } print.summary.MclustDR <- function(x, digits = max(5, getOption("digits") - 3), ...) 
{ title <- paste("Dimension reduction for model-based clustering and classification") cat(rep("-", nchar(title)),"\n",sep="") cat(title, "\n") cat(rep("-", nchar(title)),"\n",sep="") if(x$type == "Mclust") { tab <- data.frame(n = x$n) rownames(tab) <- x$classes tab <- as.matrix(tab) names(dimnames(tab)) <- c("Clusters", "") cat(paste("\nMixture model type: ", x$type, " (", x$modelName, ", ", x$G, ")\n", sep = "")) print(tab, quote = FALSE, right = TRUE) } else if(x$type == "MclustDA" | x$type == "EDDA") { tab <- data.frame(n = x$n, Model = x$modelName, G = x$G) rownames(tab) <- x$classes tab <- as.matrix(tab) names(dimnames(tab)) <- c("Classes", "") cat(paste("\nMixture model type:", x$type, "\n")) print(tab, quote = FALSE, right = TRUE) } else stop("invalid model type") if(x$std) { cat("\nStandardized basis vectors", "using predictors", "scaled to have", "std.dev. equal to one:\n", fill = TRUE) print(x$std.basis, digits = digits) } else { cat("\nEstimated basis vectors:\n") print(x$basis, digits = digits) } cat("\n") evalues <- rbind("Eigenvalues" = x$evalues, "Cum. %" = x$evalues.cumperc) colnames(evalues) <- colnames(x$basis) print(evalues, digits=digits) invisible() } projpar.MclustDR <- function(object, dim, center = TRUE, raw = FALSE) { # Transform estimated parameters to projection subspace given by # 'dim' directions x <- object$x p <- ncol(x) n <- nrow(x) G <- object$G numdir <- object$numdir if(missing(dim)) dim <- seq(numdir) numdir <- length(dim) if(raw) V <- object$raw.evectors[,dim,drop=FALSE] else V <- object$basis[,dim,drop=FALSE] # mu <- t(object$mu) if(center) mu <- scale(mu, center = apply(x,2,mean), scale = FALSE) Mu <- mu %*% V # sigma <- object$sigma cho <- array(apply(sigma, 3, chol), c(p, p, G)) Sigma <- array(apply(cho, 3, function(R) crossprod(R %*% V)), c(numdir, numdir, G)) # return(list(mean = Mu, variance = Sigma)) } predict.MclustDR <- function(object, dim = 1:object$numdir, newdata, eval.points, ...) 
{ dim <- dim[dim <= object$numdir] if(missing(newdata) & missing(eval.points)) { dir <- object$dir[,dim,drop=FALSE] } else if(!missing(newdata)) { newdata <- as.matrix(newdata) newdata <- scale(newdata, center = colMeans(object$x), scale = FALSE) dir <- newdata %*% object$basis[,dim,drop=FALSE] } else if(!missing(eval.points)) { dir <- as.matrix(eval.points) } n <- nrow(dir) G <- object$G # num. components nclass <- nlevels(object$class) # num. classes par <- projpar.MclustDR(object, dim) Mu <- par$mean Sigma <- par$variance # old version # cden <- array(NA, c(n, G)) # for(j in 1:G) # { cden[,j] <- mvdnorm(dir, Mu[j,], Sigma[,,j], log = FALSE) } # z <- sweep(cden, 2, FUN = "*", STATS = object$pro) # den <- apply(z, 1, sum) # z <- sweep(z, 1, FUN = "/", STATS = den) # new version: more efficient and accurate z <- array(NA, c(n, G)) for(j in 1:G) { z[,j] <- mvdnorm(dir, Mu[j,], Sigma[,,j], log = TRUE) } z <- sweep(z, 2, FUN = "+", STATS = log(object$pro)) logden <- apply(z, 1, logsumexp) z <- sweep(z, 1, FUN = "-", STATS = logden) z <- exp(z) # zz <- matrix(0, n, nclass) for(j in seq(nclass)) { zz[,j] <- rowSums(z[,object$class2mixcomp == j,drop=FALSE]) } z <- zz; rm(zz) class <- factor(apply(z,1,which.max), levels = 1:nclass, labels = levels(object$class)) out <- list(dir = dir, density = exp(logden), z = z, uncertainty = 1 - apply(z,1,max), classification = class) return(out) } predict2D.MclustDR <- function(object, dim = 1:2, ngrid = 100, xlim, ylim) { dim <- dim[1:2] dir <- object$dir[,dim,drop=FALSE] G <- object$G par <- projpar.MclustDR(object, dim) Mu <- par$mean Sigma <- par$variance if(missing(xlim)) xlim <- range(dir[,1]) # +c(-1,1)*0.05*diff(range(x))) if(missing(ylim)) ylim <- range(dir[,2]) # +c(-1,1)*0.05*diff(range(x))) xygrid <- cbind(seq(xlim[1], xlim[2], length = ngrid), seq(ylim[1], ylim[2], length = ngrid)) grid <- expand.grid(xygrid[,1], xygrid[,2]) pred <- predict.MclustDR(object, dim = dim, eval.points = grid) out <- list(x = xygrid[,1], y = 
xygrid[,2], density = matrix(pred$density, ngrid, ngrid), z = array(pred$z, c(ngrid, ngrid, ncol(pred$z))), uncertainty = matrix(pred$uncertainty, ngrid, ngrid), classification = matrix(pred$classification, ngrid, ngrid)) return(out) } plot.MclustDR <- function(x, dimens, what = c("scatterplot", "pairs", "contour", "classification", "boundaries", "density", "evalues"), symbols, colors, col.contour = gray(0.7), col.sep = grey(0.4), ngrid = 100, nlevels = 5, asp = NULL, ...) { object <- x x <- object$x p <- ncol(x) n <- nrow(x) G <- object$G y <- object$mixcomp class <- as.numeric(object$class) nclass <- length(table(class)) dir <- object$dir numdir <- object$numdir dimens <- if(missing(dimens)) seq(numdir) else intersect(as.numeric(dimens), seq(numdir)) if(missing(symbols)) { if(G <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols") } else if(G <= 26) { symbols <- LETTERS } } if(length(symbols) == 1) symbols <- rep(symbols,nclass) if(length(symbols) < nclass) { warning("more symbols needed to show classification") symbols <- rep(16, nclass) } if(missing(colors)) { colors <- mclust.options("classPlotColors") } if(length(colors) == 1) colors <- rep(colors,nclass) if(length(colors) < nclass) { warning("more colors needed to show mixture components") colors <- rep("black", nclass) } niceRange <- function (x, f = 0.04) { r <- range(x) d <- diff(r) out <- c(r[1] - d*f, r[2] + d*f) return(out) } #################################################################### what <- match.arg(what, several.ok = TRUE) oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) if(any(i <- (what == "pairs")) & (length(dimens) == 2)) { what[i] <- "scatterplot" } if(length(dimens) == 1) { what[!(what == "density" | what == "evalues")] <- "density" } what <- unique(what) plot.MclustDR.scatterplot <- function(...) 
{ dir <- dir[,dimens,drop=FALSE] plot(dir, col = colors[class], pch = symbols[class], xlab = colnames(dir)[1], ylab = colnames(dir)[2], asp = asp, ...) } plot.MclustDR.pairs <- function(...) { dir <- dir[,dimens,drop=FALSE] pairs(dir, col = colors[class], pch = symbols[class], gap = 0.2, asp = asp, ...) } plot.MclustDR.density <- function(...) { dimens <- dimens[1] dir <- object$dir[,dimens,drop=FALSE] par <- projpar.MclustDR(object, dimens) Mu <- par$mean Sigma <- par$variance q <- seq(min(dir), max(dir), length=2*ngrid) dens <- matrix(as.double(NA), length(q), G) for(j in 1:G) dens[,j] <- dnorm(q, Mu[j,], sqrt(Sigma[,,j])) # if(object$type == "MclustDA") { d <- t(apply(dens, 1, function(x, p = object$pro) p*x)) dens <- matrix(as.double(NA), length(q), nclass) tab <- table(y, class) for(i in 1:ncol(tab)) { j <- which(tab[,i] > 0) dens[,i] <- apply(d[,j,drop=FALSE],1,sum) } } # oldpar <- par(mar = c(0,5.1,1,1), mfrow = par("mfrow"), no.readonly = TRUE) on.exit(par(oldpar)) layout(matrix(1:2,2,1), heights = c(2,1)) plot(0, 0, type = "n", xlab = colnames(dir), ylab = "Density", xlim = range(q, dir), ylim = range(0, dens*1.1), xaxt = "n") for(j in 1:ncol(dens)) lines(q, dens[,j], col = colors[j]) dir.class <- split(dir, class) par(mar = c(4.1,5.1,0,1)) boxplot(dir.class, col = adjustcolor(colors[1:nclass], alpha.f = 0.3), border = colors[1:nclass], horizontal = TRUE, pars = list(boxwex = 0.6, staplewex = 0.8, medlwd = 2, whisklty = 3, outlty = 1, outpch = NA), ylim = range(q,dir), yaxt = "n", xlab = colnames(dir)) axis(2, at = 1:nclass, labels = levels(object$class), tick = FALSE, cex = 0.8, las = 2) } plot.MclustDR.contour <- function(...) 
{ dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] par <- projpar.MclustDR(object, dimens) Mu <- par$mean Sigma <- par$variance # draw contours for each class or cluster plot(dir, type = "n", asp = asp) for(k in seq(nclass)) { i <- which(object$class2mixcomp == k) parameters <- list(pro = object$pro[i]/sum(object$pro[i]), mean = t(par$mean[i,,drop=FALSE]), variance = list(G = length(i), d = 2, sigma = par$variance[,,i,drop=FALSE])) surfacePlot(dir, parameters, col = col.contour, nlevels = nlevels, grid = ngrid, xlim = par("usr")[1:2], ylim = par("usr")[3:4], asp = asp, add = TRUE) } points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.classification.Mclust <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) pred$classification <- apply(pred$z, 1:2, which.max) # image(pred$x, pred$y, pred$classification, col = adjustcolor(colors[1:G], alpha.f = 0.1), xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) # for(j in 1:G) # { z <- ifelse(pred$classification == j, 1, -1) # contour(pred$x, pred$y, z, col = col.sep, # add = TRUE, levels = 0, drawlabels = FALSE) # } points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.classification.MclustDA <- function(...) 
{ dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) pred$classification <- apply(pred$z, 1:2, which.max) # image(pred$x, pred$y, pred$classification, col = adjustcolor(colors[1:nclass], alpha.f = 0.1), xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) # for(j in 1:nclass) # { z <- ifelse(pred$classification == j, 1, -1) # contour(pred$x, pred$y, z, col = col.sep, # add = TRUE, levels = 0, drawlabels = FALSE) # } points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.boundaries.Mclust <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) image(pred$x, pred$y, pred$uncertainty, col = rev(gray.colors(10, start = 0, end = 1)), breaks = seq(0, 1-1/nclass, length = 11), xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.boundaries.MclustDA <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) levels <- seq(0, 1-1/nclass, length = 11) col <- rev(gray.colors(10, start = 0, end = 1)) image(pred$x, pred$y, pred$uncertainty, col = col, breaks = levels, xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.evalues <- function(...) 
{ plotEvalues.MclustDR(object, numdir = max(dimens), plot = TRUE) } if(interactive() & length(what) > 1) { title <- "Dimension reduction for model-based clustering and classification plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "scatterplot") plot.MclustDR.scatterplot(...) if(what[choice] == "pairs") plot.MclustDR.pairs(...) if(what[choice] == "contour") plot.MclustDR.contour(...) if(what[choice] == "classification" & object$type == "Mclust") plot.MclustDR.classification.Mclust(...) if(what[choice] == "classification" & (object$type == "EDDA" | object$type == "MclustDA")) plot.MclustDR.classification.MclustDA(...) if(what[choice] == "boundaries" & object$type == "Mclust") plot.MclustDR.boundaries.Mclust(...) if(what[choice] == "boundaries" & (object$type == "EDDA" | object$type == "MclustDA")) plot.MclustDR.boundaries.MclustDA(...) if(what[choice] == "density") plot.MclustDR.density(...) if(what[choice] == "evalues") plot.MclustDR.evalues(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "scatterplot")) plot.MclustDR.scatterplot(...) if(any(what == "pairs")) plot.MclustDR.pairs(...) if(any(what == "contour")) plot.MclustDR.contour(...) if(any(what == "classification" & object$type == "Mclust")) plot.MclustDR.classification.Mclust(...) if(any(what == "classification" & (object$type == "EDDA" | object$type == "MclustDA"))) plot.MclustDR.classification.MclustDA(...) if(any(what == "boundaries" & object$type == "Mclust")) plot.MclustDR.boundaries.Mclust(...) if(any(what == "boundaries" & (object$type == "EDDA" | object$type == "MclustDA"))) plot.MclustDR.boundaries.MclustDA(...) if(any(what == "density")) plot.MclustDR.density(...) if(any(what == "evalues")) plot.MclustDR.evalues(...) } invisible() } plotEvalues.MclustDR <- function(x, numdir, plot = FALSE, legend = TRUE, ylim, ...) 
{ object <- x G <- object$G f <- object$pro lambda <- object$lambda # dim <- if(missing(numdir)) seq(object$numdir) else seq(numdir) if(missing(numdir)) numdir <- object$numdir dim <- seq(numdir) d <- length(dim) par <- projpar.MclustDR(object, dim = dim, center = TRUE, raw = TRUE) mu <- par$mean Sigma.G <- par$variance # M1 <- t(mu) %*% diag(f) %*% mu l1 <- 2*lambda*diag(crossprod(M1)) # S <- matrix(0, d, d) for(j in seq(G)) S <- S + f[j]*Sigma.G[,,j] M2 <- matrix(0, d, d) for(j in 1:G) { C <- (Sigma.G[,,j]-S) M2 <- M2 + f[j] * tcrossprod(C) } l2 <- 2*(1-lambda)*diag(M2) # l <- object$evalues[dim] # if(plot) { if(missing(ylim)) ylim <- range(0, max(l)+diff(range(l))*0.05) plot(dim, l, type="b", lty = 1, pch = 16, cex = 1.5, xaxt = "n", ylim = ylim, xlab = "MclustDR directions", ylab = "Eigenvalues", panel.first = { abline(v = dim, col = "lightgray", lty = "dotted") abline(h = axTicks(2,par("yaxp")), col = "lightgray", lty = "dotted") } ) axis(1, at = dim, labels = dim) lines(dim, l1, type="b", lty = 2, pch = 22, cex = 1.5) lines(dim, l2, type="b", lty = 2, pch = 2, cex = 1.5) if(legend) { legend("topright", lty = c(1,2,2), pch = c(16,22,2), legend = c("Eigenvalues", "Means contrib.", "Vars contrib."), bg = ifelse(par("bg")=="transparent", "white", par("bg")), inset = 0.01, pt.cex = 1.5) } } out <- list(dim = dim, evalues = l, mean.contrib = l1, var.contrib = l2) if(plot) invisible(out) else return(out) } # Auxiliary functions ----------------------------------------------------- mvdnorm <- function(x, mu, sigma, log = FALSE, tol = sqrt(.Machine$double.eps)) { if(is.vector(x)) { x <- matrix(x, ncol = length(x)) } else { x <- as.matrix(x) } SVD <- svd(sigma) pos <- (SVD$d > max(tol*SVD$d[1], 0)) # in case of not full rank covar matrix inv.sigma <- SVD$v[,pos,drop=FALSE] %*% (1/SVD$d[pos] * t(SVD$u[,pos,drop=FALSE])) z <- mahalanobis(x, center = mu, cov = inv.sigma, inverted = TRUE) # logdet <- sum(log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values)) 
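# Note (added for reference; a restatement of the computation below, not a
# change in behavior): the value returned is the standard multivariate
# Gaussian log-density,
#   log N(x; mu, Sigma) = -( d*log(2*pi) + log|Sigma| + z ) / 2,
# where z is the Mahalanobis distance computed above and log|Sigma| is taken
# from the retained singular values, i.e. the log pseudo-determinant when
# Sigma is numerically rank deficient.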
logdet <- sum(log(SVD$d[pos])) logdens <- -(ncol(x) * log(2 * pi) + logdet + z)/2 if(log) return(logdens) else return(exp(logdens)) } ellipse <- function(c, M, r, npoints = 100) { # Returns the cartesian coordinates of points x on the ellipse # (x-c)'M(x-c) = r^2, # where x = x(theta) and theta varies from 0 to 2*pi radians in npoints steps. # local functions circle <- function(theta, r) r*c(cos(theta),sin(theta)) ellip <- function(theta, r, lambda) lambda*circle(theta, r) point <- function(theta) c+c(gamma %*% ellip(theta, r, lam)) # SVD <- svd(M) lam <- 1/sqrt(SVD$d) gamma <- SVD$v coord <- t(sapply(seq(0, 2*pi, length=npoints), function(th) point(th))) return(coord) } eigen.decomp <- function(A, B) { # # Generalized eigenvalue decomposition of A with respect to B # # A generalized eigenvalue problem AV = BLV is said to be symmetric positive # definite if A is symmetric and B is positive definite. V is the matrix of # generalized eigenvectors, and L is the diagonal matrix of generalized # eigenvalues (Stewart, 2001, pag. 229-230). # # Properties: # V'AV = L # V'BV = I # # The algorithm implemented is described in Stewart (2001, pag. 234) and used # by Li (2000). # # References: # Li, K.C., 2000. High dimensional data analysis via the SIR-PHD approach, # Stewart, G.W., 2001. Matrix Algorithms: vol II Eigensystems, SIAM. 
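The properties stated in the comments of eigen.decomp() — V'AV = L and V'BV = I — can be checked numerically. The following self-contained sketch (illustrative only, not part of the package; all object names are local to the example) replays the same SVD-based algorithm on random matrices:

```r
# Illustrative check of the generalized eigendecomposition properties
# V'AV = L and V'BV = I (not part of the package).
set.seed(1)
p <- 3
A <- crossprod(matrix(rnorm(p*p), p, p))            # symmetric
B <- crossprod(matrix(rnorm(p*p), p, p)) + diag(p)  # positive definite
sv <- svd(B, nu = 0)
inv.sqrt.B <- sv$v %*% diag(1/sqrt(sv$d), p, p) %*% t(sv$v)
sv2 <- svd(t(inv.sqrt.B) %*% A %*% inv.sqrt.B, nu = 0)
V <- inv.sqrt.B %*% sv2$v   # generalized eigenvectors
L <- sv2$d                  # generalized eigenvalues
all.equal(diag(t(V) %*% A %*% V), L)                            # V'AV = L
all.equal(t(V) %*% B %*% V, diag(p), check.attributes = FALSE)  # V'BV = I
```

Both checks should return TRUE up to numerical tolerance.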
  svd <- svd(B, nu=0)
  p <- length(svd$d)
  # Computes inverse square root matrix such that:
  # t(inv.sqrt.B) %*% inv.sqrt.B = inv.sqrt.B %*% t(inv.sqrt.B) = solve(B)
  inv.sqrt.B <- svd$v %*% diag(1/sqrt(svd$d),p,p) %*% t(svd$v)
  # Compute B^(-1/2)' A B^(-1/2) = UDU'
  # evectors = B^(-1/2) U
  # evalues = D
  A <- t(inv.sqrt.B) %*% A %*% inv.sqrt.B
  svd <- svd(A, nu=0)
  list(l = svd$d, v = inv.sqrt.B %*% svd$v)
}

# Subset selection of GMMDR/GMMDRC directions -----------------------------

MclustDRsubsel <- function(object, G = 1:9,
                           modelNames = mclust.options("emModelNames"),
                           ...,
                           bic.stop = 0, bic.cutoff = 0,
                           mindir = 1,
                           verbose = interactive())
{
# Subset selection for GMMDR directions based on Bayes factors.
#
# object     = a MclustDR object
# G          = a vector of cluster sizes for searching
# modelNames = a vector of models for searching
# ...        = further arguments passed through Mclust/MclustDA
# bic.stop   = criterion to stop the search. If the maximal BIC difference is
#              less than bic.stop the algorithm stops.
#              Two typical values are:
#              0    = stops when the BIC difference becomes negative (default)
#              -Inf = stops when all directions have been selected
# bic.cutoff = select the simplest "best" model within bic.cutoff from the
#              maximum value achieved. Setting this to 0 (default) simply
#              selects the model with the largest BIC difference.
# mindir     = the minimum number of directions to be estimated
# verbose    = if 0 no trace info is shown; if 1 a trace of each step
#              of the search is printed; if 2 detailed trace info is shown.
  if(!inherits(object, "MclustDR"))
    stop("Not a 'MclustDR' object")
  hcUse <- mclust.options("hcUse")
  mclust.options("hcUse" = "VARS")
  on.exit(mclust.options("hcUse" = hcUse))
  mc <- match.call(expand.dots = TRUE)
  mc[[1]] <- switch(object$type,
                    "Mclust"   = as.name("MclustDRsubsel_cluster"),
                    "EDDA"     = as.name("MclustDRsubsel_classif"),
                    "MclustDA" = as.name("MclustDRsubsel_classif"),
                    stop("Not allowed 'MclustDR' type!"))
  eval(mc, parent.frame())
}

MclustDRsubsel_cluster <- function(object, G = 1:9,
                                   modelNames = mclust.options("emModelNames"),
                                   ...,
                                   bic.stop = 0, bic.cutoff = 0,
                                   mindir = 1,
                                   verbose = interactive())
{
  drmodel <- object
  mclustType <- drmodel$type
  lambda <- drmodel$lambda
  numdir <- drmodel$numdir
  numdir0 <- numdir+1
  dir <- drmodel$dir[,seq(numdir),drop=FALSE]
  mindir <- max(1, as.numeric(mindir), na.rm = TRUE)
  verbose <- as.numeric(verbose)
  ncycle <- 0
  while(numdir < numdir0)
  {
    ncycle <- ncycle+1
    if(verbose > 0) cat("\nCycle", ncycle, "...\n")
    out <- MclustDRsubsel1cycle(drmodel, G, modelNames,
                                bic.stop = bic.stop, bic.cutoff = bic.cutoff,
                                verbose = if(verbose > 1) TRUE else FALSE)
    if(verbose > 0) { cat("\n"); print(out$tab) }
    mod <- do.call("Mclust", list(data = dir[,out$subset,drop=FALSE], G = G,
                                  modelNames = if(length(out$subset) > 1) modelNames
                                               else c("E", "V"),
                                  verbose = FALSE, ...))
    numdir0 <- numdir
    drmodel0 <- MclustDR(mod, lambda = lambda)
    if(drmodel0$numdir < mindir) break
    drmodel <- drmodel0
    numdir <- drmodel$numdir
    dir <- drmodel$dir[,seq(numdir),drop=FALSE]
  }
  # format object using original data
  obj <- drmodel
  obj$basisx <- MclustDRrecoverdir(obj, data = object$x, std = FALSE)
  obj$std.basisx <- MclustDRrecoverdir(obj, data = object$x, std = TRUE)
  class(obj) <- c("MclustDRsubsel", class(obj))
  return(obj)
}

MclustDRsubsel1cycle <- function(object, G = 1:9,
                                 modelNames = mclust.options("emModelNames"),
                                 bic.stop = 0, bic.cutoff = 0,
                                 verbose = interactive())
{
# Single cycle of subset selection for GMMDR directions based on Bayes factors.
if(class(object) != "MclustDR") stop("Not a 'MclustDR' object") d <- object$numdir dir <- object$dir[,seq(d),drop=FALSE] n <- nrow(dir) if(is.null(colnames(dir))) colnames(dir) = paste("[,", 1:d, "]", sep="") dir.names <- colnames(dir) BIC <- Model1 <- Model2 <- tab <- NULL; Model1$bic <- 0 bic.stop <- bic.stop + sqrt(.Machine$double.eps) bic.cutoff <- bic.cutoff + sqrt(.Machine$double.eps) inc <- NULL; excl <- seq(1,d) model1D <- if(any(grep("V", modelNames))) c("E", "V") else "E" # hskip <- paste(rep(" ",4),collapse="") if(verbose) cat("\n", hskip, "Start greedy search...\n", sep="") while(length(excl)>0) { if(verbose) { cat(hskip, rep("-",15), "\n", sep="") cat(paste(hskip, "Step", length(inc)+1, "\n")) } for(j in excl) { # Select simplest model with smallest num. of components # within bic.cutoff bic <- mclustBIC(dir[,sort(c(inc, j)),drop=FALSE], G = G, modelNames = if(length(inc)>0) modelNames else model1D, verbose = FALSE) bic.tab <- (as.matrix(max(bic, na.rm=TRUE) - bic) < bic.cutoff)*1 bestG <- which(rowSums(bic.tab, na.rm=TRUE) > 0)[1] bestmod <- which(bic.tab[bestG,,drop=FALSE] > 0)[1] out <- data.frame(Variable = dir.names[j], model = colnames(bic.tab)[bestmod], G = G[bestG], bic = c(bic[bestG,bestmod]), bic.diff = c(bic[bestG,bestmod] - Model1$bic - MclustDRBICreg(dir[,j], dir[,inc])) ) # Model2 <- rbind(Model2, out) } if(verbose) print(cbind(" " = hskip, Model2), row.names = FALSE) # stop if max BIC difference is < than cut-off bic.stop if(max(Model2$bic.diff) < bic.stop & length(inc) > 0) { break } # otherwise keep selecting i <- which.max(Model2$bic.diff) inc <- append(inc, excl[i]) excl <- setdiff(excl, excl[i]) tab <- rbind(tab, Model2[i,]) Model1 <- Model2[i,] Model2 <- NULL } rownames(tab) <- 1:nrow(tab) colnames(tab) <- c("Variable", "Model", "G", "BIC", "BIC.dif") subsets <- sapply(1:nrow(tab), function(x) list(inc[1:x])) # return(list(subset = subsets[[length(subsets)]], tab = tab)) } MclustDRsubsel_classif <- function(object, G = 1:9, 
modelNames = mclust.options("emModelNames"), ..., bic.stop = 0, bic.cutoff = 0, mindir = 1, verbose = interactive()) { drmodel <- object mclustType <- drmodel$type lambda <- drmodel$lambda numdir <- drmodel$numdir numdir0 <- numdir+1 dir <- drmodel$dir[,seq(numdir),drop=FALSE] mindir <- max(1, as.numeric(mindir), na.rm = TRUE) verbose <- as.numeric(verbose) ncycle <- 0 while(numdir < numdir0) { ncycle <- ncycle+1 if(verbose > 0) cat("\nCycle", ncycle, "...\n") out <- MclustDRCsubsel1cycle(drmodel, G, modelNames, bic.stop = bic.stop, bic.cutoff = bic.cutoff, verbose = if(verbose > 1) TRUE else FALSE) if(verbose > 0) { cat("\n"); print(out$tab) } mod <- do.call("MclustDA", list(data = dir[,out$subset,drop=FALSE], class = object$class, G = G, modelNames = if(length(out$subset) > 1) modelNames else if(any(grep("V", modelNames))) c("E", "V") else "E", modelType = mclustType, verbose = FALSE, ...)) numdir0 <- numdir drmodel0 <- MclustDR(mod, lambda = lambda) if(drmodel0$numdir < mindir) break drmodel <- drmodel0 numdir <- drmodel$numdir dir <- drmodel$dir[,seq(numdir),drop=FALSE] } # format object using original data obj <- drmodel obj$basisx <- MclustDRrecoverdir(obj, data = object$x, std = FALSE) obj$std.basisx <- MclustDRrecoverdir(obj, data = object$x, std = TRUE) class(obj) <- c("MclustDRsubsel", class(obj)) return(obj) } MclustDRCsubsel1cycle <- function(object, G = 1:9, modelNames = mclust.options("emModelNames"), bic.stop = 0, bic.cutoff = 0, verbose = TRUE) { # Single cycle of subset selection for GMMDRC directions based on bayes factors. 
  if(!inherits(object, "MclustDR"))
    stop("Not a 'MclustDR' object")
  d <- object$numdir
  dir <- object$dir[,seq(d),drop=FALSE]
  n <- nrow(dir)
  if(is.null(colnames(dir)))
    colnames(dir) <- paste("[,", seq(d), "]", sep="")
  dir.names <- colnames(dir)
  BIC <- Model1 <- Model2 <- tab <- NULL; Model1$bic <- 0
  bic.stop <- bic.stop + sqrt(.Machine$double.eps)
  bic.cutoff <- bic.cutoff + sqrt(.Machine$double.eps)
  inc <- NULL; excl <- seq(d)
  model1D <- if(any(grep("V", modelNames))) c("E", "V") else "E"
  #
  hskip <- paste(rep(" ",4),collapse="")
  if(verbose) cat("\n", hskip, "Start greedy search...\n", sep="")
  while(length(excl)>0)
  {
    if(verbose)
      { cat(hskip, rep("-",15), "\n", sep="")
        cat(paste(hskip, "Step", length(inc)+1, "\n")) }
    for(j in excl)
    { # Select simplest model with smallest num. of components
      # within bic.cutoff
      mod <- MclustDA(dir[,sort(c(inc, j)),drop=FALSE],
                      class = object$class, G = G,
                      modelNames = if(length(inc)>0) modelNames else model1D,
                      modelType = object$type, verbose = FALSE)
      out <- data.frame(Variable = dir.names[j],
                        model = paste(sapply(mod$models, function(m) m$modelName),collapse="|"),
                        G = paste(sapply(mod$models, function(m) m$G),collapse="|"),
                        bic = mod$bic,
                        bic.diff = c(mod$bic -
                                     # (Model1$bic + bic.reg(z2, z1))
                                     Model1$bic - MclustDRBICreg(dir[,j], dir[,inc]))
                        )
      #
      Model2 <- rbind(Model2, out)
    }
    if(verbose) print(cbind(" " = hskip, Model2), row.names = FALSE)
    # stop if the max BIC difference is less than the cut-off bic.stop
    if(max(Model2$bic.diff) < bic.stop & length(inc) > 0)
      { break }
    # otherwise keep selecting
    i <- which.max(Model2$bic.diff)
    inc <- append(inc, excl[i])
    excl <- setdiff(excl, excl[i])
    tab <- rbind(tab, Model2[i,])
    Model1 <- Model2[i,]
    Model2 <- NULL
  }
  rownames(tab) <- 1:nrow(tab)
  colnames(tab) <- c("Variable", "Model", "G", "BIC", "BIC.dif")
  subsets <- sapply(1:nrow(tab), function(x) list(inc[1:x]))
  #
  return(list(subset = subsets[[length(subsets)]], tab = tab))
}

# BICreg <- function(y, x)
# {
#   n <- length(y)
#   mod <- lm.fit(cbind(rep(1,n), x), y)
#   rss <- sum(mod$residuals^2)
#   -n*log(2*pi) - n*log(rss/n) - n - (n - mod$df.residual + 1) * log(n)
# }

MclustDRBICreg <- function(y, x, stepwise = TRUE)
{
  x <- as.matrix(x)
  y <- as.vector(y)
  n <- length(y)
  mod0 <- lm(y ~ 1)
  if(ncol(x) >= 1)
    { mod1 <- lm(y ~ 1+x)
      if(stepwise)
        { mod <- step(mod0, k = log(n), trace = 0,
                      scope = list(lower = formula(mod0),
                                   upper = formula(mod1)),
                      direction = "forward") }
      else mod <- mod1
    }
  else mod <- mod0
  rss <- sum(mod$residuals^2)
  p <- (n - mod$df.residual + 1)
  -n*log(2*pi) - n*log(rss/n) - n - p*log(n)
}

normalize <- function(x)
{
  # Normalize the vector x to have unit length
  x <- as.vector(x)
  x <- x/sqrt(as.vector(crossprod(x)))
  return(x)
}

MclustDRrecoverdir <- function(object, data, normalized = TRUE, std = FALSE)
{
  # Recover the coefficients of the linear combination defining the MclustDR
  # directions. This is useful if the directions are obtained from other
  # directions.
  if(!inherits(object, "MclustDR"))
    stop("object must be of class 'MclustDR'")
  if(missing(data)) x <- object$x
  else              x <- as.matrix(data)
  x <- scale(x, center=TRUE, scale=FALSE)
  numdir <- object$numdir
  dir <- object$dir[,seq(numdir),drop=FALSE]
  # B <- as.matrix(coef(lm(dir ~ x)))[-1,,drop=FALSE] # ok but old
  B <- qr.solve(x, dir)
  if(std)
    { sdx <- apply(x, 2, sd)
      B <- apply(B, 2, function(x) x*sdx) }
  if(normalized)
    { B <- as.matrix(apply(B, 2, normalize)) }
  rownames(B) <- colnames(x)
  return(B)
}

## Define print and summary methods for showing basis coefs
## in the original scale of variables

print.MclustDRsubsel <- function(x, ...)
{
  x$basis <- x$basisx
  class(x) <- class(x)[2]
  NextMethod()
}

summary.MclustDRsubsel <- function(object, ...)
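MclustDRBICreg() returns a BIC on the 2*logLik - npar*log(n) scale, counting the error variance among the parameters. For a fixed fitted linear model this is just the negative of stats::BIC(). The following standalone sketch (illustrative only, not part of the package) checks the formula:

```r
# Illustrative check (not part of the package): the regression BIC used here
# equals -BIC() from the stats package for a fixed fitted model.
set.seed(1)
n <- 40
x <- rnorm(n)
y <- 1 + 2*x + rnorm(n)
mod <- lm(y ~ x)
rss <- sum(mod$residuals^2)
p <- n - mod$df.residual + 1   # coefficients plus the error variance
bic.val <- -n*log(2*pi) - n*log(rss/n) - n - p*log(n)
all.equal(bic.val, -BIC(mod))
```

The identity holds because stats counts the error variance in the degrees of freedom of logLik.lm as well.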
{
  object$basis <- object$basisx
  object$std.basis <- object$std.basisx
  class(object) <- class(object)[2]
  NextMethod()
}

## mclust/R/bootstrap.R

##
## Resampling methods
##

#
# Bootstrap Likelihood Ratio Test
#

mclustBootstrapLRT <- function(data, modelName = NULL, nboot = 999, level = 0.05,
                               maxG = NULL, verbose = interactive(), ...)
{
  if(is.null(modelName))
    stop("A 'modelName' must be provided. Please see help(mclustModelNames) which describes the available models.")
  modelName <- modelName[1]
  if(is.null(maxG))
    G <- seq.int(1, 9)
  else
    { maxG <- as.numeric(maxG); G <- seq.int(1, maxG+1) }
  Bic <- mclustBIC(data, G = G, modelNames = modelName,
                   warn = FALSE, verbose = FALSE, ...)
  if(!(modelName %in% attr(Bic, "modelNames")))
    stop("'modelName' not compatible with data. Please see help(mclustModelNames) which describes the available models.")
  if(all(is.na(Bic)))
    stop(paste("no model", modelName, "can be fitted."))
  # select only models that can be fit
  G <- which(!is.na(Bic[, attr(Bic, "modelNames") == modelName]))
  if(verbose)
    { cat("bootstrapping LRTS ...\n")
      flush.console()
      pbar <- txtProgressBar(min = 0, max = (max(G)-1)*nboot, style = 3)
      on.exit(close(pbar)) }
  obsLRTS <- p.value <- vector("numeric", length = max(G)-1)
  bootLRTS <- matrix(as.double(NA), nrow = nboot, ncol = max(G)-1)
  g <- 0; continue <- TRUE
  while(g < (max(G)-1) & continue)
  { g <- g + 1
    # fit model under H0
    Mod0 <- summary(Bic, data, G = g, modelNames = modelName)
    # fit model under H1
    Mod1 <- summary(Bic, data, G = g+1, modelNames = modelName)
    # observed LRTS
    obsLRTS[g] <- 2*(Mod1$loglik - Mod0$loglik)
    # bootstrap
    b <- 0
    while(b < nboot)
    { b <- b + 1
      # generate 'parametric' bootstrap sample under H0
      bootSample <- sim(Mod0$modelName, Mod0$parameters, n = Mod0$n)
      # fit model under H0
      bootMod0 <- em(data = bootSample[,-1], modelName = Mod0$modelName,
                     parameters = Mod0$parameters, warn = FALSE, ...)
# fit model under H1 bootMod1 <- em(data = bootSample[,-1], modelName = Mod1$modelName, parameters = Mod1$parameters, warn = FALSE, ...) # compute bootstrap LRT LRTS <- 2*(bootMod1$loglik - bootMod0$loglik) if(is.na(LRTS)) { b <- b - 1; next() } bootLRTS[b,g] <- LRTS if(verbose) setTxtProgressBar(pbar, (g-1)*nboot+b) } p.value[g] <- (1 + sum(bootLRTS[,g] >= obsLRTS[g]))/(nboot+1) # check if not-significant when no maxG is provided if(is.null(maxG) & p.value[g] > level) { continue <- FALSE if(verbose) setTxtProgressBar(pbar, (max(G)-1)*nboot) } } out <- list(G = 1:g, modelName = modelName, obs = obsLRTS[1:g], boot = bootLRTS[,1:g], p.value = p.value[1:g]) class(out) <- "mclustBootstrapLRT" return(out) } print.mclustBootstrapLRT <- function(x, ...) { cat("Bootstrap sequential LRT for the number of mixture components\n") cat(rep("-", 61), "\n", sep = "") cat(formatC("Model", flag = "-", width = 12), "=", x$modelName, "\n") cat(formatC("Replications", flag = "-", width = 12), "=", nrow(x$boot), "\n") df <- data.frame(x$obs, x$p.value) colnames(df) <- c("LRTS", "bootstrap p-value") rownames(df) <- formatC(paste(x$G, "vs", x$G+1), flag = "-", width = 8) print(df, ...) } plot.mclustBootstrapLRT <- function(x, G = 1, hist.col = "grey", hist.border = "lightgrey", breaks = "Scott", col = "forestgreen", lwd = 2, lty = 3, main = NULL, ...) 
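mclustBootstrapLRT() computes the bootstrap p-value as (1 + #{LRTS* >= LRTS_obs})/(nboot + 1), which keeps the estimate strictly positive even when no bootstrap statistic exceeds the observed one. A minimal standalone sketch of the estimator (illustrative stand-in values, not package output):

```r
# Illustrative sketch of the bootstrap p-value estimator (not package code).
set.seed(1)
obs  <- 3.2                    # stand-in for the observed LRTS
boot <- rchisq(999, df = 2)    # stand-in for the bootstrap LRTS replicates
p.value <- (1 + sum(boot >= obs)) / (length(boot) + 1)
# strictly positive by construction, even if sum(boot >= obs) == 0
```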
{ if(!any(G == x$G)) { warning(paste("bootstrap LRT not available for G =", G)) return() } G <- as.numeric(G)[1] h <- hist(x$boot[,G], breaks = breaks, plot = FALSE) xlim <- range(h$breaks, x$boot[,G], x$obs[G]*1.1, na.rm = TRUE) xlim <- c(xlim[1] - diff(xlim) * 0.1, xlim[2] + diff(xlim) * 0.1) plot(h, xlab = "LRTS", freq = FALSE, xlim = xlim, col = hist.col, border = hist.border, main = NULL) abline(v = x$obs[G], lty = lty, lwd = lwd, col = col) if(is.null(main) | is.character(main)) { if(is.null(main)) main <- paste("Bootstrap LRT for model", x$modelName, "with", G, "vs", G+1, "components") title(main = main, cex.main = 1) } invisible() } # # Bootstrap inference (standard errors and percentile confidence intervals) # MclustBootstrap <- function(object, nboot = 999, type = c("bs", "wlbs", "jk"), verbose = interactive(), ...) { if(!any(class(object) %in% c("Mclust", "densityMclust"))) stop("object must be of class 'Mclust' or 'densityMclust'") if(any(type %in% c("nonpara", "wlb"))) { type <- gsub("nonpara", "bs", type) type <- gsub("wlb", "wlbs", type) warning("resampling type converted to \"", type, "\"") } type <- match.arg(type, choices = eval(formals(MclustBootstrap)$type)) data <- object$data n <- object$n d <- object$d G <- object$G if(type == "jk") nboot <- n varnames <- rownames(object$parameters$mean) # model parameters par <- summary(object)[c("pro", "mean", "variance")] if(d == 1) { par$mean <- array(par$mean, dim = c(d, G)) par$variance <- array(par$variance, dim = c(d, d, G)) } # boostrapped parameters pro.boot <- array(NA, c(nboot,G), dimnames = list(NULL, seq.int(G))) mean.boot <- array(NA, c(nboot,d,G), dimnames = list(NULL, varnames, seq.int(G))) var.boot <- array(NA, c(nboot,d,d,G), dimnames = list(NULL, varnames, varnames, seq.int(G))) if(verbose) { cat("resampling ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = nboot, style = 3) on.exit(close(pbar)) } b <- nonfit <- 0 while(b < nboot) { b <- b + 1 obj <- object switch(type, "bs" = 
{ idx <- sample(seq_len(n), size = n, replace = TRUE) obj$data <- object$data[idx,] obj$z <- object$z[idx,] obj$warn <- FALSE mod.boot <- try(do.call("me", obj), silent = TRUE) }, "wlbs" = { w <- rexp(n) # w <- w/mean(w) w <- w/max(w) mod.boot <- try(do.call("me.weighted", c(list(weights = w, warn = FALSE), obj)), silent = TRUE) }, "jk" = { idx <- seq_len(n)[-b] obj$data <- object$data[idx,] obj$z <- object$z[idx,] obj$warn <- FALSE mod.boot <- try(do.call("me", obj), silent = TRUE) } ) # check model convergence if(inherits(mod.boot, "try-error")) { if(type != "jk") b <- b - 1 nonfit <- nonfit + 1 next() } if(is.na(mod.boot$loglik)) { if(type != "jk") b <- b - 1 nonfit <- nonfit + 1 next() } if(type == "jk") { # pseudovalues ... # pro.boot[b,] <- n*par$pro - (n-1)*mod.boot$parameters$pro # mean.boot[b,,] <- n*par$mean - (n-1)*mod.boot$parameters$mean # var.boot[b,,,] <- n*par$variance - (n-1)*mod.boot$parameters$variance$sigma pro.boot[b,] <- mod.boot$parameters$pro mean.boot[b,,] <- mod.boot$parameters$mean var.boot[b,,,] <- mod.boot$parameters$variance$sigma } else { # bootstrap values pro.boot[b,] <- mod.boot$parameters$pro mean.boot[b,,] <- mod.boot$parameters$mean var.boot[b,,,] <- mod.boot$parameters$variance$sigma } if(verbose) setTxtProgressBar(pbar, b) } out <- list(G = G, modelName = object$modelName, parameters = par, nboot = nboot, type = type, nonfit = nonfit, pro = pro.boot, mean = mean.boot, variance = var.boot) class(out) <- "MclustBootstrap" return(out) } print.MclustBootstrap <- function(x, digits = getOption("digits"), ...) { cat("\'", class(x)[1], "\' model object:\n", sep = "") str(x,1) invisible() } summary.MclustBootstrap <- function(object, what = c("se", "ci"), conf.level = 0.95, ...) 
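For type = "jk", summary.MclustBootstrap() rescales the standard deviation of the leave-one-out estimates by (n-1)/sqrt(n) to obtain jackknife standard errors. For the sample mean this reduces to the familiar sd(x)/sqrt(n); the following standalone sketch (illustrative only, not part of the package) verifies the identity:

```r
# Illustrative check of the jackknife SE rescaling (not package code).
set.seed(1)
x <- rnorm(50)
n <- length(x)
theta.jk <- sapply(seq_len(n), function(i) mean(x[-i]))  # leave-one-out means
se.jk <- sd(theta.jk) * (n - 1)/sqrt(n)  # same rescaling as in the package code
all.equal(se.jk, sd(x)/sqrt(n))
```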
{ what <- match.arg(what, choices = c("se", "ci")) dims <- dim(object$mean) varnames <- dimnames(object$mean)[[2]] nboot <- dims[1] d <- dims[2] G <- dims[3] if(what == "se") { out <- list(pro = apply(object$pro, 2, sd, na.rm=TRUE), mean = apply(object$mean, c(2,3), sd, na.rm=TRUE), variance = apply(object$variance, c(2,3,4), sd, na.rm=TRUE)) if(object$type == "jk") out <- lapply(out, function(x) sqrt(x^2*(nboot-object$nonfit-1)^2/(nboot-object$nonfit))) } else if(what == "ci") { levels <- c((1-conf.level)/2, (1 + conf.level)/2) if(object$type == "jk") { # bias-corrected ci based on normal-approximation ave <- list(pro = apply(object$pro, 2, mean, na.rm=TRUE), mean = apply(object$mean, c(2,3), mean, na.rm=TRUE), variance = t(sapply(seq.int(d), function(j) apply(object$variance[,j,j,], 2, mean, na.rm=TRUE), simplify = "array"))) se <- list(pro = apply(object$pro, 2, sd, na.rm=TRUE), mean = apply(object$mean, c(2,3), sd, na.rm=TRUE), variance = t(sapply(seq.int(d), function(j) apply(object$variance[,j,j,], 2, sd, na.rm=TRUE), simplify = "array"))) se <- lapply(se, function(x) sqrt(x^2*(nboot-object$nonfit-1)^2/(nboot-object$nonfit))) zq <- qnorm(max(levels)) lnames <- paste0(formatC(levels * 100, format = "fg", width = 1, digits = getOption("digits")), "%") # the code above mimic stats:::format_perc(levels) which can't be used # because format_perc is not exported from stats out <- list(pro = array(as.double(NA), c(2,G), dimnames = list(lnames, 1:G)), mean = array(as.double(NA), dim = c(2,d,G), dimnames = list(lnames, 1:d, 1:G)), variance = array(as.double(NA), dim = c(2,d,G), dimnames = list(lnames, 1:d, 1:G))) out$pro[1,] <- ave$pro - zq*se$pro out$pro[2,] <- ave$pro + zq*se$pro out$mean[1,,] <- ave$mean - zq*se$mean out$mean[2,,] <- ave$mean + zq*se$mean out$variance[1,,] <- ave$variance - zq*se$variance out$variance[2,,] <- ave$variance + zq*se$variance } else { # percentile-based ci out <- list(pro = apply(object$pro, 2, quantile, probs = levels, na.rm=TRUE), 
mean = apply(object$mean, c(2,3), quantile, probs = levels, na.rm=TRUE)) v <- array(as.double(NA), dim = c(2,d,G), dimnames = dimnames(out$mean)) for(j in seq.int(d)) v[,j,] <- apply(object$variance[,j,j,], 2, quantile, probs = levels, na.rm=TRUE) out$variance <- v } } obj <- append(object[c("modelName", "G", "nboot", "type")], list(d = d, what = what)) if(what == "ci") obj$conf.level <- conf.level obj <- append(obj, out) class(obj) <- "summary.MclustBootstrap" return(obj) } # summary.MclustBootstrap <- function(object, what = c("se", "ci"), conf.level = 0.95, ...) # { # what <- match.arg(what, several.ok = FALSE) # dims <- dim(object$mean) # varnames <- dimnames(object$mean)[[2]] # nboot <- dims[1] # d <- dims[2] # G <- dims[3] # # if(what == "se") # { out <- list(pro = apply(object$pro, 2, sd), # mean = apply(object$mean, c(2,3), sd), # variance = apply(object$variance, c(2,3,4), sd)) # } else # if(what == "ci") # { levels <- c((1-conf.level)/2, (1 + conf.level)/2) # out <- list(pro = apply(object$pro, 2, quantile, probs = levels), # mean = apply(object$mean, c(2,3), quantile, probs = levels)) # v <- array(as.double(NA), dim = c(2,d,G), # dimnames = dimnames(out$mean)) # for(j in seq.int(d)) # v[,j,] <- apply(object$variance[,j,j,], 2, quantile, probs = levels) # out$variance <- v # } # # obj <- append(object[c("modelName", "G", "nboot", "type")], # list(d = d, what = what)) # if(what == "ci") obj$conf.level <- conf.level # obj <- append(obj, out) # class(obj) <- "summary.MclustBootstrap" # return(obj) # } print.summary.MclustBootstrap <- function(x, digits = getOption("digits"), ...) { cat(rep("-", 58), "\n", sep="") cat("Resampling", if(x$what == "se") "standard errors" else "confidence intervals", "\n") cat(rep("-", 58), "\n", sep="") # cat(formatC("Model", flag = "-", width = 26), "=", x$modelName, "\n") cat(formatC("Num. 
of mixture components", flag = "-", width = 26), "=", x$G, "\n") cat(formatC("Replications", flag = "-", width = 26), "=", x$nboot, "\n") cat(formatC("Type", flag = "-", width = 26), "=", switch(x$type, "bs" = "nonparametric bootstrap", "wlbs" = "weighted likelihood bootstrap", "jk" = "jackknife"), "\n") if(x$what == "ci") cat(formatC("Confidence level", flag = "-", width = 26), "=", x$conf.level, "\n") # cat("\nMixing probabilities:\n") print(x$pro, digits = digits) # cat("\nMeans:\n") if(x$d == 1) { if(x$what == "se") print(x$mean[1,], digits = digits) else print(x$mean[,1,], digits = digits) } else if(x$what == "se") print(x$mean, digits = digits) else { for(g in seq.int(x$G)) { cat("[,,", g, "]\n", sep = "") print(x$mean[,,g], digits = digits) } } # cat("\nVariances:\n") if(x$d == 1) { print(x$variance[,1,], digits = digits) } else { for(g in seq.int(x$G)) { cat("[,,", g, "]\n", sep = "") print(x$variance[,,g], digits = digits) } } invisible(x) } # print.summary.MclustBootstrap <- function(x, digits = getOption("digits"), ...) # { # cat(rep("-", 58),"\n",sep="") # if(x$what == "se") # cat("Bootstrap standard errors\n") # else # cat("Bootstrap confidence intervals\n") # cat(rep("-", 58),"\n",sep="") # cat(formatC("Model", flag = "-", width = 26), "=", x$modelName, "\n") # cat(formatC("Num. 
of mixture components", flag = "-", width = 26), # "=", x$G, "\n") # cat(formatC("Replications", flag = "-", width = 26), "=", x$nboot, "\n") # cat(formatC("Type", flag = "-", width = 26), "=", # ifelse(x$type == "nonpara", "nonparametric bootstrap", # "weighted likelihood bootstrap"), "\n") # if(x$what == "ci") # cat(formatC("Confidence level", flag = "-", width = 26), # "=", x$conf.level, "\n") # # cat("\nMixing probabilities:\n") # print(x$pro, digits = digits) # # # cat("\nMeans:\n") # if(x$d == 1) # { if(x$what == "se") print(x$mean[1,], digits = digits) # else print(x$mean[,1,], digits = digits) # } else # if(x$what == "se") print(x$mean, digits = digits) # else { for(g in seq.int(x$G)) # { cat("[,,", g, "]\n", sep = "") # print(x$mean[,,g], digits = digits) } # } # # # cat("\nVariances:\n") # if(x$d == 1) # { print(x$variance[,1,], digits = digits) } # else # { for(g in seq.int(x$G)) # { cat("[,,", g, "]\n", sep = "") # print(x$variance[,,g], digits = digits) } # } # # invisible(x) # } plot.MclustBootstrap <- function(x, what = c("pro", "mean", "var"), hist.col = "grey", hist.border = "lightgrey", breaks = "Sturges", col = "forestgreen", lwd = 2, lty = 3, xlab = NULL, xlim = NULL, ylim = NULL, ...) { object <- x # Argh. Really want to use object anyway what <- match.arg(what, choices = eval(formals(plot.MclustBootstrap)$what)) par <- object$parameters d <- dim(object$mean)[2] varnames <- rownames(par$mean) histBoot <- function(boot, stat, breaks, xlim, ylim, xlab, ...) { hist(boot, breaks = breaks, xlim = xlim, ylim = ylim, main = "", xlab = xlab, ylab = "", border = hist.border, col = hist.col) abline(v = stat, col = col, lwd = lwd, lty = lty) } switch(what, "pro" = { xlim <- range(if(is.null(xlim)) pretty(object$pro) else xlim) for(k in 1:object$G) histBoot(object$pro[,k], par$pro[k], breaks, xlim = xlim, ylim = ylim, xlab = paste(ifelse(is.null(xlab), "Mix. prop. 
for comp.", xlab), k)) }, "mean" = { isNull_xlim <- is.null(xlim) for(j in 1:d) { xlim <- range(if(isNull_xlim) pretty(object$mean[,j,]) else xlim) for(k in 1:object$G) histBoot(object$mean[,j,k], par$mean[j,k], breaks, xlim = xlim, ylim = ylim, xlab = paste(varnames[j], ifelse(is.null(xlab), "mean for comp.", xlab), k)) } }, "var" = { isNull_xlim <- is.null(xlim) for(j in 1:d) { xlim <- range(if(isNull_xlim) pretty(object$variance[,j,j,]) else xlim) for(k in 1:object$G) histBoot(object$variance[,j,j,k], par$variance[j,j,k], breaks, xlim = xlim, ylim = ylim, xlab = paste(varnames[j], ifelse(is.null(xlab), "var for comp.", xlab), k)) } } ) invisible() } mclust/R/mclustaddson.R0000644000176200001440000023632413056750002014615 0ustar liggesusers############################################################################## ### EVV model #### ############################################################################## emEVV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEVV(data, parameters = parameters, warn = warn)$z meEVV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meEVV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be in the form of a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("data and z should have the same row dimension")
  K <- dimz[2]
  if(!is.null(Vinv))
    { G <- K - 1
      if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) }
  else G <- K
  #
  if(all(is.na(z)))
    { WARNING <- "z is missing"
      if(warn) warning(WARNING)
      variance <- list(modelName = "EVV", d = p, G = G,
                       scale = NA, shape = rep(NA,p),
                       orientation = array(NA,c(p,p,G)))
      parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G),
                         variance=variance, Vinv=Vinv)
      return(structure(list(modelName="EVV", prior=prior, n=n, d=p, G=G,
                            z=z, parameters=parameters, control=control,
                            loglik=NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p))
  storage.mode(z) <- "double"
  #
  # MICHAEL from here------------------------------------------
  #
  # without prior specification
  if(is.null(prior))
    { temp <- .Fortran( "meevv",
                        x = as.double(data),
                        z = as.double(z),
                        n = as.integer(n),
                        p = as.integer(p),
                        G = as.integer(G),
                        Gnoise = as.integer(K),
                        mu = double(p*G),
                        O = double(p*p*G),
                        shape.o = double(p*p*G),
                        scale = double(G),
                        shape = double(p*G),
                        pro = double(K),
                        Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ),
                        loglik = double(1),
                        eqpro = as.logical(control$equalPro),
                        itmax = as.integer(control$itmax[1]),
                        tol = as.double(control$tol[1]),
                        eps = as.double(control$eps),
                        niterout = integer(1),
                        errout = double(1),
                        lwork = as.integer(lwork),
                        info = as.integer(0),
                        PACKAGE = "mclust")
    } else
    { # with prior
      priorParams <- do.call(prior$functionName,
                             c(list(data = data, G = G, modelName = "EVV"),
                               prior[names(prior) != "functionName"]))
      # temp <- .Fortran("meevvp", ...)
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(K), loglik = NA, eqpro = as.logical(control$equalPro), itmax = as.integer(control$itmax[1]), tol = as.double(control$tol[1]), eps = as.double(control$eps), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "EVV model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } # z <- matrix(temp$z, n,K) loglik <- temp$loglik mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale[1] shape <- matrix(temp$shape, p,G) O <- aperm( array(temp$O, c(p,p,G)), c(2,1,3) ) shape.o <- array( temp$shape.o, c(p,p,G) ) pro <- temp$pro niterout <- temp$niterout errout <- temp$errout lapackSVDinfo <- temp$info if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DGESVD" } if(warn) warning(WARNING) z[] <- O[] <- shape[] <- NA scale <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if( loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) shape[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "a z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- 
if(control$equalPro) -2 else -3 } else { # scale <- sum(scale)/n sigma <- scale * shape.o if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- list(iterations = niterout, error = errout) # info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "EVV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepEVV <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVV", d = p, G = G, scale = NA, shape = rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="EVV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), G) # # MICHAEL from here------------------------------------------------------- # # without prior 
specification if(is.null(prior)) { temp <- .Fortran( "msevv", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(G), lwork = as.integer(lwork), info = as.integer(0), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVV"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meevvp", ...) temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(G), lwork = as.integer(lwork), info = FALSE, eps = as.double(.Machine$double.eps)) WARNING <- "EVV model is not available with prior" if(warn) warning(WARNING) } # lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale[1] # lambda O <- aperm( array(temp$O, c(p,p,G)), c(2,1,3) ) shape.o <- array( temp$shape.o, c(p,p,G) ) shape <- matrix(temp$shape, p,G) pro <- temp$pro WARNING <- NULL # if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" ret <- -4 } else { WARNING <- "input error for LAPACK DGESVD" ret <- -5 } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) # } else if( any(abs(c(scale, shape)) > signif(.Machine$double.xmax, 6)) ) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- shape[] <- O[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { # scale <- sum(scale)/n # scale <- sum(scale)/sum(z) # lambda --> if noise, see help(mstep) sigma <- scale * shape.o ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- dimnames(shape) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], 
NULL) variance <- list(modelName = "EVV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } #### estepEVV <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EVV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here---------------------------------------------- # temp <- .Fortran( "esevv", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( aperm(O, c(2,1,3)) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps = 
as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EVV", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensEVV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EVV", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # temp <- .Fortran( "esevv", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( aperm(O, c(2,1,3)) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular 
covariance" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EVV", WARNING = WARNING, returnCode = ret) } ### simEVV <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EVV")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (dim(rtshape)[1] != d | dim(rtshape)[2] != G) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) for (k in 1:G) { m <- ctabel[k] sss <- rtscale * rtshape[,k] cholSigma <- t(parameters$variance$orientation[,,k]) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "EVV") } ############################################################################## ### VEE model #### ############################################################################## emVEE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVEE(data, parameters = parameters, warn = warn)$z meVEE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meVEE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEE", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="VEE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("mevee", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = double(p*G), C = double(p*p), U = double(p*p*G), scale = double(G), shape = double(p), pro = double(K), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = as.integer(0), PACKAGE = "mclust") # } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = 
G, modelName = "VEE"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meveep", ...) temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), C = double(p*p), U = double(p*p*G), scale = double(G), shape = double(p), pro = double(K), loglik = NA, eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "VEE model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } z <- matrix(temp$z, n,K) niterin <- temp$niterin errin <- temp$errin niterout <- temp$niterout errout <- temp$errout loglik <- temp$loglik lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape <- temp$shape shape.o <- matrix(temp$C, p,p) O <- if(any(is.nan(shape.o))) shape.o else svd(shape.o, nu = 0)$v pro <- temp$pro if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DPOTRI fails to converge" } else { WARNING <- "input error for LAPACK DPOTRF, DSYEV or DPOTRI" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - 
signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- sweep( array(shape.o, c(p,p,G)), 3, FUN = "*", STATS = scale ) if(niterin >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin ret <- 2 } else if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- structure(c(niterout = niterout, errout = errout), inner = c(niterin = niterin, errin = errin)) # info <- structure(c(iterations = its, error = err), # inner = c(iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "VEE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VEE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepVEE <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be a matrix or a vector")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("row dimension of z should equal data length")
  G <- dimz[2]
  if(all(is.na(z)))
    { WARNING <- "z is missing"
      if(warn) warning(WARNING)
      variance <- list(modelName = "VEE", d = p, G = G, scale = rep(NA,G),
                       shape = rep(NA,p), orientation = array(NA,c(p,p,G)))
      parameters <- list(pro = rep(NA,G), mean = matrix(NA,p,G),
                         variance = variance)
      return(structure(list(modelName = "VEE", prior = prior, n = n, d = p,
                            G = G, z = z, parameters = parameters,
                            control = control, loglik = NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  if(is.null(control)) control <- emControl()
  itmax <- if(length(control$itmax) == 1) control$itmax else control$itmax[2]
  tol <- if(length(control$tol) == 1) control$tol else control$tol[2]
  lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G)
  # without prior specification
  if(is.null(prior))
    { temp <- .Fortran("msvee",
                       x = as.double(data), z = as.double(z),
                       n = as.integer(n), p = as.integer(p),
                       G = as.integer(G),
                       mu = double(p*G), U = double(p*p*G), C = double(p*p),
                       scale = as.double(rep(1,G)), pro = double(G),
                       lwork = as.integer(lwork), info = as.integer(0),
                       itmax = as.integer(itmax), tol = as.double(tol),
                       niterin = integer(1), errin = double(1),
                       eps = as.double(.Machine$double.eps),
                       PACKAGE = "mclust")
    } else
    { # with prior: VEE is not available, return NA placeholders
      priorParams <- do.call(prior$functionName,
                             c(list(data = data, G = G, modelName = "VEE"),
                               prior[names(prior) != "functionName"]))
      # temp <- .Fortran("msveep", ...)
      temp <- list(x = data, z = z, n = n, p = p, G = G,
                   mu = double(p*G), U = double(p*p*G), C = double(p*p),
                   scale = double(G), pro = double(G),
                   lwork = as.integer(lwork), info = FALSE,
                   itmax = as.integer(itmax), tol = as.double(tol),
                   niterin = integer(1), errin = double(1),
                   eps = as.double(control$eps))
      WARNING <- "VEE model is not available with prior"
      if(warn) warning(WARNING)
    }
  lapackSVDinfo <- temp$info
  errin <- temp$errin
  niterin <- temp$niterin
  mu <- matrix(temp$mu, p,G)
  dimnames(mu) <- list(NULL, as.character(1:G))
  scale <- temp$scale
  shape.o <- matrix(temp$C, p,p)
  SVD <- svd(shape.o, nu = 0)
  shape <- SVD$d
  O <- SVD$v
  pro <- temp$pro
  WARNING <- NULL
  if(lapackSVDinfo)
    { if(lapackSVDinfo > 0)
        { WARNING <- "LAPACK DSYEV or DPOTRI fails to converge" }
      else
        { WARNING <- "input error for LAPACK DPOTRF, DSYEV or DPOTRI" }
      if(warn) warning(WARNING)
      O[] <- shape[] <- scale[] <- NA
      sigma <- array(NA, c(p, p, G))
      ret <- -9
    } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)))
    { WARNING <- "cannot compute M-step"
      if(warn) warning(WARNING)
      mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA
      sigma <- array(NA, c(p, p, G))
      ret <- -1
    } else
    { sigma <- sweep(array(shape.o, c(p,p,G)), 3, FUN = "*", STATS = scale)
      if(niterin >= itmax)
        { WARNING <- "inner iteration limit reached"
          if(warn) warning(WARNING)
          niterin <- - niterin
          ret <- 2
        } else ret <- 0
    }
  info <- c(iteration = niterin, error = errin)
  dimnames(z) <- list(dimnames(data)[[1]], NULL)
  dimnames(mu) <- list(dimnames(data)[[2]], NULL)
  dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]])
  dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]])
  variance <- list(modelName = "VEE", d = p, G = G, sigma = sigma,
                   scale = scale, shape =
shape, orientation = O)
  parameters <- list(pro = pro, mean = mu, variance = variance)
  structure(list(modelName = "VEE", prior = prior, n = n, d = p, G = G,
                 z = z, parameters = parameters, control = control),
            info = info, WARNING = WARNING, returnCode = ret)
}

###

estepVEE <- function(data, parameters, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  if(is.null(dimdat) || length(dimdat) != 2)
    stop("data must be a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  pro <- parameters$pro
  pro <- pro/sum(pro)
  l <- length(pro)
  mu <- as.matrix(parameters$mean)
  scale <- parameters$variance$scale
  shape <- parameters$variance$shape
  O <- parameters$variance$orientation
  G <- ncol(mu)
  noise <- l == G + 1
  if(!noise)
    { if(l != G)
        stop("pro improperly specified")
      K <- G
      Vinv <- NULL
    } else
    { K <- G + 1
      Vinv <- parameters$Vinv  # read Vinv from parameters, as in estepEVV/estepEVE
      if(is.null(Vinv) || Vinv <= 0)
        Vinv <- hypvol(data, reciprocal = TRUE)
    }
  if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) ||
     any(is.null(parameters[c("pro", "mean", "variance")])))
    { WARNING <- "parameters are missing"
      if(warn) warning(WARNING)
      z <- matrix(NA,n,K)
      dimnames(z) <- list(dimnames(data)[[1]], NULL)
      return(structure(list(modelName = "VEE", n = n, d = p, G = G, z = z,
                            parameters = parameters, loglik = NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if(is.null(parameters$variance$scale) ||
     is.null(parameters$variance$shape) ||
     is.null(parameters$variance$orientation))
    stop("variance parameters are missing")
  temp <- .Fortran("esvee",
                   x = as.double(data), z = double(n*K),
                   n = as.integer(n), p = as.integer(p),
                   G = as.integer(G), Gnoise = as.integer(K),
                   mu = as.double(mu), O = as.double(t(O)),
                   scale = as.double(scale), shape = as.double(shape),
                   pro = as.double(pro),
                   Vinv = as.double(if(is.null(Vinv)) -1 else Vinv),
                   loglik = double(1),
                   eps = as.double(.Machine$double.eps),
                   PACKAGE = "mclust")
  loglik <- temp$loglik
  z <- matrix(temp$z, n,K)
  WARNING
<- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VEE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensVEE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VEE", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvee", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- 
NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VEE", WARNING = WARNING, returnCode = ret) } ### simVEE <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VEE")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (length(rtshape) != d) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if (length(rtscale) != G) stop("scale incompatible with mean") for (k in 1:G) { m <- ctabel[k] sss <- rtscale[k] * rtshape cholSigma <- t(parameters$variance$orientation) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "VEE") } ############################################################################## ### EVE model #### ############################################################################## emEVE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEVE(data, parameters = parameters, warn = warn)$z meEVE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meEVE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
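# simVEE above draws from Sigma_k = lambda_k * O %*% diag(A) %*% t(O)
# (shared shape and orientation, component-specific volume) by scaling the
# rows of t(O) with sqrt(lambda_k * A) and right-multiplying standard
# normals. A short sketch with toy values (not package API) confirming
# that this factor reproduces Sigma_k:

```r
# Toy VEE parameters: shared shape/orientation, component-specific volume
p <- 2
shape <- c(2, 1/2)                        # shared shape, prod(A) = 1
theta <- pi/4
O <- cbind(c(cos(theta), sin(theta)), c(-sin(theta), cos(theta)))
scale_k <- 3                              # lambda_k for one component

# Factor used in simVEE: t(O) * sss scales row i of t(O) by sss[i],
# i.e. cholSigma == diag(sss) %*% t(O)
sss <- sqrt(scale_k) * sqrt(shape)
cholSigma <- t(O) * sss

# z %*% cholSigma with z ~ N(0, I) then has covariance
# t(cholSigma) %*% cholSigma == Sigma_k
Sigma_k <- scale_k * O %*% diag(shape) %*% t(O)
max(abs(crossprod(cholSigma) - Sigma_k))  # numerically zero
```

# Elementwise recycling is what makes `t(O) * sss` work here: sss has
# length p, so it multiplies down the rows of the p x p matrix.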
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVE", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="EVE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("meeve", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = double(p*G), O = as.double( diag(p) ), U = double(p*p*G), scale = double(1), shape = as.double( matrix(1, p,G) ), pro = double(K), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = as.integer(0), PACKAGE = "mclust") # } else { # with prior priorParams <- 
do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVE"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meevep", ...) temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p), U = double(p*p*G), scale = double(1), shape = double(p*G), pro = double(G), loglik = NA, eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "EVE model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } z <- matrix(temp$z, n,K) niterin <- temp$niterin errin <- temp$errin niterout <- temp$niterout errout <- temp$errout loglik <- temp$loglik lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape <- matrix(temp$shape, p,G) O <- t( matrix(temp$O, p,p) ) pro <- temp$pro if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - 
signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array( apply(shape, 2, function(sh) scale * O%*%diag(sh)%*%t(O)), c(p,p,G) ) if(niterin >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin ret <- 2 } else if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- structure(c(niterout = niterout, errout = errout), inner = c(niterin = niterin, errin = errin)) # info <- structure(c(iterations = its, error = err), # inner = c(iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "EVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepEVE <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be a matrix or a vector")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("row dimension of z should equal data length")
  G <- dimz[2]
  if(all(is.na(z)))
    { WARNING <- "z is missing"
      if(warn) warning(WARNING)
      variance <- list(modelName = "EVE", d = p, G = G, scale = rep(NA,G),
                       shape = rep(NA,p), orientation = array(NA,c(p,p,G)))
      parameters <- list(pro = rep(NA,G), mean = matrix(NA,p,G),
                         variance = variance)
      return(structure(list(modelName = "EVE", prior = prior, n = n, d = p,
                            G = G, z = z, parameters = parameters,
                            control = control, loglik = NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  if(is.null(control)) control <- emControl()
  itmax <- if(length(control$itmax) == 1) control$itmax else control$itmax[2]
  tol <- if(length(control$tol) == 1) control$tol else control$tol[2]
  lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G)
  # without prior specification
  if(is.null(prior))
    { temp <- .Fortran("mseve",
                       x = as.double(data), z = as.double(z),
                       n = as.integer(n), p = as.integer(p),
                       G = as.integer(G),
                       mu = double(p*G), U = double(p*p*G),
                       O = as.double(diag(p)), scale = as.double(1),
                       shape = as.double(matrix(1, p,G)), pro = double(G),
                       lwork = as.integer(lwork), info = as.integer(0),
                       itmax = as.integer(itmax), tol = as.double(tol),
                       niterin = integer(1), errin = double(1),
                       eps = as.double(.Machine$double.eps),
                       PACKAGE = "mclust")
    } else
    { # with prior: EVE is not available, return NA placeholders
      priorParams <- do.call(prior$functionName,
                             c(list(data = data, G = G, modelName = "EVE"),
                               prior[names(prior) != "functionName"]))
      # temp <- .Fortran("msevep", ...)
      temp <- list(x = data, z = z, n = n, p = p, G = G,
                   mu = double(p*G), U = double(p*p*G), O = double(p*p),
                   scale = double(1), pro = double(G), shape = double(p*G),
                   lwork = as.integer(lwork), info = FALSE,
                   itmax = as.integer(itmax), tol = as.double(tol),
                   niterin = integer(1), errin = double(1),
                   eps = as.double(.Machine$double.eps))
      WARNING <- "EVE model is not available with prior"
      if(warn) warning(WARNING)
    }
  lapackSVDinfo <- temp$info
  errin <- temp$errin
  niterin <- temp$niterin
  mu <- matrix(temp$mu, p,G)
  dimnames(mu) <- list(NULL, as.character(1:G))
  scale <- temp$scale
  O <- t(matrix(temp$O, p,p))
  shape <- matrix(temp$shape, p,G)
  pro <- temp$pro
  WARNING <- NULL
  if(lapackSVDinfo)
    { if(lapackSVDinfo > 0)
        { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" }
      else
        { WARNING <- "input error for LAPACK DSYEV or DGESVD" }
      if(warn) warning(WARNING)
      O[] <- shape[] <- scale[] <- NA
      sigma <- array(NA, c(p, p, G))
      ret <- -9
    } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)))
    { WARNING <- "cannot compute M-step"
      if(warn) warning(WARNING)
      mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA
      sigma <- array(NA, c(p, p, G))
      ret <- -1
    } else
    { sigma <- array(apply(shape, 2,
                           function(sh) scale * O %*% diag(sh) %*% t(O)),
                     c(p,p,G))
      if(niterin >= itmax)
        { WARNING <- "inner iteration limit reached"
          if(warn) warning(WARNING)
          niterin <- - niterin
          ret <- 2
        } else ret <- 0
    }
  info <- c(iteration = niterin, error = errin)
  dimnames(z) <- list(dimnames(data)[[1]], NULL)
  dimnames(mu) <- list(dimnames(data)[[2]], NULL)
  dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]])
  dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]])
  variance <- list(modelName = "EVE", d = p, G = G, sigma = sigma,
scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } ### estepEVE <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EVE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "eseve", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- 
temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EVE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensEVE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EVE", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "eseve", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot 
compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EVE", WARNING = WARNING, returnCode = ret) } ### simEVE <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EVE")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (dim(rtshape)[1] != d | dim(rtshape)[2] != G) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) for (k in 1:G) { m <- ctabel[k] sss <- rtscale * rtshape[,k] cholSigma <- t(parameters$variance$orientation) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "EVE") } ############################################################################## ### VVE model #### ############################################################################## emVVE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVVE(data, parameters = parameters, warn = warn)$z meVVE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meVVE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be in the form of a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("data and z should have the same row dimension")
  K <- dimz[2]
  if(!is.null(Vinv))
    { G <- K - 1
      if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) }
  else G <- K
  if(all(is.na(z)))
    { WARNING <- "z is missing"
      if(warn) warning(WARNING)
      variance <- list(modelName = "VVE", d = p, G = G,
                       scale = rep(NA,G), shape = rep(NA,p),
                       orientation = array(NA,c(p,p,G)))
      parameters <- list(pro = rep(NA,G), mean = matrix(NA,p,G),
                         variance = variance)
      return(structure(list(modelName = "VVE", prior = prior, n = n, d = p,
                            G = G, z = z, parameters = parameters,
                            control = control, loglik = NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G)
  storage.mode(z) <- "double"
  # MICHAEL from here-------------------------------------------------------
  # without prior specification
  if(is.null(prior))
    { temp <- .Fortran("mevve",
                       x = as.double(data),
                       z = as.double(z),
                       n = as.integer(n),
                       p = as.integer(p),
                       G = as.integer(G),
                       Gnoise = as.integer(K),
                       mu = double(p*G),
                       O = as.double( diag(p) ),
                       U = double(p*p*G),
                       scale = as.double( rep(1, G) ),
                       shape = as.double( matrix(1, p,G) ),
                       pro = double(K),
                       Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ),
                       loglik = double(1),
                       eqpro = as.logical(control$equalPro),
                       itmaxin = as.integer(control$itmax[2]),
                       tolin = as.double(control$tol[2]),
                       itmaxout = as.integer(control$itmax[1]),
                       tolout = as.double(control$tol[1]),
                       eps = as.double(control$eps),
                       niterin = integer(1),
                       errin = double(1),
                       niterout = integer(1),
                       errout = double(1),
                       lwork = as.integer(lwork),
                       info = as.integer(0),
                       PACKAGE = "mclust")
    }
  else
    { # with prior
      priorParams <-
do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVE"), prior[names(prior) != "functionName"])) # temp <- .Fortran("mevvep", ...) temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p), U = double(p*p*G), scale = as.double(rep(1, G)), shape = double(p*G), pro = double(G), loglik = NA, eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "VVE model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } z <- matrix(temp$z, n,K) niterin <- temp$niterin errin <- temp$errin niterout <- temp$niterout errout <- temp$errout loglik <- temp$loglik lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape <- matrix(temp$shape, p,G) O <- t( matrix(temp$O, p,p) ) pro <- temp$pro if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - 
signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(NA, c(p,p,G)) for ( g in 1:G ) sigma[,,g] <- scale[g] * O %*% diag(shape[,g]) %*% t(O) if(niterin >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin ret <- 2 } else if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- structure(c(niterout = niterout, errout = errout), inner = c(niterin = niterin, errin = errin)) # info <- structure(c(iterations = its, error = err), # inner = c(iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "VVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepVVE <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) 
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(oneD || length(dimdat) != 2)
    stop("data should be a matrix or a vector")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  ##
  z <- as.matrix(z)
  dimz <- dim(z)
  if(dimz[1] != n)
    stop("row dimension of z should equal data length")
  G <- dimz[2]
  if(all(is.na(z)))
    { WARNING <- "z is missing"
      if(warn) warning(WARNING)
      variance <- list(modelName = "VVE", d = p, G = G,
                       scale = rep(NA,G), shape = rep(NA,p),
                       orientation = array(NA,c(p,p,G)))
      parameters <- list(pro = rep(NA,G), mean = matrix(NA,p,G),
                         variance = variance)
      return(structure(list(modelName = "VVE", prior = prior, n = n, d = p,
                            G = G, z = z, parameters = parameters,
                            control = control, loglik = NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if(any(is.na(z)) || any(z < 0) || any(z > 1))
    stop("improper specification of z")
  if(is.null(control)) control <- emControl()
  itmax <- if(length(control$itmax) == 1) control$itmax else control$itmax[2]
  tol <- if(length(control$tol) == 1) control$tol else control$tol[2]
  lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G)
  # MICHAEL from here-------------------------------------------------------
  # without prior specification
  if(is.null(prior))
    { temp <- .Fortran("msvve",
                       x = as.double(data),
                       z = as.double(z),
                       n = as.integer(n),
                       p = as.integer(p),
                       G = as.integer(G),
                       mu = double(p*G),
                       U = double(p*p*G),
                       O = as.double( diag(p) ),
                       scale = as.double( rep(1, G) ),
                       shape = as.double( matrix(1, p,G) ),
                       pro = double(G),
                       lwork = as.integer(lwork),
                       info = as.integer(0),
                       itmax = as.integer(itmax),
                       tol = as.double(tol),
                       niterin = integer(1),
                       errin =
double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVE"), prior[names(prior) != "functionName"])) # # temp <- .Fortran("msvvep", ...) temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), U = double(p*p*G), O = double(p*p), scale = double(1), pro = double(G), shape = double(p*G), lwork = as.integer(lwork), info = FALSE, itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), eps = as.double(.Machine$double.eps)) WARNING <- "VVE model is not available with prior" if(warn) warning(WARNING) } lapackSVDinfo <- temp$info errin <- temp$errin niterin <- temp$niterin mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) O <- t( matrix(temp$O, p,p) ) shape <- matrix(temp$shape, p,G) scale <- temp$scale pro <- temp$pro WARNING <- NULL # if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { # sigma <- array( apply(shape, 2, function(sh) O%*%diag(sh)%*%t(O)), c(p,p,G) ) sigma <- array(NA, c(p,p,G)) for ( g in 1:G ) sigma[,,g] <- scale[g] * O %*% diag(shape[,g]) %*% t(O) if(niterin >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin } ret <- 2 } info <- c(iteration = niterin, error = errin) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) 
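  # For reference, the per-component VVE covariances assembled above follow
  # the eigen-decomposition parameterization
  #   sigma[,,g] = scale[g] * O %*% diag(shape[,g]) %*% t(O)
  # A hand-worked sketch with toy values (illustrative only, not produced by
  # this function): with p = 2 and G = 1,
  #   O <- diag(2); shape <- matrix(c(2, 0.5), 2, 1); scale <- 4
  #   sigma1 <- scale[1] * O %*% diag(shape[,1]) %*% t(O)
  # gives a diagonal covariance with entries 8 and 2, i.e. common orientation
  # O shared across components while volume (scale) and shape vary with g.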
variance <- list(modelName = "VVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } ### estepVVE <- function(data, parameters, warn = NULL, ...) { if (is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VVE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvve", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps 
= as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VVE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensVVE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VVE", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvve", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL 
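  # Note: at this point z holds the component log-densities computed by the
  # Fortran routine "esvve"; they are exponentiated below unless the caller
  # asked for logarithm = TRUE. Hypothetical usage sketch (object names are
  # illustrative only):
  #   cd <- cdensVVE(X, logarithm = TRUE, parameters = fit$parameters)
  #   # row i, column g of cd then contains log N(x_i; mu_g, sigma_g)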
if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VVE", WARNING = WARNING, returnCode = ret) } ### simVVE <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VVE")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (dim(rtshape)[1] != d | dim(rtshape)[2] != G) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if (length(rtscale) != G) stop("scale incompatible with mean") for (k in 1:G) { m <- ctabel[k] sss <- rtscale[k] * rtshape[,k] cholSigma <- t(parameters$variance$orientation) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "VVE") } ############################################################################# # Examples of some simple R wrapper functions fcrossprod <- function(X, Y, ...) 
{
  out <- .Fortran("crossprodf",
                  X = as.matrix(X),
                  Y = as.matrix(Y),
                  n = as.integer(nrow(X)),
                  p = as.integer(ncol(X)),
                  q = as.integer(ncol(Y)),
                  XTY = matrix(0, ncol(X), ncol(Y)),
                  PACKAGE = "mclust")
  return(out$XTY)
}
mclust/R/impute.R0000644000176200001440000002031312734434424013416 0ustar liggesusers
imputeData <- function(data, categorical = NULL, seed = NULL,
                       verbose = interactive())
{
  if(!requireNamespace("mix", quietly = TRUE))
    stop("imputeData function requires the 'mix' package to be installed!")
  fac <- apply(data, 2, is.factor)
  if(is.null(categorical))
    { categorical <- fac }
  else
    { if(any(!categorical & fac))
        { stop("data has a factor that is not designated as categorical") }
      if(any(categorical & !fac))
        { warning("a categorical is not designated as a factor")
          for(i in which(categorical & !fac))
            data[[i]] <- as.factor(data[[i]])
        }
    }
  # remove categorical variables and add a dummy variable
  if(nocat <- !any(categorical))
    { data <- cbind(as.factor(1), data)
      categorical <- c(TRUE, categorical) }
  ord <- c(which(categorical), which(!categorical))
  # do the imputations
  s <- mix::prelim.mix(data[,ord], p = sum(categorical))
  if(is.null(seed))
    seed <- runif(1, min = .Machine$integer.max/1024,
                  max = .Machine$integer.max)
  # find ML estimate
  thetahat <- mix::em.mix(s, showits = verbose)
  # set random number generator seed
  mix::rngseed(seed)
  # data augmentation from posterior
  newtheta <- mix::da.mix(s, thetahat, steps = 100, showits = verbose)
  # impute under newtheta
  dataImp <- mix::imp.mix(s, newtheta)
  # workaround for a bug: reset the seed and impute again so that the
  # imputed data matrix is reproducible for a given seed
  mix::rngseed(seed)
  dataImp <- mix::imp.mix(s, newtheta)
  if(nocat) dataImp[,-1] else dataImp[,order(ord)]
}

imputePairs <- function(data, dataImp,
                        symbols = c(1, 16), colors = c("black", "red"),
                        labels, panel = points, ...,
                        lower.panel = panel, upper.panel = panel,
                        diag.panel = NULL, text.panel = textPanel,
                        label.pos = 0.5 + has.diag/3,
                        cex.labels = NULL, font.labels = 1,
                        row1attop = TRUE, gap = 0.2)
{
  textPanel <-
function(x = 0.5, y = 0.5, txt, cex, font) text(x, y, txt, cex = cex, font = font) localAxis <- function(side, x, y, xpd, bg, col = NULL, main, oma, ...) { if (side%%2 == 1) Axis(x, side = side, xpd = NA, ...) else Axis(y, side = side, xpd = NA, ...) } localPlot <- function(..., main, oma, font.main, cex.main) plot(...) localLowerPanel <- function(..., main, oma, font.main, cex.main) lower.panel(...) localUpperPanel <- function(..., main, oma, font.main, cex.main) upper.panel(...) localDiagPanel <- function(..., main, oma, font.main, cex.main) diag.panel(...) dots <- list(...) nmdots <- names(dots) if (!is.matrix(data)) { data <- as.data.frame(data) for (i in seq_along(names(data))) { if (is.factor(data[[i]]) || is.logical(data[[i]])) data[[i]] <- as.numeric(data[[i]]) if (!is.numeric(unclass(data[[i]]))) stop("non-numeric argument to 'pairs'") } } else if (!is.numeric(data)) stop("non-numeric argument to 'pairs'") panel <- match.fun(panel) if ((has.lower <- !is.null(lower.panel)) && !missing(lower.panel)) lower.panel <- match.fun(lower.panel) if ((has.upper <- !is.null(upper.panel)) && !missing(upper.panel)) upper.panel <- match.fun(upper.panel) if ((has.diag <- !is.null(diag.panel)) && !missing(diag.panel)) diag.panel <- match.fun(diag.panel) if (row1attop) { tmp <- lower.panel lower.panel <- upper.panel upper.panel <- tmp tmp <- has.lower has.lower <- has.upper has.upper <- tmp } nc <- ncol(data) if (nc < 2) stop("only one column in the argument to 'pairs'") has.labs <- TRUE if (missing(labels)) { labels <- colnames(data) if (is.null(labels)) labels <- paste("var", 1:nc) } else if (is.null(labels)) has.labs <- FALSE oma <- if ("oma" %in% nmdots) dots$oma else NULL main <- if ("main" %in% nmdots) dots$main else NULL if (is.null(oma)) { oma <- c(4, 4, 4, 4) if (!is.null(main)) oma[3] <- 6 } opar <- par(mfrow = c(nc, nc), mar = rep.int(gap/2, 4), oma = oma) on.exit(par(opar)) for (i in if (row1attop) 1:nc else nc:1) for (j in 1:nc) { localPlot(dataImp[, j], 
dataImp[, i], xlab = "", ylab = "", axes = FALSE, type = "n", ...) if (i == j || (i < j && has.lower) || (i > j && has.upper)) { box() if (i == 1 && (!(j%%2) || !has.upper || !has.lower)) localAxis(1 + 2 * row1attop, dataImp[, j], dataImp[, i], ...) if (i == nc && (j%%2 || !has.upper || !has.lower)) localAxis(3 - 2 * row1attop, dataImp[, j], dataImp[, i], ...) if (j == 1 && (!(i%%2) || !has.upper || !has.lower)) localAxis(2, dataImp[, j], dataImp[, i], ...) if (j == nc && (i%%2 || !has.upper || !has.lower)) localAxis(4, dataImp[, j], dataImp[, i], ...) mfg <- par("mfg") if (i == j) { if (has.diag) localDiagPanel(as.vector(dataImp[, i]), ...) if (has.labs) { par(usr = c(0, 1, 0, 1)) if (is.null(cex.labels)) { l.wid <- strwidth(labels, "user") cex.labels <- max(0.8, min(2, 0.9/max(l.wid))) } text.panel(0.5, label.pos, labels[i], cex = cex.labels, font = font.labels) } } else if (i < j) { classification <- as.numeric(apply(data[,c(i,j)], 1, function(x) any(is.na(x)))) + 1 localLowerPanel(as.vector(dataImp[, j]), as.vector(dataImp[,i]), pch = symbols[classification], col = colors[classification], ...) } else { classification <- as.numeric(apply(data[,c(i,j)], 1, function(x) any(is.na(x)))) + 1 localUpperPanel(as.vector(dataImp[, j]), as.vector(dataImp[, i]), pch = symbols[classification], col = colors[classification], ...) 
} if (any(par("mfg") != mfg)) stop("the 'panel' function made a new plot") } else par(new = FALSE) } if (!is.null(main)) { font.main <- if ("font.main" %in% nmdots) dots$font.main else par("font.main") cex.main <- if ("cex.main" %in% nmdots) dots$cex.main else par("cex.main") mtext(main, 3, 3, TRUE, 0.5, cex = cex.main, font = font.main) } invisible(NULL) } # LS: old to be removed # matchCluster <- function(group, cluster) # { # if(length(group) != length(cluster)) # stop("arguments must be vector of the same length") # group <- as.factor(group) # cluster <- as.factor(cluster) # tab <- table(group,cluster) # j <- apply(tab,2,which.max) # cluster <- factor(cluster, labels = levels(group)[j]) # cluster <- as.character(cluster) # group <- as.character(group) # misclassified <- !(cluster == group) # out <- list(cluster = cluster, misclassified = misclassified, ord = j) # return(out) # } matchCluster <- function(group, cluster) { if(length(group) != length(cluster)) stop("arguments must be vector of the same length") group <- as.factor(group) cluster <- as.factor(cluster) map <- mapClass(as.numeric(group), as.numeric(cluster)) map1 <- unlist(map[[1]]); names(map1) <- NULL map2 <- unlist(map[[2]]); names(map2) <- NULL cl <- cluster levels(cl) <- map2 cl <- as.character(levels(cl)[as.numeric(cl)]) cl <- as.character(cl) group <- as.character(group) misclassified <- !(cluster == group) out <- list(cluster = cl, misclassified = misclassified, ord = map1) return(out) } majorityVote <- function(x) { # local function to find the maximum position in a vector, # breaking ties at random whichMax <- function (x) { m <- seq_along(x)[x == max(x, na.rm = TRUE)] if(length(m) > 1) sample(m, size = 1) else m } x <- as.vector(x) tab <- table(x) m <- whichMax(tab) out <- list(table = tab, ind = m, majority = names(tab)[m]) return(out) } mclust/R/gmmhd.R0000644000176200001440000004371313154564653013225 0ustar liggesusers###################################################### ## ## ## 
Identifying Connected Components in Gaussian     ##
##  Finite Mixture Models for Clustering             ##
##                                                   ##
##  Author: Luca Scrucca                             ##
######################################################

gmmhd <- function(object,
                  ngrid = min(round((log(nrow(data)))*10), nrow(data)),
                  dr = list(d = 3, lambda = 1, cumEvalues = NULL, mindir = 2),
                  classify = list(G = 1:5,
                                  modelNames = mclust.options("emModelNames")[-c(8,10)]),
                  ...)
{
  if(!inherits(object, "Mclust"))
    stop("first argument must be an object of class 'Mclust'")
  if(!requireNamespace("geometry", quietly = TRUE))
    stop("Package 'geometry' is required. Please install it.")
  data <- object$data
  n <- nrow(data)
  if(ngrid > n)
    { warning("ngrid too large, set equal to n")
      ngrid <- n }
  mNames <- attr(object$BIC, "modelNames")
  if(is.null(dr$d)) dr$d <- 2
  if(is.null(dr$lambda)) dr$lambda <- 1
  if(is.null(classify$G)) classify$G <- 1:5
  if(is.null(classify$modelNames)) classify$modelNames <- mNames
  classify$modelNames <- intersect(classify$modelNames, mNames)
  if(is.null(dr$mindir)) dr$mindir <- 2
  if(ncol(data) >= dr$d)
    { # compute GMMDR directions
      DR <- MclustDR(object, lambda = dr$lambda)
      # subset selection of GMMDR directions
      evalues <- DR$evalues[seq(DR$numdir)]
      if(is.null(dr$cumEvalues))
        { # if dr$cumEvalues not provided
          # perform subset selection of GMMDR directions
          DR <- MclustDRsubsel(DR, G = attr(object$BIC, "G"),
                               modelNames = mNames,
                               mindir = dr$mindir, verbose = FALSE)
          dims <- seq(DR$numdir)
        }
      else
        { # select the smallest subset with cumsum eigenvalues > dr$cumEvalues
          dims <- min(which(cumsum(evalues/sum(evalues)) > dr$cumEvalues))
          dims <- seq(min(dr$mindir, dims))
        }
      # estimate the density from Mclust model on the selected directions
      x <- DR$dir[,dims,drop=FALSE]
      colnames(x) <- paste("GMMDR dir", 1:ncol(x), sep = "")
      mc <- object$call
      mc$data <- x
      mc$modelNames <- mNames
      mc$verbose <- FALSE
      obj <- eval(mc, parent.frame())
      DR$parameters <- obj$parameters
      fdens <- dens(modelName = obj$modelName, data = x,
                    parameters = obj$parameters)
    }
  else
{ x <- data DR <- NULL fdens <- dens(modelName = object$modelName, data = x, parameters = object$parameters) } p <- ncol(x) xscaled <- scale(x, colMeans(x), apply(x,2,sd)) # if to add vertices of convex envelope # xrange <- apply(x, 2, range) # xbound <- do.call("expand.grid", matrix2list(xrange)) # x <- rbind(as.matrix(x), as.matrix(xbound*1.1)) # fdens <- c(fdens, rep(0,nrow(xbound))) # uniform grid of proportions for which quantiles are calculated pn <- seq(0, 1, length = ngrid) qn <- as.numeric(quantile(fdens[1:n], 1-pn)) nc <- pc <- rep(0, length(qn)) con <- vector("list", length = length(qn)) # Delaunay triangulation matrix of dim (m x p+1), where each row provides a # set of indices to the points describing a simplex of dimension p mode(xscaled) <- "double" # delaunayn requires a real matrix DT <- suppressMessages(geometry::delaunayn(xscaled, options="QJ")) # plot(x); for(l in 1:nrow(DT)) polygon(x[DT[l,],], border = grey(.8)) on.exit(unlink("qhull_out.txt")) # Graph of neighborhood for each point NB <- vector(mode = "list", length = n) for(i in seq(n)) { NB[[i]] <- sort(unique(as.vector(DT[rowSums(DT==i)>0,]))) } for(i in seq(length(qn))) { c <- qn[i] Sc <- which(fdens[1:n] > c); names(Sc) <- NULL if(length(Sc) < 1) next() pc[i] <- length(Sc)/n # select neighborhoods of edges with density > c level nb <- NB[Sc] # select within neighborhoods those edges whose density > c level nb <- lapply(nb, function(nb) sort(intersect(nb, Sc))) nb <- nb[!duplicated(nb)] # table(sapply(nb,length)) # remove neighborhoods which do not share any facet, i.e. having # less than p edges/obs # nb <- nb[sapply(nb, length) >= p] # remove neighborhoods which are not simplices of dim (p+1) nb <- nb[sapply(nb, length) > p] # get connected components ConComp <- ConnectComp(nb) # sapply(ConComp,length); ConComp if(length(ConComp) < 1) next() nc[i] <- length(ConComp) con[[i]] <- ConComp # lapply(ConComp, sort) } # obj <- list(Mclust = object, MclustDA = NULL, MclustDR = DR, x = x, # i.e. 
the input data or GMMDR directions density = fdens[1:n], con = con, nc = structure(nc, names = format(pn, digit = 3)), pc = pc, pn = pn, qn = structure(qn, names = format(pn, digit = 3)), clusterCores = NULL, cluster = NULL, numClusters = NULL) class(obj) <- "gmmhd" # cluster cores obj$clusterCores <- gmmhdClusterCores(obj) # semi-supervised classification modClass <- gmmhdClassify(obj, G = classify$G, modelNames = classify$modelNames, verbose = FALSE) obj$MclustDA <- modClass$model obj$cluster <- modClass$cluster obj$numClusters <- length(tabulate(obj$cluster)) return(obj) } print.gmmhd <- function(x, digits = getOption("digits"), ...) { cat("\'", class(x)[1], "\' model object:\n", sep = "") cat(paste0(" Mclust initial model = (", x$Mclust$modelName, ",", x$Mclust$G, ")\n")) if(!is.null(x$MclustDR)) cat(paste0(" MclustDR projection = (", x$MclustDR$modelName, ",", x$MclustDR$G, ")\n")) cat(paste0(" GMMHD final number of clusters = ", x$numClusters, "\n")) invisible() } summary.gmmhd <- function(object, ...) { title <- paste("GMM with high-density connected components for clustering") out <- with(object, list(title = title, "Mclust" = list("G" = Mclust$G, "modelName" = Mclust$modelName), "MclustDR" = list("G" = MclustDR$G, "modelName" = MclustDR$modelName), "clusterCores" = table(clusterCores, useNA = "ifany", dnn = NULL), "cluster" = table(cluster, useNA = "ifany", dnn = NULL))) if(is.null(object$MclustDR)) out$MclustDR <- NULL class(out) <- "summary.gmmhd" return(out) } print.summary.gmmhd <- function(x, digits = getOption("digits"), ...) 
{ cat(rep("-", nchar(x$title)),"\n",sep="") cat(x$title, "\n") cat(rep("-", nchar(x$title)),"\n",sep="") # cat("\nInitial model: Mclust (", x$Mclust$modelName, ",", x$Mclust$G, ")", "\n", sep = "") # if(!is.null(x$MclustDR)) cat("\nModel on projection subspace: (", x$MclustDR$modelName, ",", x$MclustDR$G, ")", "\n", sep = "") # cat("\nCluster cores:\n") print(x$clusterCores) # cat("\nFinal clustering:\n") print(x$cluster) # invisible() } plot.gmmhd <- function(x, what = c("mode", "cores", "clusters"), ...) { object <- x what <- match.arg(what, choices = eval(formals(plot.gmmhd)$what), several.ok = TRUE) if(interactive() & length(what) > 1) { title <- "GMM high-density connected components:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "mode") plot.gmmhd.mode(object, ...) if(what[choice] == "cores") plot.gmmhd.cores(object, ...) if(what[choice] == "clusters") plot.gmmhd.clusters(object, ...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "mode")) plot.gmmhd.mode(object, ...) if(any(what == "cores")) plot.gmmhd.cores(object, ...) if(any(what == "clusters")) plot.gmmhd.clusters(object, ...) } invisible() } plot.gmmhd.mode <- function(object, ...) { plot(c(object$pc,1), c(object$nc,0), type = "S", xlab = "Proportion of observed data", ylab = "Mode function", yaxt = "n") axis(side = 2, at = seq(0, max(object$nc, na.rm = TRUE))) } plot.gmmhd.cores <- function(object, col = c("grey50", mclust.options("classPlotColors")), pch = c(1, mclust.options("classPlotSymbols")), ...) 
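# Illustrative usage sketch (not run; the 'faithful' data serve only as an
# example): fit a GMM, derive the high-density connected components, then
# plot the cluster cores via the plot method dispatching to this function.
#   mod <- Mclust(faithful)
#   GMMHD <- gmmhd(mod)
#   plot(GMMHD, what = "cores")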
{ x <- object$x p <- ncol(x) n <- nrow(x) clCores <- object$clusterCores numClusters <- object$numClusters colCores <- col[1] col <- col[-1] col <- col[clCores] col[is.na(col)] <- colCores pch <- unique(pch) pchCores <- pch[1] pch <- pch[-1] pch <- pch[clCores] pch[is.na(pch)] <- pchCores cex <- rep(par("cex"), length(pch)) cex[is.na(clCores)] <- par("cex")/2 if(p == 1) { plot(x, object$density, col = col, pch = pch, cex = cex, ylim = range(0,object$density), xlab = colnames(x)[1], ylab = "Density", ...) } else if(p == 2) { plot(x[,1:2,drop=FALSE], col = col, pch = pch, cex = cex, ...) } else if(p > 2) { pairs(x, col = col, pch = pch, cex = cex, gap = 0, ...) } invisible() } plot.gmmhd.clusters <- function(object, col = mclust.options("classPlotColors"), pch = mclust.options("classPlotSymbols"), ...) { x <- object$x p <- ncol(x) n <- nrow(x) cluster <- object$cluster numClusters <- object$numClusters col <- col[cluster] pch <- setdiff(pch,22)[cluster] if(p == 1) { plot(x, object$density, col = col, pch = pch, ylim = range(0,object$density), xlab = colnames(x)[1], ylab = "Density", ...) } else if(p == 2) { plot(x[,1:2,drop=FALSE], col = col, pch = pch, ...) } else if(p > 2) { pairs(x, col = col, pch = pch, cex = 0.8, gap = 0, ...) } invisible() } gmmhdClusterCores <- function(object, tails = FALSE, ...) { # Identify cluster cores as the first subset of connected components # corresponding to the largest local mode n <- nrow(object$x) nc <- object$nc pc <- object$pc conComp <- object$con # select the subset with largest number of modes ... i <- which(diff(c(nc,0)) < 0) # i <- i[which(nc[i] == max(nc[i]))] # no to consider only the highest mode # remove spurius local modes, i.e. those not identified by at least # two consecutive density level # LS:20150107 okmode <- which(nc[i] == nc[i-1])[1] # LS:20150107 i <- if(length(okmode) > 0) i[okmode] else length(nc) # plot(pc, nc); abline(v = pc[i]) # ... 
and consider multiplicity of modes # LS: 20150107 i <- which(nc == max(nc[i])) # cc <- conComp[i] clusterCores <- matrix(as.double(NA), n, length(i)) for(j in 1:ncol(clusterCores)) for(cl in 1:length(cc[[j]])) { clusterCores[cc[[j]][[cl]],j] <- cl } while(ncol(clusterCores) > 1) { ncl <- length(unique(na.omit(clusterCores[,2]))) tmp <- rep(NA, n) for(cl in 1:ncl) { l <- which(clusterCores[,2] == cl) if(all(is.na(clusterCores[l,1]))) { tmp[l] <- paste(clusterCores[l,2],"*",sep="") } else { if(length(unique(na.omit(clusterCores[l,1]))) > 1) tmp[l] <- clusterCores[l,1] else tmp[l] <- paste(clusterCores[l,2],"*",sep="") } } clusterCores[,2] <- unclass(as.factor(tmp)) clusterCores <- clusterCores[,-1,drop=FALSE] } clusterCores <- as.vector(clusterCores) return(clusterCores) # select the last subset with largest number of modes # i <- max(which(nc == max(nc))) # select the first subset with largest number of modes i <- which(diff(c(nc,0)) < 0) i <- i[which(nc[i] == max(nc[i]))[1]] # select the largest subset with the largest number of modes # i <- i[max(which(nc[i] == max(nc[i])))] conComp <- object$con[[i]] clusterCores <- rep(NA, n) for(cl in 1:length(conComp)) { clusterCores[conComp[[cl]]] <- cl } return(clusterCores) } gmmhdClassify <- function(object, G = 1:5, modelNames = mclust.options("emModelNames"), verbose = TRUE, ...) { if(!inherits(object, "gmmhd")) stop("object is not of class 'gmmhd'") x <- object$x n <- nrow(x) p <- ncol(x) if(p == 1) modelNames <- unique(substr(modelNames, 1, 1)) clusterCores <- object$clusterCores numClusters <- length(tabulate(clusterCores)) con <- object$con # classify unclustered obs based on training cluster cores isCore <- (!is.na(clusterCores)) logRatio <- function(p) { p <- pmax(pmin(p, 1-sqrt(.Machine$double.eps)),sqrt(.Machine$double.eps)) log(p)-log(1-p) } # select num. 
# components G to guarantee at least minSize obs per class
  numCompClass <- function(class, G, minSize = 10)
  {
    classSize <- tabulate(class)
    Gin <- as.vector(G)
    Gmax <- classSize %/% minSize
    Gmax <- pmin(Gmax, max(G))
    G <- vector(length = length(Gmax), mode = "list")
    for(k in 1:length(G))
       { G[[k]] <- intersect(Gin, seq(Gmax[k])) }
    return(G)
  }
  inc <- isCore
  cluster <- clusterCores
  while(sum(inc) < n)
  {
    mod <- MclustDA(data = x[inc,,drop=FALSE],
                    class = as.character(cluster[inc]),
                    G = numCompClass(cluster[inc], G),
                    modelNames = modelNames, verbose = verbose)
    unallocated <- which(!inc)
    # remove those obs with density ~ 0
    dens <- density.MclustDA(mod, newdata = x[unallocated,,drop=FALSE])
    dens <- pmax(dens, .Machine$double.eps)
    i <- (dens/max(dens) > sqrt(.Machine$double.eps))
    if(sum(i) > 0) unallocated <- unallocated[i]
    pred <- predict(mod, newdata = x[unallocated,,drop=FALSE])
    # this version may fail to allocate obs to small clusterCores:
    # zmax <- apply(pred$z,1,max)
    # zclass <- apply(pred$z,1,which.max)
    # log.ratio <- logRatio(zmax)
    # alloc <- (log.ratio >= quantile(log.ratio, prob = sum(inc)/n))
    # this version tries to control for cluster size and allocates to the
    # predicted class iff logRatio is larger than the sqrt(sum(inc)/n) quantile
    z <- pred$z
    zclass <- apply(z,1,which.max)
    alloc <- matrix(NA, nrow(z), ncol(z))
    for(k in seq(ncol(z)))
    {
      log.ratio <- logRatio(z[,k])
      alloc[,k] <- (log.ratio >= quantile(log.ratio, prob = sqrt(sum(inc)/n))) &
                   (zclass == k)
    }
    alloc <- apply(alloc, 1, any)
    toclass <- unallocated[alloc]
    cluster[toclass] <- zclass[alloc]
    inc <- (!is.na(cluster))
  }
  mod <- MclustDA(data = x, class = cluster,
                  G = numCompClass(cluster[inc], G),
                  modelNames = modelNames, verbose = verbose)
  cluster <- predict(mod, x)$classification
  out <- list(model = mod, clusterCores = clusterCores, cluster = cluster)
  return(out)
}

density.MclustDA <- function(object, newdata, prior, logarithm = FALSE, ...)
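# Illustrative usage sketch (not run; 'iris' serves only as example data):
#   mod <- MclustDA(iris[,1:4], class = iris$Species)
#   logden <- density.MclustDA(mod, newdata = iris[1:5, 1:4], logarithm = TRUE)
# Class-specific mixture densities are combined, weighted by the prior class
# proportions, on the log scale for numerical stability.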
{ # Compute the density based on a MclustDA model # (later it may be included in the 'mclust' package) # or it can be obtained from predict.MclustDA if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") models <- object$models nclass <- length(models) n <- sapply(1:nclass, function(i) models[[i]]$n) if(missing(newdata)) { newdata <- object$data } if(object$d == 1) newdata <- as.vector(newdata) if(missing(prior)) { prior <- n/sum(n) } else { if(length(prior) != nclass) stop("wrong number of prior probabilities") if(any(prior < 0)) stop("prior must be nonnegative") } # compute on log scale for stability densfun <- function(mod, data) { do.call("dens", c(list(data = data, logarithm = TRUE), mod)) } # cden <- as.matrix(data.frame(lapply(models, densfun, data = newdata))) cden <- sweep(cden, 2, FUN = "+", STATS = log(prior)) maxlog <- apply(cden, 1, max) cden <- sweep(cden, 1, FUN = "-", STATS = maxlog) den <- log(apply(exp(cden), 1, sum)) + maxlog if(!logarithm) den <- exp(den) return(den) } # old version ConnectComp_old <- function(nb) { # Get connected components # Example: # nb <- list(c(1,2,3), c(2,3,4), c(9,10,11), c(9,11,12), c(1,6,5)) # if(length(nb) < 1 | !is.list(nb)) return(NULL) nb <- lapply(nb, function(x) as.integer(x)) n <- length(nb) u <- sort(unique(unlist(nb))) nu <- length(u) cnb <- cnb.old <- nb stable <- FALSE # merge the neighbors until the configuration is stable while(!stable) { i <- 0 while(i < length(cnb)) { i <- i + 1 j <- which(sapply(cnb, function(nbb) any(intersect(cnb[[i]], nbb)))) cnb[[i]] <- sort(unique(unlist(cnb[j]))) cnb[setdiff(j, i)] <- NULL } if(identical(cnb, cnb.old)) stable <- TRUE cnb.old <- cnb } return(cnb) } ConnectComp <- function(nb) { # Get connected components # Example: # nb <- list(c(1,2,3), c(2,3,4), c(9,10,11), c(9,11,12), c(1,6,5)) # ConnectComp(nb) if(length(nb) < 1 | !is.list(nb)) return(NULL) nb <- lapply(nb, function(x) as.integer(x)) n <- length(nb) u <- sort(unique(unlist(nb))) nu <- 
length(u) cnb <- cnb.old <- nb stable <- FALSE # merge the neighbors until the configuration is stable while(!stable) { i <- 0 while(i < length(cnb)) { i <- i + 1 j <- which(sapply(cnb, function(nbb) any(is.element(cnb[[i]], nbb)))) cnb[[i]] <- sort(unique(unlist(cnb[j]))) cnb[setdiff(j, i)] <- NULL } if(identical(cnb, cnb.old)) stable <- TRUE cnb.old <- cnb } return(cnb) } mclust/R/densityMclust.R0000644000176200001440000003440313201545331014755 0ustar liggesusersdensityMclust <- function(data, ...) { mc <- match.call() obj <- Mclust(data, ...) obj$call <- mc d <- dens(modelName = obj$modelName, data = data, parameters = obj$parameters, logarithm = FALSE) obj$density <- d class(obj) <- c("densityMclust", "Mclust") return(obj) } predict.densityMclust <- function(object, newdata, what = c("dens", "cdens"), logarithm = FALSE, ...) { if(!inherits(object, "densityMclust")) stop("object not of class \"densityMclust\"") if(missing(newdata)) { newdata <- object$data } newdata <- as.matrix(newdata) if(ncol(object$data) != ncol(newdata)) { stop("newdata must match ncol of object data") } what <- match.arg(what) if(what == "dens") { d <- dens(modelName = object$modelName, data = newdata, parameters = object$parameters, logarithm = logarithm) } else { d <- cdens(modelName = object$modelName, data = newdata, parameters = object$parameters, logarithm = logarithm) dim <- dim(d) attributes(d) <- NULL d <- array(d, dim) } return(d) } plot.densityMclust <- function(x, data = NULL, what = c("BIC", "density", "diagnostic"), ...) { object <- x # Argh. Really want to use object anyway what <- match.arg(what, several.ok = TRUE) if(object$d > 1) what <- setdiff(what, "diagnostic") oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) plot.densityMclust.density <- function(...) { if(object$d == 1) plotDensityMclust1(object, data = data, ...) else if(object$d == 2) plotDensityMclust2(object, data = data, ...) else plotDensityMclustd(object, data = data, ...) 
} plot.densityMclust.bic <- function(...) { # this add right axis for bic diff # oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) # mar <- oldpar$mar # mar[4] <- max(mar[4],3) # par(mar = mar) # plot.mclustBIC(object$BIC, ...) # yaxp <- par("yaxp") # bicdiff <- seq(0, yaxp[1] - object$bic, length = 100) # bicdiff <- pretty(bicdiff, yaxp[3]+1) # axis(4, at = object$bic+bicdiff, labels = signif(bicdiff,2)) plot.mclustBIC(object$BIC, ...) } plot.densityMclust.diagnostic <- function(...) { densityMclust.diagnostic(object, ...) } if(interactive() & length(what) > 1) { title <- "Model-based density estimation plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "BIC") plot.densityMclust.bic(...) if(what[choice] == "density") plot.densityMclust.density(...) if(what[choice] == "diagnostic") plot.densityMclust.diagnostic(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "BIC")) plot.densityMclust.bic(...) if(any(what == "density")) plot.densityMclust.density(...) if(any(what == "diagnostic")) plot.densityMclust.diagnostic(...) } invisible() } plotDensityMclust1 <- function(x, data = NULL, hist.col = "lightgrey", hist.border = "white", breaks = "Sturges", ...) { object <- x # Argh. 
Really want to use object anyway mc <- match.call(expand.dots = TRUE) mc$x <- mc$data <- mc$hist.col <- mc$hist.border <- mc$breaks <- NULL xlab <- mc$xlab if(is.null(xlab)) xlab <- deparse(object$call$data) ylab <- mc$ylab if(is.null(ylab)) ylab <- "Density" # xrange <- extendrange(object$data, f = 0.1) xlim <- eval(mc$xlim, parent.frame()) if(!is.null(xlim)) xrange <- range(xlim) ylim <- eval(mc$ylim, parent.frame()) # eval.points <- seq(from = xrange[1], to = xrange[2], length = 1000) d <- predict.densityMclust(object, eval.points) # if(!is.null(data)) { h <- hist(data, breaks = breaks, plot = FALSE) plot(h, freq = FALSE, col = hist.col, border = hist.border, main = "", xlim = range(h$breaks, xrange), # ylim = range(0, ylim, h$density, max(d)+diff(range(d))*0.1), ylim = if(!is.null(ylim)) range(ylim) else range(0, h$density, d), xlab = xlab, ylab = ylab) box() mc[[1]] <- as.name("lines") mc$x <- eval.points mc$y <- d mc$type <- "l" eval(mc, parent.frame()) } else { mc[[1]] <- as.name("plot") mc$x <- eval.points mc$y <- d mc$type <- "l" mc$xlim <- xlim mc$ylim <- if(!is.null(ylim)) range(ylim) else range(0, d) mc$ylab <- ylab mc$xlab <- xlab eval(mc, parent.frame()) } invisible() } plotDensityMclust2 <- function(x, data = NULL, nlevels = 11, levels = NULL, col = grey(0.6), points.pch = 1, points.col = 1, points.cex = 0.8, ...) { # This function call surfacePlot() with a suitable modification of arguments object <- x # Argh. Really want to use object anyway mc <- match.call(expand.dots = TRUE) mc$x <- mc$points.pch <- mc$points.col <- mc$points.cex <- NULL mc$nlevels <- nlevels; mc$levels <- levels mc$col <- col if(is.null(data)) { addPoints <- FALSE mc$data <- object$data } else { addPoints <- TRUE } # set mixture parameters par <- object$parameters # these parameters should be missing par$variance$cholSigma <- par$Sigma <- NULL if(is.null(par$pro)) par$pro <- 1 # LS: bug? 
par$variance$cholsigma <- par$variance$sigma for(k in seq(par$variance$G)) { par$variance$cholsigma[,,k] <- chol(par$variance$sigma[,,k]) } mc$parameters <- par # now surfacePlot() is called mc[[1]] <- as.name("surfacePlot") out <- eval(mc, parent.frame()) if(addPoints) points(data, pch = points.pch, col = points.col, cex = points.cex) # invisible(out) } plotDensityMclustd <- function(x, data = NULL, nlevels = 11, levels = NULL, col = grey(0.6), points.pch = 1, points.col = 1, points.cex = 0.8, gap = 0.2, ...) { # This function call surfacePlot() with a suitable modification of arguments object <- x # Argh. Really want to use object anyway mc <- match.call(expand.dots = TRUE) mc$x <- mc$points.pch <- mc$points.col <- mc$points.cex <- mc$gap <- NULL mc$nlevels <- nlevels; mc$levels <- levels mc$col <- col if(is.null(data)) { data <- mc$data <- object$data addPoints <- FALSE } else { data <- as.matrix(data) addPoints <- TRUE } nc <- object$d oldpar <- par(mfrow = c(nc, nc), mar = rep(c(gap,gap/2),each=2), oma = c(4, 4, 4, 4), no.readonly = TRUE) on.exit(par(oldpar)) for(i in seq(nc)) { for(j in seq(nc)) { if(i == j) { plot(0,0,type="n",xlab="",ylab="",axes=FALSE) text(0,0, colnames(data)[i], cex=1.5, adj=0.5) box() } else { # set mixture parameters par <- object$parameters if(is.null(par$pro)) par$pro <- 1 par$mean <- par$mean[c(j,i),,drop=FALSE] par$variance$d <- 2 sigma <- array(dim = c(2, 2, par$variance$G)) for(g in seq(par$variance$G)) sigma[,,g] <- par$variance$sigma[c(j,i),c(j,i),g] par$variance$sigma <- sigma par$variance$Sigma <- NULL par$variance$cholSigma <- NULL par$variance$cholsigma <- NULL mc$parameters <- par mc$data <- object$data[,c(j,i)] mc$axes <- FALSE mc[[1]] <- as.name("surfacePlot") out <- eval(mc, parent.frame()) box() if(addPoints) points(data[,c(j,i)], pch = points.pch, col = points.col, cex = points.cex) } if(i == 1 && (!(j%%2))) axis(3) if(i == nc && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == nc && (i%%2)) axis(4) } } # 
invisible(out) } dens <- function(modelName, data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") aux <- list(...) cden <- cdens(modelName = modelName, data = data, logarithm = TRUE, parameters = parameters, warn = warn) dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 G <- if(oneD) { length(parameters$mean) } else { ncol(as.matrix(parameters$mean)) } pro <- parameters$pro if(is.null(pro)) stop("mixing proportions must be supplied") noise <- (!is.null(parameters$Vinv)) if(G > 1) { if(noise) { # proN <- pro[length(pro)] pro <- pro[-length(pro)] # pro <- pro/sum(pro) } if(any(proz <- pro == 0)) { pro <- pro[!proz] cden <- cden[, !proz, drop = FALSE] } cden <- sweep(cden, 2, FUN = "+", STATS = log(pro)) } # logsumexp maxlog <- apply(cden, 1, max) cden <- sweep(cden, 1, FUN = "-", STATS = maxlog) den <- log(apply(exp(cden), 1, sum)) + maxlog if(noise) den <- den + parameters$pro[G+1]*parameters$Vinv if(!logarithm) den <- exp(den) den } cdens <- function(modelName, data, logarithm = FALSE, parameters, warn = NULL, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) checkModelName(modelName) funcName <- paste("cdens", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } densityMclust.diagnostic <- function(object, type = c("cdf", "qq"), col = c("black", "green4"), lwd = c(2,2), lty = c(1,2), legend = TRUE, grid = TRUE, main = TRUE, ...) { # Diagnostic plots for density estimation # (only available for the one-dimensional case) # # Arguments: # object = a 'densityMclust' object # data = the data vector # type = type of diagnostic plot: # cdf = the fitted distribution function vs the empirical distribution function; # qq = the fitted distribution function evaluated over the observed points vs # the quantile from a uniform distribution. 
#
# Reference:
# Loader C. (1999), Local Regression and Likelihood. New York, Springer,
# pp. 87-90
  if(!any(class(object) == "densityMclust"))
    { stop("first argument must be an object of class 'densityMclust'") }
  if(object$d > 1)
    { warning("only available for one-dimensional data")
      return() }
  type <- match.arg(type, c("cdf", "qq"), several.ok = TRUE)
  main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main)
  data <- as.numeric(object$data)
  n <- length(data)
  cdf <- cdfMclust(object, data = data, ngrid = min(n*10,1000), ...)
  oldpar <- par(no.readonly = TRUE)
  if(interactive() & length(type) > 1)
    { par(ask = TRUE)
      on.exit(par(oldpar)) }
  if(any(type == "cdf"))
  { # Fitted CDF vs Empirical CDF
    empcdf <- ecdf(data)
    plot(empcdf, do.points = FALSE, col = col[2], lwd = lwd[2], lty = lty[2],
         xlab = deparse(object$call$data),
         ylab = "Cumulative Distribution Function",
         panel.first = if(grid) grid(equilogs=FALSE) else NULL,
         main = NULL, ...)
    if(main) title(main = "CDF plot", cex.main = 1.1)
    lines(cdf, col = col[1], lwd = lwd[1], lty = lty[1])
    rug(data)
    if(legend)
      { legend("bottomright", legend = c("Est.CDF", "Emp.CDF"),
               ncol = 1, inset = 0.05, cex = 0.8,
               col = col, lwd = lwd, lty = lty) }
  }
  if(any(type == "qq"))
  { # Q-Q plot
    q <- quantileMclust(object, p = ppoints(n))
    plot(q, sort(data),
         xlab = "Quantiles from estimated density",
         ylab = "Sample Quantiles",
         panel.first = if(grid) grid(equilogs=FALSE) else NULL,
         main = NULL, ...)
    if(main) title(main = "Q-Q plot", cex.main = 1.1)
    with(list(y = sort(data), x = q),
         { i <- (y > quantile(y, 0.25) & y < quantile(y, 0.75))
           abline(lm(y ~ x, subset = i), lty = 2) })
    # P-P plot
    # cdf <- cdfMclust(object, data, ...)
    # plot(seq(1,n)/(n+1), cdf$y, xlab = "Uniform quantiles",
    #      ylab = "Cumulative Distribution Function",
    #      main = "Diagnostic: P-P plot")
    # abline(0, 1, lty = 2)
  }
  invisible()
}

cdfMclust <- function(object, data, ngrid = 100, ...)
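# Illustrative usage sketch (not run; the data are hypothetical):
#   x <- c(rnorm(100), rnorm(50, mean = 4))
#   fit <- densityMclust(x)
#   F <- cdfMclust(fit, ngrid = 200)
#   plot(F$x, F$y, type = "l")   # estimated CDF over a regular grid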
{
  # Cumulative Distribution Function (CDF)
  # (only available for the one-dimensional case)
  #
  # Returns the estimated CDF evaluated at points given by the optional
  # argument data. If not provided, a regular grid of ngrid points is used.
  #
  # Arguments:
  # object = a 'densityMclust' object
  # data = the data vector
  # ngrid = the number of grid points
  if(!any(class(object) == "densityMclust"))
    { stop("first argument must be an object of class 'densityMclust'") }
  if(missing(data))
    { eval.points <- extendrange(object$data, f = 0.1)
      eval.points <- seq(eval.points[1], eval.points[2], length.out = ngrid) }
  else
    { eval.points <- sort(as.vector(data))
      ngrid <- length(eval.points) }
  G <- object$G
  pro <- object$parameters$pro
  mean <- object$parameters$mean
  var <- object$parameters$variance$sigmasq
  if(length(var) < G) var <- rep(var, G)
  noise <- (!is.null(object$parameters$Vinv))
  cdf <- rep(0, ngrid)
  for(k in seq(G))
    { cdf <- cdf + pro[k]*pnorm(eval.points, mean[k], sqrt(var[k])) }
  if(noise) cdf <- cdf/sum(pro[seq(G)])
  out <- list(x = eval.points, y = cdf)
  return(out)
}

quantileMclust <- function(object, p, ...)
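# Illustrative usage sketch (not run; the data are hypothetical):
#   fit <- densityMclust(c(rnorm(100), rnorm(50, mean = 4)))
#   quantileMclust(fit, p = c(0.05, 0.5, 0.95))
# Quantiles are obtained by spline inversion of the CDF returned by
# cdfMclust() evaluated over a fine grid.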
{ # Calculate the quantile of a univariate mixture corresponding to cdf equal to p # # Arguments: # object = a 'densityMclust' object # p = vector of probabilities (0 <= p <= 1) if(!any(class(object) == "densityMclust")) { stop("first argument must be an object of class 'densityMclust'") } eval.points <- extendrange(object$data, f = 1) eval.points <- seq(eval.points[1], eval.points[2], length.out = 10000) cdf <- cdfMclust(object, data = eval.points) q <- spline(cdf$y, cdf$x, method = "fmm", xmin = 0, xmax = 1, xout = p)$y q[ p < 0 | p > 1] <- NaN q[ p == 0 ] <- -Inf q[ p == 1 ] <- Inf return(q) }mclust/R/mclustda.R0000644000176200001440000010340613063702734013732 0ustar liggesusersMclustDA <- function(data, class, G = NULL, modelNames = NULL, modelType = c("MclustDA", "EDDA"), prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), verbose = interactive(), ...) { call <- match.call() mc <- match.call(expand.dots = TRUE) # if(missing(data)) stop("no training data provided!") data <- data.matrix(data) n <- nrow(data) p <- ncol(data) oneD <- if(p==1) TRUE else FALSE # if(missing(class)) stop("class labels for training data must be provided!") class <- as.factor(class) classLabel <- levels(class) ncl <- nlevels(class) if(ncl == 1) G <- 1 # modelType <- match.arg(modelType) # if(is.null(G)) { G <- rep(list(1:5), ncl) } else if(is.list(G)) { G <- lapply(G, sort) } else { G <- rep(list(sort(G)), ncl) } if(any(unlist(G) <= 0)) stop("G must be positive") # if(is.null(modelNames)) { if(oneD) modelNames <- c("E", "V") else modelNames <- mclust.options("emModelNames") } if(n <= p) { m <- match(c("EEE","EEV","VEV","VVV"), mclust.options("emModelNames"), nomatch=0) modelNames <- modelNames[-m] } if(!is.list(modelNames)) { modelNames <- rep(list(modelNames), ncl) } # if(modelType == "EDDA") { mc[[1]] <- as.name("mstep") mc$class <- mc$G <- mc$modelNames <- mc$modelType <- NULL mc$warn <- FALSE mc$z <- unmap(as.numeric(class)) G <- 1 modelNames 
<- unique(unlist(modelNames)) BIC <- rep(NA, length(modelNames)) Model <- NULL if(verbose) { cat("fitting ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = length(modelNames), style = 3) on.exit(close(pbar)) ipbar <- 0 } for(i in seq(modelNames)) { mc$modelName <- as.character(modelNames[i]) mStep <- eval(mc, parent.frame()) eStep <- do.call("estep", c(mStep, list(data = data, warn = FALSE))) BIC[i] <- do.call("bic", c(eStep, list(equalPro = TRUE))) if(!is.na(BIC[i]) && BIC[i] >= max(BIC, na.rm = TRUE)) Model <- eStep if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } if(all(is.na(BIC))) { warning("No model(s) can be estimated!!") return() } names(BIC) <- modelNames bic <- max(BIC, na.rm = TRUE) loglik <- Model$loglik df <- (2*loglik - bic)/log(Model$n) # there are (nclass-1) more df than real needed # equal to logLik(object) but faster Model <- c(Model, list("BIC" = BIC)) Models <- rep(list(Model), ncl) names(Models) <- classLabel for(l in 1:ncl) { I <- (class == classLabel[l]) Models[[l]]$n <- sum(I) Models[[l]]$G <- 1 Models[[l]]$bic <- Models[[l]]$loglik <- NULL par <- Models[[l]]$parameters par$pro <- 1 par$mean <- if(oneD) par$mean[l] else par$mean[,l,drop=FALSE] par$variance$G <- 1 if(oneD) { # par$variance$sigma <- par$variance$sigma[l] if(length(par$variance$sigmasq) > 1) par$variance$sigmasq <- par$variance$sigmasq[l] else par$variance$sigmasq <- par$variance$sigmasq } else { par$variance$sigma <- par$variance$sigma[,,l,drop=FALSE] if(length(par$variance$sigmasq) > 1) par$variance$sigmasq <- par$variance$sigmasq[l] if(length(par$variance$scale) > 1) par$variance$scale <- par$variance$scale[l] if(length(dim(par$variance$shape)) > 1) par$variance$shape <- par$variance$shape[,l] if(length(dim(par$variance$orientation)) > 2) # LS was > 1 par$variance$orientation <- par$variance$orientation[,,l,drop=FALSE] if(length(dim(par$variance$cholSigma)) > 2) par$variance$cholSigma <- par$variance$cholSigma[,,l,drop=FALSE] 
if(length(dim(par$variance$cholsigma)) > 2) par$variance$cholsigma <- par$variance$cholsigma[,,l,drop=FALSE] } Models[[l]]$parameters <- par Models[[l]]$z <- NULL # z[I,,drop=FALSE] Models[[l]]$classification <- rep(1, sum(I)) # apply(z[I,,drop=FALSE], 1, which.max) Models[[l]]$uncertainty <- NULL # 1 - apply(z[I,], 1, max) Models[[l]]$observations <- which(I) } } else { # modelType == "MclustDA" i.e. different covariance structures for each class Models <- rep(list(NULL), ncl) mc[[1]] <- as.name("mclustBIC") mc$class <- NULL for(l in 1:ncl) { I <- (class == classLabel[l]) mc[[2]] <- data[I,] mc$G <- G[[l]] mc$modelNames <- as.character(modelNames[[l]]) if(verbose) cat(paste0("Class ", classLabel[l], ": ")) BIC <- eval(mc, parent.frame()) # slightly adjust parameters if none of the models can be fitted while(all(is.na(BIC))) { if(length(mc$modelNames) == 1) { j <- which(mc$modelNames == mclust.options("emModelNames")) if(j == 1) mc$G <- mc$G - 1 else mc$modelNames <- mclust.options("emModelNames")[j-1] } else { mc$G <- mc$G - 1 } BIC <- eval(mc, parent.frame()) } SUMMARY <- summary(BIC, data[I,]) SUMMARY$bic <- BIC; names(SUMMARY)[which(names(SUMMARY) == "bic")] <- "BIC" Models[[l]] <- c(SUMMARY, list(observations = which(I))) } # extract info for each model # bic <- sapply(Models, function(mod) max(mod$bic, na.rm=TRUE)) # loglik <- sapply(Models, function(mod) mod$loglik) # df <- (2*loglik - bic)/log(sapply(Models, function(mod) mod$n)) # then sum up # bic <- sum(bic) # loglik <- sum(loglik) # df <- sum(df) bic <- loglik <- df <- NULL } names(Models) <- classLabel Models$Vinv <- NULL out <- list(call = call, data = data, class = class, type = modelType, models = Models, n = n, d = p, bic = bic, loglik = loglik, df = df) out <- structure(out, prior = prior, control = control, class = "MclustDA") if(modelType == "MclustDA") { l <- logLik.MclustDA(out, data) out$loglik <- as.numeric(l) out$df <- attr(l, "df") out$bic <- 2*out$loglik - log(n)*out$df } return(out) } 
print.MclustDA <- function(x, ...) { cat("\'", class(x)[1], "\' model object:\n", sep = "") models <- x$models nclass <- length(models) n <- sapply(1:nclass, function(i) models[[i]]$n) M <- sapply(1:nclass, function(i) models[[i]]$modelName) G <- sapply(1:nclass, function(i) models[[i]]$G) out <- data.frame(n = n, Model = M, G = G) rownames(out) <- names(models) out <- as.matrix(out) names(dimnames(out)) <- c("Classes", "") print(out, quote = FALSE, right = TRUE) invisible() } summary.MclustDA <- function(object, parameters = FALSE, newdata, newclass, ...) { # collect info models <- object$models nclass <- length(models) classes <- names(models) n <- sapply(1:nclass, function(i) models[[i]]$n) G <- sapply(1:nclass, function(i) models[[i]]$G) modelName <- sapply(1:nclass, function(i) models[[i]]$modelName) prior <- attr(object, "prior") printParameters <- parameters par <- getParameters.MclustDA(object) class <- object$class data <- object$data pred <- predict(object, newdata = data, ...) err <- classError(class, pred$classification)$errorRate tab <- try(table(class, pred$classification)) if(class(tab) == "try-error") { err <- tab <- NA } else names(dimnames(tab)) <- c("Class", "Predicted") tab.newdata <- err.newdata <- NULL if(!missing(newdata)) { pred.newdata <- predict(object, newdata = newdata, ...) 
if(missing(newclass)) { tab.newdata <- table(pred.newdata$classification) names(dimnames(tab.newdata)) <- "Predicted" } else { tab.newdata <- table(newclass, pred.newdata$classification) names(dimnames(tab.newdata)) <- c("Class", "Predicted") err.newdata <- classError(newclass, pred.newdata$classification)$errorRate } } obj <- list(type = object$type, n = n, d = object$d, loglik = object$loglik, df = object$df, bic = object$bic, nclass = nclass, classes = classes, G = G, modelName = modelName, prior = prior, parameters = par, tab = tab, err = err, tab.newdata = tab.newdata, err.newdata = err.newdata, printParameters = printParameters) class(obj) <- "summary.MclustDA" return(obj) } print.summary.MclustDA <- function(x, digits = getOption("digits"), ...) { title <- paste("Gaussian finite mixture model for classification") cat(rep("-", nchar(title)),"\n",sep="") cat(title, "\n") cat(rep("-", nchar(title)),"\n",sep="") cat("\n", x$type, " model summary:\n", sep="") # tab <- data.frame("log-likelihood" = x$loglik, "n" = sum(x$n), "df" = x$df, "BIC" = x$bic, row.names = "") cat("\n"); print(tab, digits = digits) tab <- data.frame(n = x$n, Model = x$modelName, G = x$G) rownames(tab) <- x$classes tab <- as.matrix(tab) names(dimnames(tab)) <- c("Classes", "") print(tab, quote = FALSE, right = TRUE) if(!is.null(x$prior)) { cat("\nPrior: ") cat(x$prior$functionName, "(", paste(names(x$prior[-1]), x$prior[-1], sep = " = ", collapse = ", "), ")", sep = "") cat("\n") } if(x$printParameters) { cat("\nEstimated parameters:\n") for(i in seq(x$nclass)) { cat("\nClass = ", x$class[i], "\n", sep = "") par <- x$parameters[[i]] cat("\nMixing probabilities: ") cat(round(par$pro, digits = digits), "\n") cat("\nMeans:\n") print(par$mean, digits = digits) cat("\nVariances:\n") if(x$d > 1) { for(g in seq(x$G[i])) { cat("[,,", g, "]\n", sep = "") print(par$variance[,,g], digits = digits) } } else print(par$variance, digits = digits) } } cat("\nTraining classification summary:\n\n") 
  print(x$tab)
  cat("\nTraining error =", x$err, "\n")
  if(!is.null(x$tab.newdata))
    {
      cat("\nTest classification summary:\n\n")
      print(x$tab.newdata)
      if(!is.null(x$err.newdata))
        { cat("\nTest error =", x$err.newdata, "\n") }
    }
  invisible(x)
}

getParameters.MclustDA <- function(object)
{
  # collect info
  models <- object$models
  nclass <- length(models)
  classes <- names(models)
  n <- sapply(1:nclass, function(i) models[[i]]$n)
  G <- sapply(1:nclass, function(i) models[[i]]$G)
  modelName <- sapply(1:nclass, function(i) models[[i]]$modelName)
  # prior <- attr(object, "prior")
  par <- vector(mode = "list", length = nclass)
  for(i in seq(nclass))
    {
      par[[i]] <- models[[i]]$parameters
      # assign the default mixing proportion to the i-th element,
      # not to 'par' itself
      if(is.null(par[[i]]$pro)) par[[i]]$pro <- 1
      if(par[[i]]$variance$d < 2)
        {
          sigma <- rep(par[[i]]$variance$sigma, models[[i]]$G)[1:models[[i]]$G]
          names(sigma) <- names(par[[i]]$mean)
          par[[i]]$variance$sigma <- sigma
        }
      par[[i]]$variance <- par[[i]]$variance$sigma
    }
  return(par)
}

dmvnorm <- function (x, mean, sigma, log = FALSE)
{
  if(is.vector(x))
    { x <- matrix(x, ncol = length(x)) }
  if(missing(mean))
    { mean <- rep(0, length = ncol(x)) }
  if(missing(sigma))
    { sigma <- diag(ncol(x)) }
  if(NCOL(x) != NCOL(sigma))
    { stop("x and sigma have non-conforming size") }
  if(!isSymmetric(sigma, tol = sqrt(.Machine$double.eps),
                  check.attributes = FALSE))
    { stop("sigma must be a symmetric matrix") }
  if(length(mean) != NROW(sigma))
    { stop("mean and sigma have non-conforming size") }
  md <- mahalanobis(x, center = mean, cov = sigma)
  logdet <- determinant(sigma, logarithm = TRUE)$modulus
  # SVD <- svd(sigma)
  # Positive <- (SVD$d > sqrt(.Machine$double.eps))
  # invSigma <- SVD$v[,Positive, drop = FALSE] %*%
  #             ((1/SVD$d[Positive]) * t(SVD$u[, Positive, drop = FALSE]))
  # logdet <- sum(log(SVD$d[Positive]))
  # md <- mahalanobis(x, center = mean, cov = invSigma, inverted = TRUE)
  logdens <- -(ncol(x) * log(2 * pi) + logdet + md)/2
  if(log) return(logdens) else exp(logdens)
}

logLik.MclustDA <- function (object, data, ...)
{ if(missing(data)) data <- object$data n <- object$n d <- object$d par <- getParameters.MclustDA(object) nclass <- length(par) fclass <- sapply(object$models, function(m) m$n)/n logfclass <- log(fclass) G <- sapply(par, function(x) length(x$pro)) if(object$type == "EDDA") { df <- d * nclass + nVarParams(object$models[[1]]$modelName, d = d, G = nclass) } else { df <- sum(sapply(object$models, function(mod) with(mod, (G - 1) + G * d + nVarParams(modelName, d = d, G = G)))) } # ll <- sapply(object$models, function(mod) # { do.call("dens", c(list(data = data, logarithm = FALSE), mod)) }) # l <- sum(log(apply(ll, 1, function(l) sum(fclass*l)))) ll <- sapply(object$models, function(mod) { do.call("dens", c(list(data = data, logarithm = TRUE), mod)) }) l <- sum(apply(ll, 1, function(l) logsumexp(logfclass+l))) attr(l, "nobs") <- n attr(l, "df") <- df class(l) <- "logLik" return(l) } predict.MclustDA <- function(object, newdata, prior, ...) { if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") models <- object$models nclass <- length(models) n <- sapply(1:nclass, function(i) models[[i]]$n) if(missing(newdata)) { newdata <- object$data } if(object$d == 1) newdata <- as.vector(newdata) if(missing(prior)) { prior <- n/sum(n) } else { if(length(prior) != nclass) stop("wrong number of prior probabilities") if(any(prior < 0)) stop("prior must be nonnegative") } # densfun <- function(mod, data) # { do.call("dens", c(list(data = data), mod)) } # z <- as.matrix(data.frame(lapply(models, densfun, data = newdata))) # z <- sweep(z, MARGIN = 1, FUN = "/", STATS = apply(z, 1, max)) # z <- sweep(z, MARGIN = 2, FUN = "*", STATS = prior/sum(prior)) # z <- sweep(z, MARGIN = 1, STATS = apply(z, 1, sum), FUN = "/") # compute on log scale for stability densfun <- function(mod, data) { do.call("dens", c(list(data = data, logarithm = TRUE), mod)) } z <- as.matrix(data.frame(lapply(models, densfun, data = newdata))) z <- sweep(z, MARGIN = 2, FUN = "+", STATS = 
log(prior/sum(prior))) z <- sweep(z, MARGIN = 1, FUN = "-", STATS = apply(z, 1, logsumexp)) z <- exp(z) cl <- apply(z, 1, which.max) class <- factor(names(models)[cl], levels = names(models)) out <- list(classification = class, z = z) return(out) } plot.MclustDA <- function(x, what = c("scatterplot", "classification", "train&test", "error"), newdata, newclass, dimens, symbols, colors, ...) { object <- x # Argh. Really want to use object anyway if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") data <- object$data if(object$d > 1) dataNames <- colnames(data) else dataNames <- deparse(object$call$data) n <- nrow(data) p <- ncol(data) if(missing(newdata)) { newdata <- matrix(as.double(NA), 0, p) } else { newdata <- as.matrix(newdata) } if(ncol(newdata) != p) stop("incompatible newdata dimensionality") if(missing(newclass)) { newclass <- vector(length = 0) } else { if(nrow(newdata) != length(newclass)) stop("incompatible newdata and newclass") } models <- object$models M <- length(models) if(missing(dimens)) dimens <- 1:p trainClass <- object$class nclass <- length(unique(trainClass)) Data <- rbind(data, newdata) predClass <- predict(object, Data)$classification if(missing(symbols)) { if(M <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols") } else if(M <= 26) { symbols <- LETTERS } } if(length(symbols) == 1) symbols <- rep(symbols,M) # if(length(symbols) < M & what != "train&test") if(length(symbols) < M & !any(what == "train&test")) { warning("more symbols needed to show classification") symbols <- rep(16, M) } if(missing(colors)) { colors <- mclust.options("classPlotColors") } if(length(colors) == 1) colors <- rep(colors,M) # if(length(colors) < M & what != "train&test") if(length(colors) < M & !any(what == "train&test")) { warning("more colors needed to show classification") colors <- rep("black", M) } #################################################################### what <- match.arg(what, 
several.ok = TRUE) oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) plot.MclustDA.scatterplot <- function(...) { if(length(dimens) == 1) { eval.points <- seq(min(data[,dimens]), max(data[,dimens]), length = 1000) d <- matrix(as.double(NA), length(eval.points), nclass) for(i in seq(nclass)) { par <- models[[i]]$parameters if(par$variance$d > 1) { par$d <- 1 par$mean <- par$mean[dimens,,drop=FALSE] par$variance$sigmasq <- par$variance$sigma[dimens,dimens,] par$variance$modelName <- if(par$variance$G == 1) "X" else if(dim(par$variance$sigma)[3] > 1) "V" else "E" } d[,i] <- dens(modelName = par$variance$modelName, data = eval.points, parameters = par) } matplot(eval.points, d, type = "l", lty = 1, col = colors[seq(nclass)], xlab = dataNames[dimens], ylab = "Density") for(i in 1:nclass) { I <- models[[i]]$observations Axis(side = 1, at = data[I,], labels = FALSE, lwd = 0, lwd.ticks = 0.5, col.ticks = colors[i], tck = 0.03) } } scatellipses <- function(data, dimens, nclass, symbols, colors, ...) { m <- lapply(models, function(m) { m$parameters$mean <- array(m$parameters$mean[dimens,], c(2,m$G)) m$parameters$variance$sigma <- array(m$parameters$variance$sigma[dimens,dimens,], c(2,2,m$G)) m }) plot(data[,dimens], type = "n", ...) for(l in 1:nclass) { I <- m[[l]]$observations points(data[I,dimens[1]], data[I,dimens[2]], pch = symbols[l], col = colors[l]) for(k in 1:(m[[l]]$G)) { mvn2plot(mu = m[[l]]$parameters$mean[,k], sigma = m[[l]]$parameters$variance$sigma[,,k], k = 15) } } } if(length(dimens) == 2) { scatellipses(data, dimens, nclass, symbols, colors, ...) 
} if(length(dimens) > 2) { gap <- 0.2 on.exit(par(oldpar)) par(mfrow = c(p, p), mar = rep(c(gap,gap/2),each=2), oma = c(4, 4, 4, 4)) for(i in seq(p)) { for(j in seq(p)) { if(i == j) { plot(0,0,type="n",xlab="",ylab="",axes=FALSE) text(0,0, dataNames[i], cex=1.5, adj=0.5) box() } else { scatellipses(data, c(j,i), nclass, symbols, colors, xaxt = "n", yaxt = "n") } if(i == 1 && (!(j%%2))) axis(3) if(i == p && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == p && (i%%2)) axis(4) } } } } plot.MclustDA.classification <- function(...) { if(nrow(newdata) == 0 & length(dimens) == 1) { mclust1Dplot(data = data[,dimens], what = "classification", classification = predClass[1:n], colors = colors[1:nclass], xlab = dataNames[dimens], main = FALSE) title("Training data: known classification", cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 & length(dimens) == 2) { coordProj(data = data[,dimens], what = "classification", classification = predClass[1:n], main = FALSE, colors = colors[1:nclass], symbols = symbols[1:nclass]) title("Training data: known classification", cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 & length(dimens) > 2) { clPairs(data[,dimens], classification = predClass[1:n], colors = colors[1:nclass], symbols = symbols[1:nclass], gap = 0.2, cex.labels = 1.5, main = "Training data: known classification", cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 & length(dimens) == 1) { mclust1Dplot(data = newdata[,dimens], what = "classification", classification = predClass[-(1:n)], main = FALSE, xlab = dataNames[dimens]) title("Test data: MclustDA classification", cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 & length(dimens) == 2) { coordProj(data = newdata[,dimens], what ="classification", classification = predClass[-(1:n)], main = FALSE, colors = colors[1:nclass], symbols = symbols[1:nclass]) title("Test data: MclustDA classification", cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 & length(dimens) > 2) { on.exit(par(oldpar)) par(oma = c(0,0,10,0)) 
clPairs(data = newdata[,dimens], classification = predClass[-(1:n)], colors = colors[1:nclass], symbols = symbols[1:nclass], gap = 0.2, cex.labels = 1.5, main = "Test data: MclustDA classification", cex.main = oldpar$cex.lab) } } plot.MclustDA.traintest <- function(...) { if(length(dimens) == 1) { cl <- c(rep("Train", nrow(data)), rep("Test", nrow(newdata))) mclust1Dplot(data = Data[,dimens], what = "classification", classification = cl, main = FALSE, xlab = dataNames[dimens], colors = c("black", "red")) title("Training and Test data", cex.main = oldpar$cex.lab) } if(length(dimens) == 2) { cl <- c(rep("1", nrow(data)), rep("2", nrow(newdata))) coordProj(Data[,dimens], what = "classification", classification = cl, main = FALSE, CEX = 0.8, symbols = c(1,3), colors = c("black", "red")) title("Training (o) and Test (+) data", cex.main = oldpar$cex.lab) } if(length(dimens) > 2) { cl <- c(rep("1", nrow(data)), rep("2", nrow(newdata))) clPairs(Data[,dimens], classification = cl, symbols = c(1,3), colors = c("black", "red"), gap = 0.2, cex.labels = 1.3, CEX = 0.8, main = "Training (o) and Test (+) data", cex.main = oldpar$cex.lab) } } plot.MclustDA.error <- function(...) 
{ if(nrow(newdata) != length(newclass)) stop("incompatible newdata and newclass") if(nrow(newdata) == 0 & length(dimens) == 1) { mclust1Dplot(data = data[,dimens], what = "errors", classification = predClass[1:n], truth = trainClass, xlab = dataNames[dimens], main = FALSE) title("Train Error", cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 & length(dimens) > 1) { coordProj(data = data[,dimens[1:2]], what = "errors", classification = predClass[1:n], truth = trainClass, main = FALSE) title("Train Error", cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 & length(dimens) == 1) { mclust1Dplot(data = newdata[,dimens], what = "errors", classification = predClass[-(1:n)], truth = newclass, xlab = dataNames[dimens], main = FALSE) title("Test Error", cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 & length(dimens) > 1) { coordProj(data = newdata[,dimens[1:2]], what = "errors", classification = predClass[-(1:n)], truth = newclass, main = FALSE) title("Test Error", cex.main = oldpar$cex.lab) } } if(interactive() & length(what) > 1) { title <- "Model-based discriminant analysis plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "scatterplot") plot.MclustDA.scatterplot(...) if(what[choice] == "classification") plot.MclustDA.classification(...) if(what[choice] == "train&test") plot.MclustDA.traintest(...) if(what[choice] == "error") plot.MclustDA.error(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "scatterplot")) plot.MclustDA.scatterplot(...) if(any(what == "classification")) plot.MclustDA.classification(...) if(any(what == "train&test")) plot.MclustDA.traintest(...) if(any(what == "error")) plot.MclustDA.error(...) 
    }
  invisible()
}

classError <- function(classification, truth)
{
  q <- function(map, len, x)
  {
    x <- as.character(x)
    map <- lapply(map, as.character)
    y <- sapply(map, function(x) x[1])
    best <- y != x
    # note: all(len == 1), not all(len) == 1 -- the latter reduces to
    # TRUE whenever every length is nonzero
    if(all(len == 1)) return(best)
    errmin <- sum(as.numeric(best))
    z <- sapply(map, function(x) x[length(x)])
    mask <- len != 1
    counter <- rep(0, length(len))
    k <- sum(as.numeric(mask))
    j <- 0
    # y and z are vectors, so the loop condition needs any()
    while(any(y != z))
      {
        i <- k - j
        m <- mask[i]
        counter[m] <- (counter[m] %% len[m]) + 1
        y[x == names(map)[m]] <- map[[m]][counter[m]]
        temp <- y != x
        err <- sum(as.numeric(temp))
        if(err < errmin)
          {
            errmin <- err
            best <- temp
          }
        j <- (j + 1) %% k
      }
    best
  }
  if(any(isNA <- is.na(classification)))
    {
      classification <- as.character(classification)
      nachar <- paste(unique(classification[!isNA]), collapse = "")
      classification[isNA] <- nachar
    }
  MAP <- mapClass(classification, truth)
  len <- sapply(MAP[[1]], length)
  if(all(len == 1))
    {
      CtoT <- unlist(MAP[[1]])
      I <- match(as.character(classification), names(CtoT), nomatch = 0)
      one <- CtoT[I] != truth
    }
  else
    { one <- q(MAP[[1]], len, truth) }
  len <- sapply(MAP[[2]], length)
  if(all(len == 1))
    {
      TtoC <- unlist(MAP[[2]])
      I <- match(as.character(truth), names(TtoC), nomatch = 0)
      two <- TtoC[I] != classification
    }
  else
    { two <- q(MAP[[2]], len, classification) }
  err <- if(sum(as.numeric(one)) > sum(as.numeric(two)))
           as.vector(one) else as.vector(two)
  bad <- seq(along = classification)[err]
  list(misclassified = bad, errorRate = length(bad)/length(truth))
}

mapClass <- function(a, b)
{
  l <- length(a)
  x <- y <- rep(NA, l)
  if(l != length(b))
    {
      warning("unequal lengths")
      return(x)
    }
  aChar <- as.character(a)
  bChar <- as.character(b)
  Tab <- table(a, b)
  Ua <- dimnames(Tab)[[1]]
  Ub <- dimnames(Tab)[[2]]
  aTOb <- rep(list(Ub), length(Ua))
  names(aTOb) <- Ua
  bTOa <- rep(list(Ua), length(Ub))
  names(bTOa) <- Ub
  # -------------------------------------------------------------
  k <- nrow(Tab)
  Map <- rep(0, k)
  Max <- apply(Tab, 1, max)
  for(i in 1:k)
    { I <- match(Max[i], Tab[i, ], nomatch
= 0) aTOb[[i]] <- Ub[I] } if(is.numeric(b)) aTOb <- lapply(aTOb, as.numeric) k <- ncol(Tab) Map <- rep(0, k) Max <- apply(Tab, 2, max) for(j in (1:k)) { J <- match(Max[j], Tab[, j]) bTOa[[j]] <- Ua[J] } if(is.numeric(a)) bTOa <- lapply(bTOa, as.numeric) list(aTOb = aTOb, bTOa = bTOa) } cvMclustDA <- function(object, nfold = 10, verbose = interactive(), ...) { # nfold-cross validation (CV) prediction error for mclustDA # if nfold=n returns leave-one-out CV # if nfold=3 returns 2:1 CV error call <- object$call data <- object$data class <- as.factor(object$class) n <- length(class) G <- lapply(object$models, function(mod) mod$G) modelName <- lapply(object$models, function(mod) mod$modelName) # if(nfold == n) folds <- lapply(1:n, function(x) x) else folds <- balanced.folds(class, nfolds = nfold) nfold <- length(folds) # err <- rep(NA, nfold) cvclass <- factor(rep(NA, n), levels = levels(class)) if(verbose) { cat("cross-validating ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = nfold, style = 3) on.exit(close(pbar)) } for(i in 1:nfold) { x <- data[-folds[[i]],,drop=FALSE] y <- class[-folds[[i]]] call$data <- x call$class <- y call$G <- G call$modelNames <- modelName call$verbose <- FALSE mod <- eval(call, parent.frame()) modTest <- predict(mod, data[folds[[i]],,drop=FALSE]) classTest <- modTest$classification cvclass[folds[[i]]] <- classTest err[i] <- length(classTest) - sum(classTest == class[folds[[i]]], na.rm = TRUE) if(verbose) setTxtProgressBar(pbar, i) } # cv.error <- sum(err)/n folds.size <- sapply(folds,length) err <- err/folds.size se <- sqrt(var(err)/nfold) # return(list(classification = cvclass, error = cv.error, se = se)) } balanced.folds <- function(y, nfolds = min(min(table(y)), 10)) { # Create 'nfolds' balanced folds conditional on grouping variable 'y'. # Function useful in evaluating a classifier by balanced cross-validation. # Returns a list with 'nfolds' elements containing indexes of each fold. # # From package 'pamr' by T. Hastie, R. 
Tibshirani, Balasubramanian
  # Narasimhan, Gil Chu.
  totals <- table(y)
  fmax <- max(totals)
  nfolds <- min(nfolds, fmax)
  # makes no sense to have more folds than the max class size
  folds <- as.list(seq(nfolds))
  yids <- split(seq(y), y)
  # nice way to get the ids in a list, split by class
  ### create a big matrix, with enough rows to get in all the folds per class
  bigmat <- matrix(as.double(NA), ceiling(fmax/nfolds) * nfolds, length(totals))
  for(i in seq(totals))
    # { bigmat[seq(totals[i]), i] <- sample(yids[[i]]) }
    # Luca: this version has a bug if a class has only 1 obs
    {
      if (totals[i] == 1)
        bigmat[seq(totals[i]), i] <- yids[[i]]
      else
        bigmat[seq(totals[i]), i] <- sample(yids[[i]])
    }
  smallmat <- matrix(bigmat, nrow = nfolds) # reshape the matrix
  ### Now do a clever sort to mix up the NAs
  smallmat <- permute.rows(t(smallmat))
  res <- vector("list", nfolds)
  for(j in 1:nfolds)
    {
      jj <- !is.na(smallmat[, j])
      res[[j]] <- smallmat[jj, j]
    }
  return(res)
}

permute.rows <- function(x)
{
  dd <- dim(x)
  n <- dd[1]
  p <- dd[2]
  mm <- runif(length(x)) + rep(seq(n) * 10, rep(p, n))
  matrix(t(x)[order(mm)], n, p, byrow = TRUE)
}

# Deprecated functions

cv1EMtrain <- function(data, labels, modelNames = NULL)
{
  .Deprecated("cvMclustDA", package = "mclust")
  z <- unmap(as.numeric(labels))
  G <- ncol(z)
  dimDataset <- dim(data)
  oneD <- is.null(dimDataset) || length(dimDataset[dimDataset > 1]) == 1
  if (oneD || length(dimDataset) != 2)
    {
      if (is.null(modelNames)) modelNames <- c("E", "V")
      if (any(!match(modelNames, c("E", "V"), nomatch = 0)))
        stop("modelNames E or V for one-dimensional data")
      n <- length(data)
      cv <- matrix(1, nrow = n, ncol = length(modelNames))
      dimnames(cv) <- list(NULL, modelNames)
      for (m in modelNames)
        {
          for (i in 1:n)
            {
              mStep <- mstep(modelName = m, data = data[-i], z = z[-i,],
                             warn = FALSE)
              eStep <- do.call("estep", c(mStep, list(data = data[i],
                                                      warn = FALSE)))
              if (is.null(attr(eStep, "warn")))
                {
                  k <- (1:G)[eStep$z == max(eStep$z)]
                  l <- (1:G)[z[i,] == max(z[i,])]
                  cv[i, m] <-
as.numeric(!any(k == l)) } } } } else { if (is.null(modelNames)) modelNames <- mclust.options("emModelNames") n <- nrow(data) cv <- matrix(1, nrow = n, ncol = length(modelNames)) dimnames(cv) <- list(NULL, modelNames) for (m in modelNames) { for (i in 1:n) { mStep <- mstep(modelName = m, data = data[-i,], z = z[-i,], warn = FALSE) eStep <- do.call("estep", c(mStep, list(data = data[i, , drop = FALSE], warn = FALSE))) if (is.null(attr(eStep, "warn"))) { k <- (1:G)[eStep$z == max(eStep$z)] l <- (1:G)[z[i,] == max(z[i,])] cv[i, m] <- as.numeric(!any(k == l)) } } } } errorRate <- apply(cv, 2, sum) errorRate/n } bicEMtrain <- function(data, labels, modelNames=NULL) { .Deprecated("MclustDA", package = "mclust") z <- unmap(as.numeric(labels)) G <- ncol(z) dimData <- dim(data) oneD <- is.null(dimData) || length(dimData[dimData > 1]) == 1 if (oneD || length(dimData) != 2) { if (is.null(modelNames)) modelNames <- c("E", "V") if (any(!match(modelNames, c("E", "V"), nomatch = 0))) stop("modelNames E or V for one-dimensional data") } else { if (is.null(modelNames)) modelNames <- mclust.options("emModelNames") } BIC <- rep(NA, length(modelNames)) names(BIC) <- modelNames for (m in modelNames) { mStep <- mstep(modelName = m, data = data, z = z, warn = FALSE) eStep <- do.call("estep", c(mStep, list(data=data, warn=FALSE))) if (is.null(attr(eStep, "warn"))) BIC[m] <- do.call("bic", eStep) } BIC } cv.MclustDA <- function(...) { .Deprecated("cvMclustDA", package = "mclust") cvMclustDA(...) } "[.mclustDAtest" <- function (x, i, j, drop = FALSE) { clx <- oldClass(x) oldClass(x) <- NULL NextMethod("[") }mclust/R/icl.R0000644000176200001440000000627113052530310012652 0ustar liggesusers## ## Integrated Complete-data Likelihood (ICL) Criterion ## icl <- function(object, ...) UseMethod("icl") icl.Mclust <- function(object, ...) 
{ n <- object$n # G <- object$G + ifelse(is.na(object$hypvol),0,1) z <- object$z if(is.null(z)) z <- matrix(1, nrow = n, ncol = 1) C <- matrix(0, n, ncol(z)) for(i in 1:n) C[i, which.max(z[i,])] <- 1 object$bic + 2*sum(C * ifelse(z > 0, log(z), 0)) } icl.MclustDA <- function(object, ...) { n <- object$n z <- predict(object)$z df <- object$df if(is.null(z)) z <- matrix(1, nrow = n, ncol = 1) C <- matrix(0, n, ncol(z)) for(i in 1:n) C[i, which.max(z[i,])] <- 1 object$bic + 2*sum(C * ifelse(z > 0, log(z), 0)) } mclustICL <- function(data, G = NULL, modelNames = NULL, initialization = list(hcPairs=NULL, subset=NULL, noise=NULL), x = NULL, ...) { call <- match.call() data <- data.matrix(data) n <- nrow(data) d <- ncol(data) mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name("mclustBIC") mc[[2]] <- data BIC <- eval(mc, parent.frame()) # browser() class(BIC) <- "mclustBIC" G <- attr(BIC, "G") modelNames <- attr(BIC, "modelNames") ICL <- matrix(NA, nrow = length(G), ncol = length(modelNames)) mostattributes(ICL) <- attributes(BIC) if(!is.null(x)) { r <- match(as.character(G), rownames(x), nomatch = 0) c <- match(modelNames, colnames(x), nomatch = 0) ICL[r,c] <- BIC[r,c] } for(i in 1:nrow(ICL)) { for(j in 1:ncol(ICL)) { if(is.na(BIC[i,j])) next() # not fitted if(!is.na(ICL[i,j])) next() # already available Sumry <- summary(BIC, data, G = G[i], modelNames = modelNames[j]) ICL[i,j] <- icl.Mclust(Sumry) } } class(ICL) <- "mclustICL" # "mclustBIC" attr(ICL, "criterion") <- "ICL" return(ICL) } print.mclustICL <- function (x, pick = 3, ...) 
{ subset <- !is.null(attr(x, "subset")) oldClass(x) <- attr(x, "args") <- NULL attr(x, "criterion") <- NULL attr(x, "control") <- attr(x, "initialization") <- NULL attr(x, "oneD") <- attr(x, "warn") <- attr(x, "Vinv") <- NULL attr(x, "prior") <- attr(x, "G") <- attr(x, "modelNames") <- NULL ret <- attr(x, "returnCodes") == -3 n <- attr(x, "n") d <- attr(x, "d") attr(x, "returnCodes") <- attr(x, "n") <- attr(x, "d") <- NULL oldClass(x) <- attr(x, "args") <- attr(x, "criterion") <- NULL cat("Integrated Complete-data Likelihood (ICL) criterion:\n") print(x, ...) cat("\n") cat("Top", pick, "models based on the ICL criterion:\n") print(pickBIC(x, pick), ...) invisible() } summary.mclustICL <- function(object, G, modelNames, ...) { if(!missing(G)) object <- object[rownames(object) %in% G,,drop=FALSE] if(!missing(modelNames)) object <- object[,colnames(object) %in% modelNames,drop=FALSE] structure(pickBIC(object, ...), class = "summary.mclustICL") } print.summary.mclustICL <- function(x, digits = getOption("digits"), ...) { cat("Best ICL values:\n") x <- drop(as.matrix(x)) x <- rbind(ICL = x, "ICL diff" = x - max(x)) print(x, digits = digits) invisible() } plot.mclustICL <- function(x, ylab = "ICL", ...) { plot.mclustBIC(x, ylab = ylab, ...) } mclust/R/weights.R0000644000176200001440000000336512542512526013572 0ustar liggesusers############################################################################### ## Weights for MCLUST ## ## Written by Thomas Brendan Murphy ## Bugs fix by Luca Scrucca ############################################################################# me.weighted <- function(modelName, data, z, weights = NULL, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{ data <- as.matrix(data) N <- nrow(data) if(is.null(warn)) warn <- mclust.options("warn") if(is.null(weights)) { weights <- rep(1,N) } if(any(weights<0)|any(!is.finite(weights))) { stop("Weights must be positive and finite") } if(!is.vector(weights)) { stop("Weights must be a vector") } if(max(weights)>1) { if(warn) warning("Weights rescaled to have maximum equal to 1") weights <- weights/max(weights) } zw <- z*weights llold <- -Inf eps <- .Machine$double.eps criterion <- TRUE iter <- 0 while(criterion) { iter <- iter+1 fit.m <- do.call("mstep",list(data=data, z=zw, modelName=modelName, prior=prior, control=control, Vinv=Vinv, warn=warn)) fit.m$parameters$pro <- fit.m$parameters$pro/mean(weights) fit.e <- do.call("estep", c(list(data=data, control=control, Vinv=Vinv, warn=warn), fit.m)) zw <- pmax(fit.e$z*weights, eps) criterion <- criterion & (iter < control$itmax[1]) ldens <- do.call("dens", c(list(data=data, logarithm=TRUE, warn=warn), fit.m)) ll <- sum(weights*ldens) criterion <- criterion & (ll-llold > control$tol[1]) llold <- ll } fit <- fit.m fit$z <- fit.e$z fit$weights <- weights fit$loglik <- ll fit } mclust/R/clustCombi.R0000644000176200001440000004607113105265236014224 0ustar liggesusersclustCombi <- function(object = NULL, data = NULL, ...) { if(is.null(object) & is.null(data)) stop("An object or class 'Mclust' or data as matrix/data.frame must be provided!") if(is.null(object)) { object <- Mclust(data, ...) 
} else { if(!inherits(object, "Mclust")) stop("object not of class \"Mclust\"") data <- object$data } combiRes <- combi(data, object) return(combiRes) } combMat <- function(K,l1,l2) { l=c(min(l1,l2), max(l1,l2)) if(any(length(l1) == 0, length(l2) == 0)){ l1 = numeric(0) l2 = l[2]} else { l1 = l[1] l2 = l[2]} M <- rbind(cbind(diag(l2-1), matrix(rep(0,(K-l2+1)*(l2-1)), nrow=l2-1, ncol=K-l2+1)), cbind(matrix(rep(0,l2*(K-l2)), nrow=K-l2, ncol=l2), diag(K-l2))) M[l1,l2] <- 1 return(M) } ## Define xlog to handle x*log(x) as x=0 xlog <- function(x) { xlog1d <- function (xi) if (xi == 0) 0 else (xi*log(xi)) if (is.null(dim(x))) { return(sapply(x,xlog1d)) } else { return(matrix(sapply(x,xlog1d),dim(x))) } } combi <- function(data, MclustOutput, n = nrow(data), d = ncol(data)) { combiM <- list() combiM[[MclustOutput$G]] <- diag(MclustOutput$G) tau <- list() tau[[MclustOutput$G]] = MclustOutput$z classif <- list() classif[[MclustOutput$G]] = map(tau[[MclustOutput$G]]) for (K in MclustOutput$G:2) { dEnt <- matrix(0,nrow=K-1, ncol=K) preCombiTau <- tau[[K]] for (l1 in 1:(K-1)) { for (l2 in (l1+1):K) { postCombiTau <- t(combMat(K,l1,l2) %*% t(preCombiTau)) dEnt[l1,l2] <- sum(xlog(postCombiTau[,l1])) - sum(xlog(preCombiTau[,l1])+xlog(preCombiTau[,l2])) } } l1=which(dEnt==max(dEnt),arr.ind=TRUE)[1] l2=which(dEnt==max(dEnt),arr.ind=TRUE)[2] combiM[[K-1]] <- combMat(K,l1,l2) tau[[K-1]] = t(combiM[[K-1]] %*% t(tau[[K]])) classif[[K-1]] = map(tau[[K-1]]) } output <- list(classification = classif, combiM = combiM, combiz = tau, MclustOutput = MclustOutput) class(output) <- "clustCombi" return(output) } plot.clustCombi <- function(x, what = c("classification", "entropy", "tree"), ...) { object <- x # Argh. 
Really want to use object anyway if(!inherits(object, "clustCombi")) stop("object not of class \"clustCombi\"") data <- object$MclustOutput$data what <- match.arg(what, several.ok = TRUE) oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) plot.clustCombi.classification <- function(...) { # Sort z columns so that one of the two combined column is the last one at # each step (prevents the colors and symbols to be mixed as K -> K-1) curr <- 1:object$MclustOutput$G i <- numeric() j <- numeric() for(K in (object$MclustOutput$G):2) { l1 <- which(!object$combiM[[K-1]] %*% rep(1,K) == 1) l2 <- (object$combiM[[K-1]] %*% curr)[l1] - curr[l1] i <- c(curr[l1],i) j <- c(l2,j) curr <- object$combiM[[K-1]] %*% curr - l2*c(rep(0,(l1-1)),1,rep(0,(K-1-l1))) } permutMat <- function(j,K) { M <- diag(K) M[j,j] <- 0 M[K,K] <- 0 M[j,K] <- 1 M[K,j] <- 1 return(M) } combiM <- diag(object$MclustOutput$G) j <- c(1,j) i <- c(0,i) permutz <- object$MclustOutput$z[,j] par(ask=TRUE) for(K in object$MclustOutput$G:1) { curr_title <- if(K == object$MclustOutput$G) paste0("BIC solution (", as.character(K), " clusters)") else paste0("Combined solution with ", as.character(K), " clusters") if(ncol(as.matrix(data)) > 2) { par(oma = c(0,0,2,0), mar = { mar <- oldpar$mar; mar[3] <- 0.1; mar }) } else { par(mar = { mar <- oldpar$mar; mar[3] <- 2.1; mar }) } combiPlot(data = data, z = permutz, combiM = combiM, ...) if(ncol(as.matrix(data)) > 2) { title(curr_title, outer = TRUE, cex.main = 1) } else { title(curr_title, cex.main = 1) } combiM <- combMat(K,which(j==i[K]),K) %*% combiM } par(ask=FALSE) } if(interactive() & length(what) > 1) { title <- "Combined clusterings plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "classification") plot.clustCombi.classification(...) if(what[choice] == "entropy") entPlot(z = object$MclustOutput$z, combiM = object$combiM, ...) if(what[choice] == "tree") combiTree(object, ...) 
# re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "classification")) plot.clustCombi.classification(...) if(any(what == "entropy")) entPlot(z = object$MclustOutput$z, combiM = object$combiM, ...) if(any(what == "tree")) combiTree(object, ...) } invisible() } combiPlot <- function(data, z, combiM, ...) { p <- ncol(as.matrix(data)) if (p > 2) { clPairs(data[,1:min(5,p)], classification = map(t(combiM %*% t(z))), ...) } else if (p == 2) { mclust2Dplot(data = data, parameters = NULL, classification = map(t(combiM %*% t(z))), what = "classification", ...) } else { mclust1Dplot(data = as.matrix(data), parameters = NULL, classification = map(t(combiM %*% t(z))), what = "classification", ...) } } entPlot <- function(z, combiM, abc = c("standard", "normalized"), reg = 2, ...) { oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) if(length(abc) > 1) par(ask=TRUE) ent <- numeric() Kmax <- ncol(z) z0 <- z for(K in Kmax:1) { z0 <- t(combiM[[K]] %*% t(z0)) ent[K] <- -sum(xlog(z0)) } if(any(abc == "normalized")) { mergedn <- numeric() z0 <- z for(K in (Kmax-1):1) { z0 <- t(combiM[[K+1]] %*% t(z0)) mergedn[K] = sum(sapply(map(z0), function(x) any(which(as.logical(combiM[[K]][rowSums(combiM[[K]])==2,]))==x))) } } if(Kmax == 2) reg <- NULL if(any(abc == "standard")) { par(mfrow=c(1,2), oma=c(0,0,3,0), mar = { mar <- oldpar$mar; mar[3] <- 0.1; mar }) plot(1:Kmax, ent, xlab = "Number of clusters", ylab = "Entropy", xaxt = "n", ...) 
axis(side = 1, at = 1:Kmax) if(any(reg == 2)) { pcwsreg <- pcws2_reg(1:Kmax,ent) lines(1:pcwsreg$c, pcwsreg$a1*(1:pcwsreg$c) + pcwsreg$b1, lty = 2, col = "red") lines(pcwsreg$c:Kmax, pcwsreg$a2*(pcwsreg$c:Kmax) + pcwsreg$b2, lty = 2, col = "red") } if(any(reg == 3)) { pcwsreg <- pcws3_reg(1:Kmax,ent) lines(1:pcwsreg$c1, pcwsreg$a1*(1:pcwsreg$c1) + pcwsreg$b1, lty = 2, col = "blue") lines(pcwsreg$c1:pcwsreg$c2, pcwsreg$a2*(pcwsreg$c1:pcwsreg$c2) + pcwsreg$b2, lty = 2, col = "blue") lines(pcwsreg$c2:Kmax, pcwsreg$a3*(pcwsreg$c2:Kmax) + pcwsreg$b3, lty = 2, col = "blue") } plot(1:(Kmax-1), ent[2:Kmax]-ent[1:(Kmax-1)], xlab = "Number of clusters", ylab = "Difference in entropy", xaxt = "n", ...) axis(side = 1, at = 1:(Kmax-1)) title("Entropy plot", outer=TRUE, cex.main = 1) } if(any(abc == "normalized")) { par(mfrow=c(1,2), oma=c(0,0,3,0), mar = { mar <- oldpar$mar; mar[3] <- 0.1; mar }) plot(cumsum(c(0,mergedn)), ent, xlab = "Cumul. count of merged obs.", ylab = "Entropy", ...) if(any(reg == 2)) { X <- cumsum(c(0,mergedn)) pcwsreg <- pcws2_reg(X,ent) lines(X[1:pcwsreg$c], pcwsreg$a1*(X[1:pcwsreg$c]) + pcwsreg$b1, lty = 2, col = "red") lines(X[pcwsreg$c:Kmax], pcwsreg$a2*(X[pcwsreg$c:Kmax]) + pcwsreg$b2, lty = 2, col = "red") } if(any(reg == 3)) { X <- cumsum(c(0,mergedn)) pcwsreg <- pcws3_reg(X,ent) lines(X[1:pcwsreg$c1], pcwsreg$a1*(X[1:pcwsreg$c1]) + pcwsreg$b1, lty = 2, col = "blue") lines(X[pcwsreg$c1:pcwsreg$c2], pcwsreg$a2*(X[pcwsreg$c1:pcwsreg$c2]) + pcwsreg$b2, lty = 2, col = "blue") lines(X[pcwsreg$c2:Kmax], pcwsreg$a3*(X[pcwsreg$c2:Kmax]) + pcwsreg$b3, lty = 2, col = "blue") } plot(1:(Kmax-1), (ent[2:Kmax]-ent[1:(Kmax-1)])/mergedn, xlab = "Number of clusters", ylab = "Normalized difference in entropy", xaxt = "n", ...) 
axis(side = 1, at = 1:(Kmax-1))
    title("Normalized entropy plot", outer = TRUE, cex.main = 1)
  }
  invisible()
}

combiTree <- function(object, what = c("entropy", "step"),
                      type = c("triangle", "rectangle"),
                      edgePar = list(col = "darkgray", lwd = 2), ...)
{
  if(!inherits(object, "clustCombi"))
    stop("object not of class \"clustCombi\"")
  what <- match.arg(what, eval(formals(combiTree)$what), several.ok = FALSE)
  type <- match.arg(type, eval(formals(combiTree)$type), several.ok = FALSE)
  G <- object$MclustOutput$G
  combiM <- object$combiM
  # combiZ <- object$combiz
  # define merging pattern:
  # - negative numbers are leaves,
  # - positive are merged clusters (defined by row number in merge)
  curr <- 1:G
  merged <- -(1:G)
  merge <- matrix(NA, G-1, 2)
  for(k in 1:(G-1))
  {
    Kp = G - k + 1
    l1 = which(!combiM[[Kp-1]] %*% rep(1,Kp) == 1)
    l2 = (combiM[[Kp-1]] %*% curr)[l1] - curr[l1]
    curr <- setdiff(curr, max(l1, l2))
    merge[k,] <- merged[c(l1,l2)]
    merged[merged == merged[l1] | merged == merged[l2]] <- k
  }
  # order along the x-axis
  sel <- function(x)
  {
    if(x < 0) return(abs(x))
    else      return(c(sel(merge[x,1]), sel(merge[x,2])))
  }
  ord <- abs(c(sel(merge[nrow(merge),1]), sel(merge[nrow(merge),2])))
  if(what == "step")
  { # step
    h <- 1:(G-1)
    ylab <- "Steps"
  }
  else
  { # entropy
    entropy <- sapply(rev(object$combiz), function(z) -sum(xlog(z)))
    # normalized negentropy
    h <- entropy; h <- 1 - (h - min(h))/(max(h)-min(h)); h <- h[-1]
    ylab <- "1 - normalised entropy"
  }
  # hclust object (see help(hclust))
  hc <- list(merge = merge,   # merging matrix
             height = h,      # define merge heights
             order = ord,     # order of leaves
             labels = 1:G)    # labels of leaves
  class(hc) <- "hclust"       # make it an hclust object
  # plot(hc, hang = -1)       # look at the result
  # convert to a dendrogram object
  dendro <- as.dendrogram(hc)
  plot(dendro, type = type, edgePar = edgePar, ylab = ylab, ...)
invisible(dendro)
}

# pcws2_reg fits a piecewise linear regression -- with two pieces -- to
# (x,y), for any possible change point, and chooses the one leading to the
# smallest least-squares error.
pcws2_reg <- function(x, y)
{
  C <- length(x)
  ssBest <- Inf
  for (c in 2:(C-1))
  {
    x1 <- x[1:c]
    y1 <- y[1:c]
    x2 <- x[c:C]
    y2 <- y[c:C]
    a1 <- sum((x1-mean(x1))*(y1-mean(y1)))/sum((x1-mean(x1))^2)
    b1 <- -a1 * mean(x1) + mean(y1)
    a2 <- sum((x2-mean(x2))*(y2-mean(y2)))/sum((x2-mean(x2))^2)
    b2 <- -a2 * mean(x2) + mean(y2)
    ss <- sum((a1*x1+b1-y1)^2) + sum((a2*x2+b2-y2)^2)
    if (ss < ssBest)
    {
      ssBest <- ss
      cBest <- c
      a1Best <- a1
      a2Best <- a2
      b1Best <- b1
      b2Best <- b2
    }
  }
  # compute the residuals at the best change point, not at the last
  # candidate tried in the loop
  x1 <- x[1:cBest];  y1 <- y[1:cBest]
  x2 <- x[cBest:C];  y2 <- y[cBest:C]
  return(list(c = cBest, a1 = a1Best, b1 = b1Best, a2 = a2Best, b2 = b2Best,
              residuals = c(a1Best*x1+b1Best-y1, a2Best*x2+b2Best-y2)))
}

# pcws3_reg fits a piecewise linear regression -- with three pieces -- to
# (x,y), for any possible pair of change points, and chooses the pair
# leading to the smallest least-squares error.
pcws3_reg <- function(x, y)
{
  C <- length(x)
  ssBest <- Inf
  for (c1 in 2:(C-2))
  {
    for (c2 in (c1+1):(C-1))
    {
      x1 <- x[1:c1]
      y1 <- y[1:c1]
      x2 <- x[c1:c2]
      y2 <- y[c1:c2]
      x3 <- x[c2:C]
      y3 <- y[c2:C]
      a1 <- sum((x1-mean(x1))*(y1-mean(y1)))/sum((x1-mean(x1))^2)
      b1 <- -a1 * mean(x1) + mean(y1)
      a2 <- sum((x2-mean(x2))*(y2-mean(y2)))/sum((x2-mean(x2))^2)
      b2 <- -a2 * mean(x2) + mean(y2)
      a3 <- sum((x3-mean(x3))*(y3-mean(y3)))/sum((x3-mean(x3))^2)
      b3 <- -a3 * mean(x3) + mean(y3)
      ss <- sum((a1*x1+b1-y1)^2) + sum((a2*x2+b2-y2)^2) + sum((a3*x3+b3-y3)^2)
      if (ss < ssBest)
      {
        ssBest <- ss
        c1Best <- c1
        c2Best <- c2
        a1Best <- a1
        b1Best <- b1
        a2Best <- a2
        b2Best <- b2
        a3Best <- a3
        b3Best <- b3
      }
    }
  }
  # compute the residuals at the best pair of change points, not at the
  # last candidates tried in the loop
  x1 <- x[1:c1Best];       y1 <- y[1:c1Best]
  x2 <- x[c1Best:c2Best];  y2 <- y[c1Best:c2Best]
  x3 <- x[c2Best:C];       y3 <- y[c2Best:C]
  return(list(c1 = c1Best, c2 = c2Best, a1 = a1Best, b1 = b1Best,
              a2 = a2Best, b2 = b2Best, a3 = a3Best, b3 = b3Best,
              residuals = c(a1Best*x1+b1Best-y1,
                            a2Best*x2+b2Best-y2,
                            a3Best*x3+b3Best-y3)))
}

# print.clustCombi <- function(x, ...)
# {
# output <- x # Argh.
Really want to use 'output' # cat("\n EM/BIC Solution\n") # cat(" --------------- \n\n") # cat("Number of components: ", as.character(output$MclustOutput$G), "\n", sep = "") # # cat("Model name: ", output$MclustOutput$parameters$var$modelName, "\n\n", sep="") # for (K in 1:output$MclustOutput$G) # { # cat("Component num.", as.character(K),": ", "\n", sep="") # cat(" proportion: ", sprintf(fmt = "%4.2f ", output$MclustOutput$parameters$pro[K]), "\n", sep="") # if (output$Mclust$d == 1) cat(" mean: ", sprintf(fmt = "%4.2f ", output$MclustOutput$parameters$mean[K]), "\n", sep="") else cat(" mean: ", sprintf(fmt = "%4.2f ", output$MclustOutput$parameters$mean[,K]), "\n", sep="") # } # # cat("\n Combining steps \n") # cat(" --------------- \n\n") # # cl = paste(rep(" ", max(output$MclustOutput$G-4,0)), "Classes labels after this step", rep(" ", max(output$MclustOutput$G-4,0)), sep="") # # if (output$MclustOutput$G>4) for (K in 5:output$MclustOutput$G) cl = paste(" ", cl, " ", sep="") # # cat(" Step | Classes combined at this step | Classes labels after this step", "\n", sep="") # cat("-------|-------------------------------|-------------------------------", "\n", sep="") # curr = 1:output$MclustOutput$G # # cat(" 0 | --- |", sprintf(fmt = "%2d ", curr), "\n", sep="") # # for (K in 1:(output$MclustOutput$G-1)) # { # Kp = output$MclustOutput$G - K + 1 # l1 = which(!output$combiM[[Kp-1]] %*% rep(1,Kp) == 1) # l2 = (output$combiM[[Kp-1]] %*% curr)[l1] - curr[l1] # # nc1 = floor((7-nchar(as.character(K)))/2) # nc2 = (7-nchar(as.character(K))) - nc1 # nc3 = floor((33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))))/2) # nc4 = 33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))) - nc3 # # curr <- output$combiM[[Kp-1]] %*% curr - l2*c(rep(0,(l1-1)),1,rep(0,(Kp-1-l1))) # # cat(rep(" ", nc1), as.character(K), rep(" ", nc2), "|", rep(" ", nc3), as.character(l1), " & ", as.character(l2), rep(" ", nc4), "|", sprintf(fmt = "%2d ", curr), "\n", sep="") # # } # # 
cat("\n Classification for K classes: output$classification[[K]]\n") # cat(" Combining matrix (K classes -> (K-1) classes): output$combiM[[K]]\n\n") # } print.clustCombi <- function(x, digits = getOption("digits"), ...) { cat("\'", class(x)[1], "\' object:\n", sep = "") cat(paste0(" Mclust model: (", x$MclustOutput$modelName, ",", x$MclustOutput$G, ")\n")) cat(" Available object components: ") cat(names(x), "\n") cat(" Combining matrix (K+1 classes -> K classes): $combiM[[K]]\n") cat(" Classification for K classes: $classification[[K]]\n") invisible() } summary.clustCombi <- function(object, ...) { title <- paste("Combining Gaussian mixture components for clustering") out <- with(object, list(title = title, MclustModelName = object$MclustOutput$modelName, MclustG = object$MclustOutput$G, combiM = object$combiM)) class(out) <- "summary.clustCombi" return(out) } print.summary.clustCombi <- function(x, digits = getOption("digits"), ...) { cat(rep("-", nchar(x$title)),"\n",sep="") cat(x$title, "\n") cat(rep("-", nchar(x$title)),"\n",sep="") # cat("\nMclust model name:", x$MclustModelName, "\n") cat("Number of components:", x$MclustG, "\n") # cat("\nCombining steps:\n\n") # cl <- paste(rep(" ", max(x$MclustG-4,0)), # "Class labels after this step", # rep(" ", max(x$MclustG-4,0)), sep="") # # if(x$MclustG>4) # for(K in 5:x$MclustG) # cl <- paste(" ", cl, " ", sep="") cat(" Step | Classes combined at this step | Class labels after this step", "\n", sep="") cat("-------|-------------------------------|-----------------------------", "\n", sep="") curr <- 1:x$MclustG cat(" 0 | --- | ", sprintf(fmt = "%d ", curr), "\n", sep="") for(K in 1:(x$MclustG-1)) { Kp = x$MclustG - K + 1 l1 = which(!x$combiM[[Kp-1]] %*% rep(1,Kp) == 1) l2 = (x$combiM[[Kp-1]] %*% curr)[l1] - curr[l1] nc1 = floor((7-nchar(as.character(K)))/2) nc2 = (7-nchar(as.character(K))) - nc1 nc3 = floor((33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))))/2) nc4 = 33-nchar(paste(as.character(c(l1)), " & 
", as.character(l2))) - nc3 curr <- x$combiM[[Kp-1]] %*% curr - l2*c(rep(0,(l1-1)),1,rep(0,(Kp-1-l1))) cat(rep(" ", nc1), as.character(K), rep(" ", nc2), "|", rep(" ", nc3), as.character(l1), " & ", as.character(l2), rep(" ", nc4), "| ", sprintf(fmt = "%d ", curr), "\n", sep="") } invisible() } clustCombiOptim <- function(object, reg = 2, plot = FALSE, ...) { # Return the optimal number of clusters suggested by the method based on the # entropy and discussed in reference given in help(clustCombi). # # object = "clustCombi" object # reg = see help(entPlot) z <- object$MclustOutput$z combiM <- object$combiM ent <- rep(as.double(NA, nrow(z))) Kmax <- ncol(z) z0 <- z for(K in Kmax:1) { z0 <- t(combiM[[K]] %*% t(z0)) ent[K] <- -sum(xlog(z0)) } if(Kmax == 2) { # reg <- NULL # in the original code # my modification to get however a result reg <- 1 pcwsreg <- list(K = Kmax) } if(reg == 2) { pcwsreg <- pcws2_reg(1:Kmax, ent) } if(reg == 3) { pcwsreg <- pcws3_reg(1:Kmax, ent) } if(plot) { plot(1:Kmax, ent, xlab = "Number of clusters", ylab = "Entropy", panel.first = grid(), xaxt = "n", ...) 
axis(side = 1, at = 1:Kmax) if(reg == 2) { lines(1:pcwsreg$c, pcwsreg$a1 * (1:pcwsreg$c) + pcwsreg$b1, lty = 2, col = "red") lines(pcwsreg$c:Kmax, pcwsreg$a2 * (pcwsreg$c:Kmax) + pcwsreg$b2, lty = 2, col = "red") } if(reg == 3) { lines(1:pcwsreg$c1, pcwsreg$a1 * (1:pcwsreg$c1) + pcwsreg$b1, lty = 2, col = "blue") lines(pcwsreg$c1:pcwsreg$c2, pcwsreg$a2 * (pcwsreg$c1:pcwsreg$c2) + pcwsreg$b2, lty = 2, col = "blue") lines(pcwsreg$c2:Kmax, pcwsreg$a3 * (pcwsreg$c2:Kmax) + pcwsreg$b3, lty = 2, col = "blue") } } K <- pcwsreg[[1]] z0 <- z for(K in Kmax:K) { z0 <- t(combiM[[K]] %*% t(z0)) } out <- list(numClusters.combi = K, z.combi = z0, cluster.combi = map(z0)) return(out) } mclust/R/options.R0000644000176200001440000000503213204254423013577 0ustar liggesusers############################################################################# .mclust <- structure(list( emModelNames = c("EII", "VII", "EEI", "VEI", "EVI", "VVI", "EEE", "EVE", "VEE", "VVE", "EEV", "VEV", "EVV", "VVV"), # in mclust version <= 4.x # emModelNames = c("EII", "VII", "EEI", "VEI", "EVI", "VVI", "EEE", "EEV", "VEV", "VVV"), hcModelNames = c("VVV", "EEE", "VII", "EII"), hcUse = "SVD", subset = 2000, bicPlotSymbols = structure(c(17, 2, 16, 10, 13, 1, 15, 5, 8, 9, 12, 7, 14, 0, 17, 2), .Names = c("EII", "VII", "EEI", "EVI", "VEI", "VVI", "EEE", "EVE", "VEE", "VVE", "EEV", "VEV", "EVV", "VVV", "E", "V")), bicPlotColors = structure( { pal <- grDevices::colorRampPalette(c("forestgreen", "royalblue1", "red3"), space = "Lab") c("gray", "black", pal(12), "gray", "black") }, .Names = c("EII", "VII", "EEI", "EVI", "VEI", "VVI", "EEE", "EVE", "VEE", "VVE", "EEV", "VEV", "EVV", "VVV", "E", "V")), classPlotSymbols = c(16, 0, 17, 3, 15, 4, 1, 8, 2, 7, 5, 9, 6, 10, 11, 18, 12, 13, 14), classPlotColors = c("dodgerblue2", "red3", "green3", "slateblue", "darkorange", "skyblue1", "violetred4", "forestgreen", "steelblue4", "slategrey", "brown", "black", "darkseagreen", "darkgoldenrod3", "olivedrab", "royalblue", "tomato4", 
"cyan2", "springgreen2"), warn = FALSE)) mclust.options <- function(...) { current <- .mclust if(nargs() == 0) return(current) args <- list(...) if(length(args) == 1 && is.null(names(args))) { arg <- args[[1]] switch(mode(arg), list = args <- arg, character = return(.mclust[[arg]]), stop("invalid argument: ", dQuote(arg))) } if(length(args) == 0) return(current) n <- names(args) if (is.null(n)) stop("options must be given by name") changed <- current[n] current[n] <- args if(sys.parent() == 0) env <- asNamespace("mclust") else env <- parent.frame() assign(".mclust", current, envir = env) invisible(current) } mclust/R/init.R0000644000176200001440000004006413176547150013065 0ustar liggesusers############################################################################# ## Initialization for d-dim data ############################################ ############################################################################# # This new version allowing transformation of the data. # By default it behaves as the old function hc <- function(data, modelName = mclust.options("hcModelNames")[1], use = mclust.options("hcUse"), ...) 
{ if(!any(modelName == c("E", "V", "EII", "VII", "EEE", "VVV"))) stop("invalid 'modelName' argument for model-based hierarchical clustering") if(!any(use == c("VARS", "STD", "SPH", "PCS", "PCR", "SVD"))) stop("invalid 'use' argument for model-based hierarchical clustering") funcName <- paste("hc", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc$use <- mc$modelName <- NULL data <- data.matrix(data) dropCols <- function(x) { # select only those columns of matrix x with all finite numeric values x[,apply(x, 2, function(x) all(is.finite(x))), drop = FALSE] } use <- toupper(use) switch(use, "VARS" = { Z <- data }, "STD" = { Z <- scale(data, center = TRUE, scale = TRUE) Z <- dropCols(Z) }, "PCR" = { data <- scale(data, center = TRUE, scale = TRUE) data <- dropCols(data) SVD <- svd(data, nu=0) # evalues <- sqrt(SVD$d^2/(nrow(data)-1)) Z <- data %*% SVD$v }, "PCS" = { data <- scale(data, center = TRUE, scale = FALSE) SVD <- svd(data, nu=0) # evalues <- sqrt(SVD$d^2/(nrow(data)-1)) Z <- data %*% SVD$v Z <- dropCols(Z) }, "SPH" = { data <- scale(data, center = TRUE, scale = FALSE) n <- nrow(data); p <- ncol(data) Sigma <- var(data) * (n - 1)/n SVD <- svd(Sigma, nu = 0) Z <- data %*% SVD$v %*% diag(1/sqrt(SVD$d), p, p) Z <- dropCols(Z) }, "SVD" = { data <- scale(data, center = TRUE, scale = TRUE) data <- dropCols(data) p <- min(dim(data)) SVD <- svd(data, nu=0) Z <- data %*% SVD$v %*% diag(1/sqrt(SVD$d), p, p) }, stop("'use' argument not allowed. See help(mclust.options)") ) # call the proper hc function mc$data <- Z mc[[1]] <- as.name(funcName) out <- eval(mc, parent.frame()) attr(out, "call") <- match.call() class(out) <- "hc" return(out) } print.hc <- function(x, ...) 
{ if(!is.null(attr(x, "call"))) cat("Call:\n", deparse(attr(x, "call")), "\n\n", sep = "") cat("Model-Based Agglomerative Hierarchical Clustering:\n") if(!is.null(attr(x, "modelName"))) cat("Model name = ", attr(x, "modelName"), "\n") if(!is.null(attr(x, "dimensions"))) cat("Number of objects = ", attr(x, "dimensions")[1], "\n") invisible(x) } randomPairs <- function(data, seed, ...) { if(!missing(seed)) set.seed(seed) data <- as.matrix(data) n <- nrow(data) m <- if(n%%2 == 1) n-1 else n tree <- matrix(sample(1:n, m, replace = FALSE), nrow = 2, ncol = ceiling(m/2)) tree <- apply(tree, 2, sort) ind <- unique(tree[1,]) while(ncol(tree) < (m-1)) { addtree <- sort(sample(ind, size = 2, replace = FALSE)) ind <- setdiff(ind, addtree[2]) tree <- cbind(tree, addtree) } dimnames(tree) <- NULL structure(tree, initialPartition = 1:n, dimensions = c(n,2)) } hclass <- function(hcPairs, G) { initial <- attributes(hcPairs)$init n <- length(initial) k <- length(unique(initial)) G <- if(missing(G)) k:2 else rev(sort(unique(G))) select <- k - G if(length(select) == 1 && !select) return(matrix(initial, ncol = 1, dimnames = list(NULL, as.character(G)))) bad <- select < 0 | select >= k if(all(bad)) stop("No classification with the specified number of clusters") if(any(bad) & mclust.options("warn")) { warning("Some selected classifications are inconsistent with mclust object") } L <- length(select) cl <- matrix(as.double(NA), nrow = n, ncol = L, dimnames = list(NULL, as.character(G))) if(select[1]) m <- 1 else { cl[, 1] <- initial m <- 2 } for(l in 1:max(select)) { ij <- hcPairs[, l] i <- min(ij) j <- max(ij) initial[initial == j] <- i if(select[m] == l) { cl[, m] <- initial m <- m + 1 } } apply(cl[, L:1, drop = FALSE], 2, partconv, consec = TRUE) } hcEII <- function(data, partition, minclus = 1, ...) 
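# Example (hc/hclass): a typical initialization pipeline, sketched on an
# arbitrary numeric data matrix X (illustrative only):
#   hcTree <- hc(X, modelName = "VVV", use = "SVD")  # agglomerative merge pairs
#   cl <- hclass(hcTree, G = 2:4)    # classifications for 2, 3 and 4 groups
#   table(cl[, "3"])                 # class sizes of the 3-group partition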
{ if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #==================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || length(dimdat[dimdat > 1]) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) { stop("initial number of clusters is not greater than minclus") } if(n <= p & mclust.options("warn")) { warning("# of observations <= data dimension") } #============================================================= storage.mode(data) <- "double" ld <- max(c((l * (l - 1))/2, 3 * m)) temp <- .Fortran("hceii", data, as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), double(p), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 9)] temp[[1]] <- temp[[1]][1:m, 1:2, drop = FALSE] temp[[2]] <- temp[[2]][1:m] structure(t(temp[[1]]), initialPartition = partition, dimensions = dimdat, modelName = "EII", call = match.call()) } hcEEE <- function(data, partition, minclus = 1, ...) 
{ if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #===================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || length(dimdat[dimdat > 1]) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(n <= p & mclust.options("warn")) warning("# of observations <= data dimension") if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ## R 2.12.0: 32 bit Windows build fails due to compiler bug ## workaround: removal (hopefully temporary) of hc functionality for EEE # Luca: commented the next line and uncommented below # stop("hc for EEE model is not currently supported") temp <- .Fortran("hceee", data, as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), if(p < 3) integer(m) else integer(1), if(p < 4) integer(m) else integer(1), double(p), double(p * p), double(p * p), double(p * p), PACKAGE = "mclust")[c(1, 7:10)] # # currently temp[[5]] is not output temp[[4]] <- temp[[4]][1:2] temp[[5]] <- temp[[5]][1:2] names(temp[[5]]) <- c("determinant", "trace") temp[[1]] <- temp[[1]][1:(m + 1), ] if(p < 3) tree <- rbind(temp[[2]], temp[[3]]) else if(p < 4) tree <- rbind(temp[[1]][-1, 3], temp[[3]]) else tree <- t(temp[[1]][-1, 3:4, drop = FALSE]) determinant <- temp[[1]][, 1] attr(determinant, "breakpoints") <- temp[[4]] trace <- temp[[1]][, 2] structure(tree, initialPartition = partition, dimensions = dimdat, modelName = "EEE", call = match.call()) } hcVII <- function(data, partition, minclus = 1, alpha = 1, ...) 
{ if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #===================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || length(dimdat[dimdat > 1]) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(n <= p & mclust.options("warn")) warning("# of observations <= data dimension") if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ll <- (l * (l - 1))/2 ld <- max(n, ll, 3 * m) alpha <- alpha * traceW(data/sqrt(n * p)) alpha <- max(alpha, .Machine$double.eps) temp <- .Fortran("hcvii", data, as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), as.double(alpha), double(p), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 10)] temp[[1]] <- temp[[1]][1:m, 1:2, drop = FALSE] temp[[2]] <- temp[[2]][1:m] structure(t(temp[[1]]), initialPartition = partition, dimensions = dimdat, modelName = "VII", call = match.call()) } hcVVV <- function(data, partition, minclus = 1, alpha = 1, beta = 1, ...) 
{ if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") dimdat <- dim(data) oneD <- (is.null(dimdat) || length(dimdat[dimdat > 1]) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(n <= p & mclust.options("warn")) warning("# of observations <= data dimension") if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ll <- (l * (l - 1))/2 # dp <- duplicated(partition) #x[c((1:n)[!dp],(1:n)[dp]),], #as.integer(c(partition[!dp], partition[dp])), ld <- max(n, ll + 1, 3 * m) alpha <- alpha * traceW(data/sqrt(n * p)) alpha <- max(alpha, .Machine$double.eps) temp <- .Fortran("hcvvv", cbind(data, 0.), as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), as.double(alpha), as.double(beta), double(p), double(p * p), double(p * p), double(p * p), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 14)] temp[[1]] <- temp[[1]][1:m, 1:2, drop = FALSE] temp[[2]] <- temp[[2]][1:m] structure(t(temp[[1]]), initialPartition = partition, dimensions = dimdat, modelName = "VVV", call = match.call()) } ## Initialization for 1-dim data ############################################ # This version is bugged when a quantile is equal to the following # qclass <- function (x, k) # { # q <- quantile(x, seq(from = 0, to = 1, by = 1/k)) # cl <- rep(0, length(x)) # q[1] <- q[1] - 1 # for(i in 1:k) # cl[x > q[i] & x <= q[i+1]] <- i # return(cl) # } # This should correct the above bug qclass <- function (x, k) { x <- as.vector(x) # eps <- sqrt(.Machine$double.eps) # numerical accuracy problem if 
scale of x is large, so make tolerance # scale dependent eps <- sd(x)*sqrt(.Machine$double.eps) q <- NA n <- k while(length(q) < (k+1)) { n <- n + 1 q <- unique(quantile(x, seq(from = 0, to = 1, length = n))) } if(length(q) > (k+1)) { dq <- diff(q) nr <- length(q)-k-1 q <- q[-order(dq)[1:nr]] } q[1] <- min(x) - eps q[length(q)] <- max(x) + eps cl <- rep(0, length(x)) for(i in 1:k) { cl[ x >= q[i] & x < q[i+1] ] <- i } return(cl) } hcE <- function(data, partition, minclus = 1, ...) { if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #==================================================================== dimdat <- dim(data) oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1 if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ld <- max(c((l * (l - 1))/2, 3 * m)) temp <- .Fortran("hc1e", data, as.integer(n), as.integer(partition), as.integer(l), as.integer(m), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 3, 7)] temp[[1]] <- temp[[1]][1:m] temp[[2]] <- temp[[2]][1:m] temp[[3]] <- temp[[3]][1:m] structure(rbind(temp[[1]], temp[[2]]), initialPartition = partition, dimensions = n, modelName = "E", call = match.call()) } hcV <- function(data, partition, minclus = 1, alpha = 1, ...) 
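# Example (qclass): the quantile-based partition above is used to initialize
# EM for one-dimensional data; k classes are cut at (approximate) k-quantiles.
# An illustrative sketch on simulated data:
#   set.seed(1)
#   x <- c(rnorm(50), rnorm(50, mean = 10))
#   cl <- qclass(x, k = 2)           # labels in 1:2, one per observation
#   table(cl)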
{
  if(minclus < 1)
    stop("minclus must be positive")
  if(any(is.na(data)))
    stop("missing values not allowed in data")
  #=====================================================================
  dimdat <- dim(data)
  oneD <- is.null(dimdat) || length(dimdat[dimdat > 1]) == 1
  if(!oneD)
    stop("data must be one-dimensional")
  data <- as.vector(data)
  n <- length(data)
  if(missing(partition))
    partition <- 1:n
  else if(length(partition) != n)
    stop("partition must assign a class to each observation")
  partition <- partconv(partition, consec = TRUE)
  l <- length(unique(partition))
  attr(partition, "unique") <- l
  m <- l - minclus
  if(m <= 0)
    stop("initial number of clusters is not greater than minclus")
  storage.mode(data) <- "double"
  alpha <- alpha * (vecnorm(data - mean(data))^2/n)
  alpha <- max(alpha, .Machine$double.eps)  # floor at machine precision, as in hcVII/hcVVV
  ld <- max(c((l * (l - 1))/2, 3 * m))
  temp <- .Fortran("hc1v",
                   data,
                   as.integer(n),
                   as.integer(partition),
                   as.integer(l),
                   as.integer(m),
                   as.double(alpha),
                   as.integer(ld),
                   double(ld),
                   PACKAGE = "mclust")[c(1, 3, 8)]
  temp[[1]] <- temp[[1]][1:m]
  temp[[2]] <- temp[[2]][1:m]
  temp[[3]] <- temp[[3]][1:m]
  structure(rbind(temp[[1]], temp[[2]]),
            initialPartition = partition,
            dimensions = n, modelName = "V",
            call = match.call())
}

## mclust/R/toremove.R

# functions to be removed??

EMclust <- function(data, G = NULL, modelNames = NULL, prior = NULL,
                    control = emControl(),
                    initialization = list(hcPairs=NULL, subset=NULL, noise=NULL),
                    Vinv = NULL, warn = FALSE, x = NULL, ...)
{ if (!is.null(x)) { if (!missing(prior) || !missing(control) || !missing(initialization) || !missing(Vinv)) stop("only G and modelNames may be specified as arguments when x is supplied") prior <- attr(x,"prior") control <- attr(x,"control") initialization <- attr(x,"initialization") Vinv <- attr(x,"Vinv") warn <- attr(x,"warn") } dimData <- dim(data) oneD <- is.null(dimData) || length(dimData[dimData > 1]) == 1 if(!oneD && length(dimData) != 2) stop("data must be a vector or a matrix") if(oneD) { data <- drop(as.matrix(data)) n <- length(data) d <- 1 } else { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) } if (is.null(x)) { if (is.null(modelNames)) { if (d == 1) { modelNames <- c("E", "V") } else { modelNames <- mclust.options("emModelNames") if (n <= d) { # select only spherical and diagonal models m <- match(modelNames, c("EII", "VII", "EEI", "VEI", "EVI", "VVI"), nomatch = 0) modelNames <- modelNames[m] } } } if (is.null(G)) { G <- if (is.null(initialization$noise)) 1:9 else 0:9 } else { G <- sort(as.numeric(G)) } Gall <- G Mall <- modelNames } else { Glabels <- dimnames(x)[[1]] Mlabels <- dimnames(x)[[2]] if (is.null(G)) G <- Glabels if (is.null(modelNames)) modelNames <- Mlabels Gmatch <- match(as.character(G), Glabels, nomatch = 0) Mmatch <- match(modelNames, Mlabels, nomatch = 0) if (all(Gmatch) && all(Mmatch)) { attr( x, "G") <- as.numeric(G) attr( x, "modelNames") <- modelNames attr( x, "returnCodes") <- attr(x, "returnCodes")[as.character(G),modelNames,drop=FALSE] return(x[as.character(G),modelNames,drop=FALSE]) } Gall <- sort(as.numeric(unique(c(as.character(G), Glabels)))) Mall <- unique(c(modelNames, Mlabels)) } if (any(as.logical(as.numeric(G))) < 0) { if (is.null(initialization$noise)) { stop("G must be positive") } else { stop("G must be nonnegative") } } if (d == 1 && any(nchar(modelNames) > 1)) { Emodel <- any(sapply(modelNames, function(x) charmatch("E", x, nomatch = 0)[1]) == 1) Vmodel <- any(sapply(modelNames, function(x) 
charmatch("V", x, nomatch = 0)[1]) == 1) modelNames <- c("E", "V")[c(Emodel, Vmodel)] } l <- length(Gall) m <- length(Mall) EMPTY <- -.Machine$double.xmax BIC <- RET <- matrix(EMPTY, nrow = l, ncol = m, dimnames = list(as.character(Gall), as.character(Mall))) if (!is.null(x)) { BIC[dimnames(x)[[1]],dimnames(x)[[2]]] <- x RET[dimnames(x)[[1]],dimnames(x)[[2]]] <- attr(x, "returnCodes") BIC <- BIC[as.character(G),modelNames,drop=FALSE] RET <- RET[as.character(G),modelNames,drop=FALSE] } G <- as.numeric(G) Glabels <- as.character(G) Gout <- G if (is.null(initialization$noise)) { if (G[1] == 1) { for (mdl in modelNames[BIC["1",] == EMPTY]) { out <- mvn(modelName = mdl, data = data, prior = prior) BIC["1", mdl] <- bic(modelName = mdl, loglik = out$loglik, n = n, d = d, G = 1, equalPro = FALSE) RET["1", mdl] <- attr(out, "returnCode") } if (l == 1) { BIC[BIC == EMPTY] <- NA return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = initialization, warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$subset)) { ####################################################### # all data in initial hierarchical clustering phase ####################################################### if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(modelName = mclust.options("hcModelNames")[1], data = data) } else { hcPairs <- hc(modelName = "EII", data = data) } } else { hcPairs <- NULL # hcPairs <- hc(modelName = "E", data = data) } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) for (g in Glabels) { if (d > 1 || !is.null(hcPairs)) { z <- unmap(clss[, g]) } else { z <- unmap(qclass( data, as.numeric(g))) } for (modelName in modelNames[BIC[g,] == EMPTY]) { out <- me(modelName = modelName, data = data, z = z, prior = prior, control = control, warn = warn) BIC[g, modelName] <- 
bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = as.numeric(g), equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } } } else { ###################################################### # initial hierarchical clustering phase on a subset ###################################################### if (is.logical(initialization$subset)) initialization$subset <- (1:n)[initialization$subset] if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(data = data[initialization$subset, ], modelName = mclust.options("hcModelNames")[1]) } else { hcPairs <- hc(data = data[initialization$subset,], modelName = "EII") } } else { hcPairs <- NULL # hcPairs <- hc(data = data[initialization$subset], # modelName = "E") } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) for (g in Glabels) { if (d > 1 || !is.null(hcPairs)) { z <- unmap(clss[, g]) } else { z <- unmap(qclass(data[initialization$subset], as.numeric(g))) } dimnames(z) <- list(as.character(initialization$subset), NULL) for (modelName in modelNames[!is.na(BIC[g,])]) { ms <- mstep(modelName = modelName, z = z, data = as.matrix(data)[initialization$subset, ], prior = prior, control = control, warn = warn) # # ctrl <- control # ctrl$itmax[1] <- 1 # ms <- me(modelName = modelName, data = as.matrix(data)[ # initialization$subset, ], z = z, prior = prior, control = ctrl) # es <- do.call("estep", c(list(data = data, warn = warn), ms)) out <- me(modelName = modelName, data = data, z = es$z, prior = prior, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = as.numeric(g), equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } } } } else { ###################################################### # noise case ###################################################### if (!is.null(initialization$subset)) stop("subset option not implemented with noise") 
if (is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) noise <- initialization$noise if (!is.logical(noise)) noise <- as.logical(match(1:n, noise, nomatch = 0)) if (!G[1]) { hood <- n * log(Vinv) BIC["0", ] <- 2 * hood - log(n) if (l == 1) { return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = list(hcPairs = hcPairs, subset = initialization$subset), warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(data = data[!noise,], modelName = mclust.options("hcModelNames")[1]) } else { hcPairs <- hc(data = data[!noise,], modelName = "EII") } } else { hcPairs <- NULL # hcPairs <- hc(data = data[!noise], modelName = "E") } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) z <- matrix(0, n, max(G) + 1) for (g in Glabels) { z[] <- 0 k <- as.numeric(g) if (d > 1 || !is.null(hcPairs)) { z[!noise, 1:k] <- unmap(clss[, g]) } else { z[!noise, 1:k] <- unmap(qclass(data[!noise])) } z[noise, k+1] <- 1 K <- 1:(k+1) for (modelName in modelNames[BIC[g,] == EMPTY]) { out <- me(modelName = modelName, data = data, z = z[, K], prior = prior, control = control, Vinv = Vinv, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = k, noise = TRUE, equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } } } structure(BIC, G = Gout, modelNames = modelNames, prior = prior, control = control, initialization = list(hcPairs = hcPairs, subset = initialization$subset, noise = initialization$noise), Vinv = Vinv, warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC") } # EMclust <- function(...) 
.Defunct("mclustBIC", PACKAGE = "mclust") mclust/R/util.R0000644000176200001440000010011613171423463013065 0ustar liggesusers adjustedRandIndex <- function (x, y) { x <- as.vector(x) y <- as.vector(y) if(length(x) != length(y)) stop("arguments must be vectors of the same length") tab <- table(x,y) if(all(dim(tab)==c(1,1))) return(1) a <- sum(choose(tab, 2)) b <- sum(choose(rowSums(tab), 2)) - a c <- sum(choose(colSums(tab), 2)) - a d <- choose(sum(tab), 2) - a - b - c ARI <- (a - (a + b) * (a + c)/(a + b + c + d)) / ((a + b + a + c)/2 - (a + b) * (a + c)/(a + b + c + d)) return(ARI) } classError <- function(classification, truth) { q <- function(map, len, x) { x <- as.character(x) map <- lapply(map, as.character) y <- sapply(map, function(x) x[1]) best <- y != x if(all(len) == 1) return(best) errmin <- sum(as.numeric(best)) z <- sapply(map, function(x) x[length(x)]) mask <- len != 1 counter <- rep(0, length(len)) k <- sum(as.numeric(mask)) j <- 0 while(y != z) { i <- k - j m <- mask[i] counter[m] <- (counter[m] %% len[m]) + 1 y[x == names(map)[m]] <- map[[m]][counter[m]] temp <- y != x err <- sum(as.numeric(temp)) if(err < errmin) { errmin <- err best <- temp } j <- (j + 1) %% k } best } if (any(isNA <- is.na(classification))) { classification <- as.character(classification) nachar <- paste(unique(classification[!isNA]),collapse="") classification[isNA] <- nachar } MAP <- mapClass(classification, truth) len <- sapply(MAP[[1]], length) if(all(len) == 1) { CtoT <- unlist(MAP[[1]]) I <- match(as.character(classification), names(CtoT), nomatch= 0) one <- CtoT[I] != truth } else { one <- q(MAP[[1]], len, truth) } len <- sapply(MAP[[2]], length) if(all(len) == 1) { TtoC <- unlist(MAP[[2]]) I <- match(as.character(truth), names(TtoC), nomatch = 0) two <- TtoC[I] != classification } else { two <- q(MAP[[2]], len, classification) } err <- if(sum(as.numeric(one)) > sum(as.numeric(two))) as.vector(one) else as.vector(two) bad <- seq(along = classification)[err] 
list(misclassified = bad, errorRate = length(bad)/length(truth)) } map <- function(z, warn = mclust.options("warn"), ...) { nrowz <- nrow(z) cl <- numeric(nrowz) I <- 1:nrowz J <- 1:ncol(z) for(i in I) { cl[i] <- (J[z[i, ] == max(z[i, ])])[1] } if(warn) { K <- as.logical(match(J, sort(unique(cl)), nomatch = 0)) if(any(!K)) warning(paste("no assignment to", paste(J[!K], collapse = ","))) } return(cl) } unmap <- function(classification, groups=NULL, noise=NULL, ...) { # converts a classification to conditional probabilities # classes are arranged in sorted order unless groups is specified # if a noise indicator is specified, that column is placed last n <- length(classification) u <- sort(unique(classification)) if(is.null(groups)) { groups <- u } else { if(any(match( u, groups, nomatch = 0) == 0)) stop("groups incompatible with classification") miss <- match( groups, u, nomatch = 0) == 0 } cgroups <- as.character(groups) if(!is.null(noise)) { noiz <- match( noise, groups, nomatch = 0) if(any(noiz == 0)) stop("noise incompatible with classification") groups <- c(groups[groups != noise],groups[groups==noise]) noise <- as.numeric(factor(as.character(noise), levels = unique(groups))) } groups <- as.numeric(factor(cgroups, levels = unique(cgroups))) classification <- as.numeric(factor(as.character(classification), levels = unique(cgroups))) k <- length(groups) - length(noise) nam <- levels(groups) if(!is.null(noise)) { k <- k + 1 nam <- nam[1:k] nam[k] <- "noise" } z <- matrix(0, n, k, dimnames = c(names(classification),nam)) for(j in 1:k) { z[classification == groups[j], j] <- 1 } return(z) } orth2 <- function (n) { u <- rnorm(n) u <- u/vecnorm(u) v <- rnorm(n) v <- v/vecnorm(v) Q <- cbind(u, v - sum(u * v) * u) dimnames(Q) <- NULL Q } logsumexp <- function(x) { # Numerically efficient implementation of log(sum(exp(x))) max <- max(x) max + log(sum(exp(x-max))) } partconv <- function(x, consec = TRUE) { n <- length(x) y <- numeric(n) u <- unique(x) if(consec) { # number 
groups in order of first row appearance l <- length(u) for(i in 1:l) y[x == u[i]] <- i } else { # represent each group by its lowest-numbered member for(i in u) { l <- x == i y[l] <- (1:n)[l][1] } } y } partuniq <- function(x) { # finds the classification that removes duplicates from x charconv <- function(x, sep = "001") { if(!is.data.frame(x)) x <- data.frame(x) do.call("paste", c(as.list(x), sep = sep)) } n <- nrow(x) x <- charconv(x) k <- duplicated(x) partition <- 1.:n partition[k] <- match(x[k], x) partition } shapeO <- function(shape, O, transpose = FALSE) { dimO <- dim(O) if(dimO[1] != dimO[2]) stop("leading dimensions of O are unequal") if((ldO <- length(dimO)) != 3) { if(ldO == 2) { dimO <- c(dimO, 1) O <- array(O, dimO) } else stop("O must be a matrix or an array") } l <- length(shape) if(l != dimO[1]) stop("dimension of O and length s are unequal") storage.mode(O) <- "double" .Fortran("shapeo", as.logical(transpose), as.double(shape), O, as.integer(l), as.integer(dimO[3]), double(l * l), integer(1), PACKAGE = "mclust")[[3]] } traceW <- function(x) { # sum(as.vector(sweep(x, 2, apply(x, 2, mean)))^2) dimx <- dim(x) n <- dimx[1] p <- dimx[2] .Fortran("mcltrw", as.double(x), as.integer(n), as.integer(p), double(p), double(1), PACKAGE = "mclust")[[5]] } unchol <- function(x, upper = NULL) { if(is.null(upper)) { upper <- any(x[row(x) < col(x)]) lower <- any(x[row(x) > col(x)]) if(upper && lower) stop("not a triangular matrix") if(!(upper || lower)) { x <- diag(x) return(diag(x * x)) } } dimx <- dim(x) storage.mode(x) <- "double" .Fortran("uncholf", as.logical(upper), x, as.integer(nrow(x)), as.integer(ncol(x)), integer(1), PACKAGE = "mclust")[[2]] } vecnorm <- function (x, p = 2) { if (is.character(p)) { if (charmatch(p, "maximum", nomatch = 0) == 1) p <- Inf else if (charmatch(p, "euclidean", nomatch = 0) == 1) p <- 2 else stop("improper specification of p") } if (!is.numeric(x) && !is.complex(x)) stop("mode of x must be either numeric or complex") if 
(!is.numeric(p)) stop("improper specification of p") if (p < 1) stop("p must be greater than or equal to 1") if (is.numeric(x)) x <- abs(x) else x <- Mod(x) if (p == 2) return(.Fortran("d2norm", as.integer(length(x)), as.double(x), as.integer(1), double(1), PACKAGE = "mclust")[[4]]) if (p == Inf) return(max(x)) if (p == 1) return(sum(x)) xmax <- max(x) if (!xmax) xmax <- max(x) if (!xmax) return(xmax) x <- x/xmax xmax * sum(x^p)^(1/p) } errorBars <- function(x, upper, lower, width = 0.1, code = 3, angle = 90, horizontal = FALSE, ...) { # Draw error bars at x from upper to lower. If horizontal = FALSE (default) # bars are drawn vertically, otherwise horizontally. if(horizontal) arrows(upper, x, lower, x, length = width, angle = angle, code = code, ...) else arrows(x, upper, x, lower, length = width, angle = angle, code = code, ...) } covw <- function(X, Z, normalize = TRUE) # Given data matrix X(n x p) and weight matrix Z(n x G) computes # weighted means(p x G), weighted covariance matrices S(p x p x G) and # weighted scattering matrices W(p x p x G) { X <- as.matrix(X) Z <- as.matrix(Z) n <- nrow(X) p <- ncol(X) nZ <- nrow(Z) G <- ncol(Z) if(n != nZ) stop("X and Z must have same number of rows") if(normalize) Z <- apply(Z, 1, function(z) z/sum(z)) tmp <- .Fortran("covwf", X = as.double(X), Z = as.double(Z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mean = double(p*G), S = double(p*p*G), W = double(p*p*G), PACKAGE = "mclust") out <- list(mean = matrix(tmp$mean, p,G), S = array(tmp$S, c(p,p,G)), W = array(tmp$W, c(p,p,G)) ) return(out) } clPairs <- function (data, classification, symbols = NULL, colors = NULL, labels = dimnames(data)[[2]], CEX = 1, gap = 0.2, ...) 
{ data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(missing(classification)) classification <- rep(1, n) if(!is.factor(classification)) classification <- as.factor(classification) l <- length(levels(classification)) if(length(classification) != n) stop("classification variable must have the same length as nrows of data!") if(missing(symbols)) { if(l == 1) { symbols <- "." } if(l <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols") } else { if(l <= 9) { symbols <- as.character(1:9) } else if(l <= 26) { symbols <- LETTERS[1:l] } else symbols <- rep( 16,l) } } if(length(symbols) == 1) symbols <- rep(symbols, l) if(length(symbols) < l) { symbols <- rep( 16, l) warning("more symbols needed") } if(is.null(colors)) { if(l <= length(mclust.options("classPlotColors"))) colors <- mclust.options("classPlotColors")[1:l] } if(length(colors) == 1) colors <- rep(colors, l) if(length(colors) < l) { colors <- rep( "black", l) warning("more colors needed") } if(p > 2) { pairs(x = data, labels = labels, pch = symbols[classification], cex = CEX, col = colors[classification], gap = gap, ...) } else if(p == 2) { plot(data, cex = CEX, pch = symbols[classification], col = colors[classification], ...) } invisible(list(class = levels(classification), col = colors, pch = symbols[seq(l)])) } clPairsLegend <- function(x, y, class, col, pch, ...) { legend(x = x, y = y, legend = class, col = col, text.col = col, pch = pch, title.col = par("fg"), xpd = NA, ...) } coordProj <- function(data, dimens = c(1,2), parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "errors", "uncertainty"), addEllipses = TRUE, symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, CEX = 1, PCH = ".", main = FALSE, ...) 
{ if(is.null(dimens)) dimens <- c(1, 2) if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if(!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !any(is.na(mu)) && !any( is.na(sigma)) } else haveParams <- FALSE data <- data[, dimens, drop = FALSE] if(dim(data)[2] != 2) stop("need two dimensions") if(is.null(xlim)) xlim <- range(data[, 1]) if(is.null(ylim)) ylim <- range(data[, 2]) if(scale) { par(pty = "s") d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2.) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } if(is.null(dnames <- dimnames(data)[[2]])) xlab <- ylab <- "" else { xlab <- dnames[1] ylab <- dnames[2] } main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } mu <- array(mu[dimens, ], c(2, G)) sigma <- array(sigma[dimens, dimens, ], c(2, 2, G)) } if(!is.null(truth)) { if(is.null(classification)) { classification <- truth truth <- NULL } } if(!is.null(classification)) { classification <- as.character(classification) U <- sort(unique(classification)) L <- length(U) noise <- (U[1] == "0") # browser() if(is.null(symbols)) { if(L <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols")[1:L] if(noise) { symbols <- c(16,symbols)[1:L] } } else if(L <= 9) { symbols <- as.character(1:9) } else if(L <= 26) { symbols <- LETTERS } } else if(length(symbols) == 1) symbols <- rep(symbols, L) if(is.null(colors)) { if(L <= length(mclust.options("classPlotColors"))) { colors <- mclust.options("classPlotColors")[1:L] if(noise) { colors <- unique(c("black", 
colors))[1:L] } } } else if(length(colors) == 1) colors <- rep(colors, L) if(length(symbols) < L) { warning("more symbols needed to show classification ") symbols <- rep(16,L) } if(length(colors) < L) { warning("more colors needed to show classification ") colors <- rep("black",L) } } if(length(what) > 1) what <- what[1] choices <- c("classification", "errors", "uncertainty") m <- charmatch(what, choices, nomatch = 0) if(m) { what <- choices[m] bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "errors" && (is.null(classification) || is.null( truth))) if(bad) warning("insufficient input for specified plot") badClass <- (what == "errors" && (length(unique(classification)) != length( unique(truth)))) if(badClass && !bad) warning("classification and truth differ in number of groups") bad <- bad && badClass } else { bad <- !m warning("what improperly specified") } if(bad) what <- "bad" switch(EXPR = what, "classification" = { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection showing Classification") title(main = TITLE) } for(k in 1:L) { I <- classification == U[k] points(data[I, 1], data[I, 2], pch = symbols[k], col = colors[k], cex = if(U[k] == "0") CEX/3 else CEX) } }, "errors" = { ERRORS <- classError(classification, truth)$misclassified plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) 
if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection showing Errors") title(main = TITLE) } CLASSES <- unique(as.character(truth)) symOpen <- c(2, 0, 1, 5) symFill <- c(17, 15, 16, 18) good <- rep(TRUE, length(classification)) good[ERRORS] <- FALSE if(L > 4) { points(data[good, 1], data[good, 2], pch = 1, col = colors, cex = CEX) points(data[!good, 1], data[!good, 2], pch = 16, cex = CEX) } else { for(k in 1:L) { K <- truth == CLASSES[k] if(any(I <- (K & good))) { points(data[I, 1], data[I, 2], pch = symOpen[k], col = colors[k], cex = CEX) } if(any(I <- (K & !good))) { points(data[I, 1], data[I, 2], pch = symFill[k], cex = CEX) } } } }, "uncertainty" = { u <- (uncertainty - min(uncertainty)) / (max(uncertainty) - min(uncertainty) + sqrt(.Machine$double.eps)) b <- bubble(u, cex = CEX * c(0.3, 2), alpha = c(0.3, 0.9)) cl <- sapply(classification, function(cl) which(cl == U)) plot(data[, 1], data[, 2], pch = 19, main = "", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, cex = b$cex, col = mapply(adjustcolor, col = colors[cl], alpha.f = b$alpha), ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection showing Uncertainty") title(main = TITLE) } }, { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection") title(main = TITLE) } points(data[, 1], data[, 2], pch = PCH, cex = CEX) } ) if(haveParams && addEllipses) { ## plot ellipsoids for(k in 1:G) mvn2plot(mu = mu[,k], sigma = sigma[,,k], k = 15) } invisible() } randProj <- function(data, seeds = 0, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "errors", "uncertainty"), quantiles = c(0.75, 0.95), symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, CEX = 1, PCH = ".", main = FALSE, ...) 
{ if(scale) par(pty = "s") if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if(!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !any(is.na(mu)) && !any( is.na(sigma)) } else haveParams <- FALSE xlab <- ylab <- "" p <- ncol(data) if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } cho <- array(apply(sigma, 3, chol), c(p, p, G)) } if(!is.null(truth)) { if(is.null(classification)) { classification <- truth truth <- NULL } else { if(length(unique(truth)) != length(unique(classification))) truth <- NULL else truth <- as.character(truth) } } if(!is.null(classification)) { classification <- as.character(classification) U <- sort(unique(classification)) L <- length(U) noise <- (U[1] == "0") if(is.null(symbols)) { if(L <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols")[1:L] if(noise) { symbols <- c(16,symbols)[1:L] } } else if(L <= 9) { symbols <- as.character(1:9) } else if(L <= 26) { symbols <- LETTERS } } else if(length(symbols) == 1) symbols <- rep(symbols, L) if(is.null(colors)) { if(L <= length(mclust.options("classPlotColors"))) { colors <- mclust.options("classPlotColors")[1:L] if(noise) { colors <- unique(c("black", colors))[1:L] } } } else if(length(colors) == 1) colors <- rep(colors, L) if(length(symbols) < L) { warning("more symbols needed to show classification ") symbols <- rep(16,L) } if (length(colors) < L) { warning("more colors needed to show classification ") colors <- rep("black",L) } } if(length(what) > 1) what <- what[1] choices <- c("classification", "errors", "uncertainty") m <- charmatch(what, choices, nomatch = 0) if(m) { what <- 
choices[m] bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "errors" && (is.null(classification) || is.null( truth))) if(bad) warning("insufficient input for specified plot") } else { bad <- !m warning("what improperly specified") } if(bad) what <- "bad" main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) nullXlim <- is.null(xlim) nullYlim <- is.null(ylim) if(length(seeds) > 1) par(ask = TRUE) for(seed in seeds) { set.seed(seed) Q <- orth2(p) Data <- as.matrix(data) %*% Q if(dim(Data)[2] != 2) stop("need two dimensions") if(nullXlim) xlim <- range(Data[, 1]) if(nullYlim) ylim <- range(Data[, 2]) if(scale) { d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2.) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } switch(EXPR = what, classification = { plot(Data[, 1], Data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) for(k in 1:L) { I <- classification == U[k] points(Data[I, 1], Data[I, 2], pch = symbols[k], col = colors[k], cex = CEX) } if(main) { TITLE <- paste("Random Projection showing Classification: seed = ", seed) title(TITLE) } } , errors = { ERRORS <- classError(classification, truth)$misclassified plot(Data[, 1], Data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) 
if(main) { TITLE <- paste("Random Projection showing Errors: seed = ", seed) title(TITLE) } CLASSES <- unique(as.character(truth)) symOpen <- c(2, 0, 1, 5) symFill <- c(17, 15, 16, 18) good <- rep(TRUE, length(classification)) good[ERRORS] <- FALSE if(L > 4) { points(Data[good, 1], Data[good, 2], pch = 1, col = colors, cex = CEX) points(Data[!good, 1], Data[!good, 2], pch = 16, cex = CEX) } else { for(k in 1:L) { K <- truth == CLASSES[k] points(Data[K, 1], Data[K, 2], pch = symOpen[k], col = colors[k], cex = CEX) if(any(I <- (K & !good))) { points(Data[I, 1], Data[I, 2], pch = symFill[k], cex = CEX) } } } } , uncertainty = { plot(Data[, 1], Data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste("Random Projection showing Uncertainty: seed = ", seed ) title(TITLE) } breaks <- quantile(uncertainty, probs = sort(quantiles)) I <- uncertainty <= breaks[1] points(Data[I, 1], Data[I, 2], pch = 16, col = "gray75", cex = 0.5 * CEX) I <- uncertainty <= breaks[2] & !I points(Data[I, 1], Data[I, 2], pch = 16, col = "gray50", cex = 1 * CEX) I <- uncertainty > breaks[2] & !I points(Data[I, 1], Data[I, 2], pch = 16, col = "black", cex = 1.5 * CEX) } , { plot(Data[, 1], Data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste("Random Projection: seed = ", seed) title(TITLE) } points(Data[, 1], Data[, 2], pch = PCH, cex = CEX) } ) if(haveParams) { ## plot ellipsoids muTrans <- crossprod(Q, mu) sigmaTrans <- array(apply(cho, 3, function(R, Q) crossprod(R %*% Q), Q = Q), c(2, 2, G)) for(k in 1:G) mvn2plot(mu = muTrans[, k], sigma = sigmaTrans[, , k], k = 15) } } invisible() } surfacePlot <- function(data, parameters, type = c("contour", "image", "persp"), what = c("density", "uncertainty"), transformation = c("none", "log", "sqrt"), grid = 100, nlevels = 11, levels = NULL, col = grey(0.6), xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, scale = FALSE, main = FALSE, swapAxes = FALSE, verbose = FALSE, ...) 
{ grid1 <- function(n, range = c(0, 1), edge = TRUE) { if (any(n < 0 | round(n) != n)) stop("n must be a nonnegative integer") G <- rep(0, n) if (edge) { G <- seq(from = min(range), to = max(range), by = abs(diff(range))/(n-1)) } else { lj <- abs(diff(range)) incr <- lj/(2 * n) G <- seq(from = min(range) + incr, to = max(range) - incr, by = 2 * incr) } G } grid2 <- function(x, y) { lx <- length(x) ly <- length(y) xy <- matrix(0, nrow = lx * ly, ncol = 2) l <- 0 for (j in 1:ly) { for (i in 1:lx) { l <- l + 1 xy[l,] <- c(x[i], y[j]) } } xy } data <- as.matrix(data) if(dim(data)[2] != 2) stop("data must be two dimensional") densNuncer <- function(modelName, data, parameters) { if(is.null(parameters$variance$cholsigma)) { parameters$variance$cholsigma <- parameters$variance$sigma G <- dim(parameters$variance$sigma)[3] for(k in 1:G) parameters$variance$cholsigma[,,k] <- chol(parameters$variance$sigma[,,k]) } cden <- cdensVVV(data = data, parameters = parameters, logarithm = TRUE) pro <- parameters$pro if(!is.null(parameters$Vinv)) pro <- pro[-length(pro)] z <- sweep(cden, MARGIN = 2, FUN = "+", STATS = log(pro)) logden <- apply(z, 1, logsumexp) z <- sweep(z, MARGIN = 1, FUN = "-", STATS = logden) z <- exp(z) data.frame(density = exp(logden), uncertainty = 1 - apply(z, 1, max)) } pro <- parameters$pro mu <- parameters$mean sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !is.null(pro) && !any(is.na(mu)) && !any(is.na(sigma)) && !(any(is.na(pro))) if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } mu <- array(mu, c(2, G)) sigma <- array(sigma, c(2, 2, G)) } if(!haveParams) stop("need parameters to compute density") if(swapAxes) { if(haveParams) { parameters$pro <- pro[2:1] parameters$mean <- mu[2:1,] parameters$variance$sigma <- sigma[2:1, 2:1,] } 
data <- data[, 2:1] } main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) if(is.null(xlim)) xlim <- range(data[, 1]) if(is.null(ylim)) ylim <- range(data[, 2]) if(scale) { par(pty = "s") d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } dnames <- dimnames(data)[[2]] if(is.null(xlab)) { xlab <- if(is.null(dnames)) "" else dnames[1] } if(is.null(ylab)) { ylab <- if(is.null(dnames)) "" else dnames[2] } if(length(grid) == 1) grid <- c(grid, grid) x <- grid1(n = grid[1], range = xlim, edge = TRUE) y <- grid1(n = grid[2], range = ylim, edge = TRUE) xy <- grid2(x, y) if(verbose) cat("\n computing density and uncertainty over grid ...\n") Z <- densNuncer(modelName = "VVV", data = xy, parameters = parameters) lx <- length(x) ly <- length(y) CI <- type DU <- what TRANS <- transformation if(length(CI) > 1) CI <- CI[1] if(length(DU) > 1) DU <- DU[1] if(length(TRANS) > 1) TRANS <- TRANS[1] switch(EXPR = DU, density = { zz <- matrix(Z$density, lx, ly) title2 <- "Density" }, uncertainty = { zz <- matrix(Z$uncertainty, lx, ly) title2 <- "Uncertainty" }, stop("what improperly specified")) switch(EXPR = TRANS, none = { title1 <- "" }, log = { zz <- log(zz) title1 <- "log" }, sqrt = { zz <- sqrt(zz) title1 <- "sqrt" }, stop("transformation improperly specified")) switch(EXPR = CI, contour = { title3 <- "Contour" if(is.null(levels)) levels <- pretty(zz, nlevels) contour(x = x, y = y, z = zz, levels = levels, xlab = xlab, ylab = ylab, col = col, main = "", ...) }, image = { title3 <- "Image" if(length(col) == 1) { if(!is.null(levels)) nlevels <- length(levels) col <- mapply(adjustcolor, col = col, alpha.f = seq(0.1, 1, length = nlevels)) } image(x = x, y = y, z = zz, xlab = xlab, ylab = ylab, col = col, main = "", ...) }, persp = { title3 <- "Perspective" dots <- list(...) 
if(is.null(dots$theta)) dots$theta <- -30 if(is.null(dots$phi)) dots$phi <- 20 if(is.null(dots$expand)) dots$expand <- 0.6 do.call("persp", c(list(x = x, y = y, z = zz, xlab = xlab, ylab = ylab, col = col, zlab = "Density", main = ""), dots)) }, stop("type improperly specified")) if(main) { TITLE <- paste(c(title1, title2, title3, "Plot"), collapse = " ") title(TITLE) } invisible(list(x = x, y = y, z = zz)) } uncerPlot <- function (z, truth=NULL, ...) { oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) par(pty = "m") uncer <- 1 - apply(z, 1, max) ord <- order(uncer) M <- max(uncer) plot(uncer[ord], ylab = "uncertainty", ylim = c(-(M/32), M), xaxt = "n", xlab = "observations in order of increasing uncertainty", type = "n") points(uncer[ord], pch = 15, cex = 0.5) lines(uncer[ord]) abline(h = c(0, 0), lty = 3) if (!is.null(truth)) { truth <- as.numeric(as.factor(truth)) n <- length(truth) result <- map(z) bad <- classError(result, truth)$misclassified if(length(bad)) { for(i in bad) { x <- (1:n)[ord == i] lines(c(x, x), c(-(0.5/32), uncer[i]), lty = 1) } } } invisible() } bubble <- function(x, cex = c(0.2, 3), alpha = c(0.1, 1)) { x <- as.vector(x) cex <- cex[!is.na(cex)] alpha <- alpha[!is.na(alpha)] x <- (x - min(x))/(max(x) - min(x) + sqrt(.Machine$double.eps)) n <- length(x) r <- sqrt(x/pi) r <- (r - min(r, na.rm = TRUE))/ (max(r, na.rm = TRUE) - min(r, na.rm = TRUE) + sqrt(.Machine$double.eps)) cex <- r * diff(range(cex)) + min(cex) alpha <- x * diff(range(alpha)) + min(alpha) return(list(cex = cex, alpha = alpha)) } ## ##-- Convert to a from classes 'Mclust' and 'densityMclust' ------------------ ## as.Mclust <- function(x, ...) { UseMethod("as.Mclust") } as.Mclust.default <- function(x, ...) { if(inherits(x, "Mclust")) x else stop("argument 'x' cannot be coerced to class 'Mclust'") } as.densityMclust <- function(x, ...) { UseMethod("as.densityMclust") } as.densityMclust.default <- function(x, ...) 
{ if(inherits(x, "densityMclust")) x else stop("argument 'x' cannot be coerced to class 'densityMclust'") } as.densityMclust.Mclust <- function(x, ...) { class(x) <- c("densityMclust", class(x)) x$density <- dens(modelName = x$modelName, data = x$data, parameters = x$parameters, logarithm = FALSE) return(x) }mclust/R/zzz.R0000644000176200001440000000155713175410675012763 0ustar liggesusers.onLoad <- function(libname, pkgname) { library.dynam("mclust", pkgname, libname) } mclustStartupMessage <- function() { # Startup message obtained as # > figlet -f slant MCLUST msg <- c(paste0( " __ ___________ __ _____________ / |/ / ____/ / / / / / ___/_ __/ / /|_/ / / / / / / / /\\__ \\ / / / / / / /___/ /___/ /_/ /___/ // / /_/ /_/\\____/_____/\\____//____//_/ version ", packageVersion("mclust")), "\nType 'citation(\"mclust\")' for citing this R package in publications.") return(msg) } .onAttach <- function(lib, pkg) { # unlock .mclust variable allowing its modification unlockBinding(".mclust", asNamespace("mclust")) # startup message msg <- mclustStartupMessage() if(!interactive()) msg[1] <- paste("Package 'mclust' version", packageVersion("mclust")) packageStartupMessage(msg) invisible() } mclust/vignettes/0000755000176200001440000000000013205037571013574 5ustar liggesusersmclust/vignettes/mclust.Rmd0000644000176200001440000001471213201045253015544 0ustar liggesusers--- title: "A quick tour of **mclust**" author: "Luca Scrucca" date: "`r format(Sys.time(), '%d %b %Y')`" output: rmarkdown::html_vignette: toc: true number_sections: false css: "vignette.css" vignette: > %\VignetteIndexEntry{A quick tour of mclust} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} library(knitr) opts_chunk$set(fig.align = "center", out.width = "90%", fig.width = 6, fig.height = 5.5, dev.args=list(pointsize=10), par = TRUE, # needed for setting hook collapse = TRUE, # collapse input & ouput code in chunks warning = FALSE) knit_hooks$set(par = 
function(before, options, envir) { if(before && options$fig.show != "none") par(family = "sans", mar=c(4.1,4.1,1.1,1.1), mgp=c(3,1,0), tcl=-0.5) }) ``` # Introduction **mclust** is a contributed R package for model-based clustering, classification, and density estimation based on finite normal mixture modelling. It provides functions for parameter estimation via the EM algorithm for normal mixture models with a variety of covariance structures, and functions for simulation from these models. Also included are functions that combine model-based hierarchical clustering, EM for mixture estimation and the Bayesian Information Criterion (BIC) in comprehensive strategies for clustering, density estimation and discriminant analysis. Additional functionalities are available for displaying and visualizing fitted models along with clustering, classification, and density estimation results. This document gives a quick tour of **mclust** (version `r packageVersion("mclust")`) functionalities. It was written in R Markdown, using the [knitr](https://cran.r-project.org/package=knitr) package for production. See `help(package="mclust")` for further details and references provided by `citation("mclust")`. 
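The simulation facilities mentioned above can be illustrated with a short sketch. This example is illustrative and not evaluated here: it assumes a fitted `Mclust` object and draws new observations from the estimated mixture via `sim()`; see `help(sim)` for details.

```{r, eval=FALSE}
# fit a mixture model to the Old Faithful data, then simulate from it
mod <- Mclust(faithful)
# simulate 500 new observations from the fitted Gaussian mixture;
# the first column of the result labels the generating component
newdata <- sim(modelName = mod$modelName, parameters = mod$parameters,
               n = 500, seed = 1)
```
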
```{r, message = FALSE, echo=-2} library(mclust) cat(mclust:::mclustStartupMessage(), sep="") ``` # Clustering ```{r, par=TRUE} data(diabetes) class <- diabetes$class table(class) X <- diabetes[,-1] head(X) clPairs(X, class) BIC <- mclustBIC(X) plot(BIC) summary(BIC) mod1 <- Mclust(X, x = BIC) summary(mod1, parameters = TRUE) plot(mod1, what = "classification") table(class, mod1$classification) par(mfrow = c(2,2)) plot(mod1, what = "uncertainty", dimens = c(2,1), main = "") plot(mod1, what = "uncertainty", dimens = c(3,1), main = "") plot(mod1, what = "uncertainty", dimens = c(2,3), main = "") par(mfrow = c(1,1)) ICL = mclustICL(X) summary(ICL) plot(ICL) LRT = mclustBootstrapLRT(X, modelName = "VVV") LRT ``` # Classification ## EDDA ```{r} data(iris) class <- iris$Species table(class) X <- iris[,1:4] head(X) mod2 <- MclustDA(X, class, modelType = "EDDA") summary(mod2) plot(mod2, what = "scatterplot") plot(mod2, what = "classification") ``` ## MclustDA ```{r} data(banknote) class <- banknote$Status table(class) X <- banknote[,-1] head(X) mod3 <- MclustDA(X, class) summary(mod3) plot(mod3, what = "scatterplot") plot(mod3, what = "classification") ``` ## Cross-validation error ```{r} unlist(cvMclustDA(mod2, nfold = 10)[2:3]) unlist(cvMclustDA(mod3, nfold = 10)[2:3]) ``` # Density estimation ## Univariate ```{r} data(acidity) mod4 <- densityMclust(acidity) summary(mod4) plot(mod4, what = "BIC") plot(mod4, what = "density", data = acidity, breaks = 15) plot(mod4, what = "diagnostic", type = "cdf") plot(mod4, what = "diagnostic", type = "qq") ``` ## Multivariate ```{r} data(faithful) mod5 <- densityMclust(faithful) summary(mod5) plot(mod5, what = "BIC") plot(mod5, what = "density") plot(mod5, what = "density", type = "image", col = "dodgerblue3", grid = 100) plot(mod5, what = "density", type = "persp") ``` # Bootstrap inference ```{r} boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs") summary(boot1, what = "se") summary(boot1, what = "ci") par(mfrow=c(4,3)) 
plot(boot1, what = "pro") plot(boot1, what = "mean") ``` ```{r} boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs") summary(boot4, what = "se") summary(boot4, what = "ci") par(mfrow=c(2,2)) plot(boot4, what = "pro") plot(boot4, what = "mean") ``` # Dimension reduction ## Clustering ```{r} mod1dr <- MclustDR(mod1) summary(mod1dr) plot(mod1dr, what = "pairs") plot(mod1dr, what = "boundaries", ngrid = 200) mod1dr <- MclustDR(mod1, lambda = 1) summary(mod1dr) plot(mod1dr, what = "scatterplot") plot(mod1dr, what = "boundaries", ngrid = 200) ``` ## Classification ```{r} mod2dr <- MclustDR(mod2) summary(mod2dr) plot(mod2dr, what = "scatterplot") plot(mod2dr, what = "boundaries", ngrid = 200) mod3dr <- MclustDR(mod3) summary(mod3dr) plot(mod3dr, what = "scatterplot") plot(mod3dr, what = "boundaries", ngrid = 200) ``` # Using colorblind-friendly palettes Most of the graphs produced by **mclust** use colors that by default are defined in the following options: ```{r} mclust.options("bicPlotColors") mclust.options("classPlotColors") ``` The first option controls colors used for plotting BIC, ICL, etc. curves, whereas the second option is used to assign colors for indicating clusters or classes when plotting data. Color-blind-friendly palettes can be defined and assigned to the above options as follows: ```{r} cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") bicPlotColors <- mclust.options("bicPlotColors") bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6]) mclust.options("bicPlotColors" = bicPlotColors) mclust.options("classPlotColors" = cbPalette) clPairs(iris[,-5], iris$Species) mod <- Mclust(iris[,-5]) plot(mod, what = "BIC") plot(mod, what = "classification") ``` The above color definitions are adapted from http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/, but users can easily define their own palettes if needed. # References Scrucca L., Fop M., Murphy T. B. and Raftery A. E. 
(2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, *The R Journal*, 8/1, pp. 205-233. https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf

Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, *Journal of the American Statistical Association*, 97/458, pp. 611-631.

Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. *Technical Report* No. 597, Department of Statistics, University of Washington.
mclust/vignettes/vignette.css0000644000176200001440000001053213176552313016137 0ustar liggesusers
body { background-color: #fff; margin: 1em auto; max-width: 700px; overflow: visible; padding-left: 2em; padding-right: 2em; font-family: "Open Sans", Helvetica, Arial, sans-serif; font-size: 14px; line-height: 1.35; }
#header { text-align: center; }
#TOC { clear: both; margin: 0 0 10px 10px; padding: 4px; width: 400px; border: 1px solid #CCCCCC; border-radius: 5px; background-color: #f6f6f6; font-size: 12px; line-height: 1.3; }
#TOC .toctitle { font-weight: bold; font-size: 15px; margin-left: 5px; }
#TOC ul { padding-left: 40px; margin-left: -1.5em; margin-top: 5px; margin-bottom: 5px; }
#TOC ul ul { margin-left: -2em; }
#TOC li { line-height: 16px; }
table { margin: 1em auto; border-width: 1px; border-color: #DDDDDD; border-style: outset; border-collapse: collapse; }
table th { border-width: 2px; padding: 5px; border-style: inset; }
table td { border-width: 1px; border-style: inset; line-height: 18px; padding: 5px 5px; }
table, table th, table td { border-left-style: none; border-right-style: none; }
table thead, table tr.even { background-color: #f7f7f7; }
p { margin: 0.5em 0; }
blockquote { background-color: #f6f6f6; padding: 0.25em 0.75em; }
hr { border-style: solid; border: none; border-top: 1px solid #777; margin: 28px 0; }
dl {
margin-left: 0; }
dl dd { margin-bottom: 13px; margin-left: 13px; }
dl dt { font-weight: bold; }
ul { margin-top: 0; }
ul li { list-style: circle outside; }
ul ul { margin-bottom: 0; }
figure { margin: 0; text-align: center; }
div.figure { text-align: center; }
img { background-color: #FFFFFF; padding: 2px; border: 1px solid #DDDDDD; border-radius: 3px; /* border: 1px solid #CCCCCC; */ margin: 0 5px; }
h1 { padding-top: 10px; border-bottom: 4px solid #f7f7f7; margin-top: 0; font-size: 150%; line-height: 30px; }
h1.title { color: rgb(46,90,136); font-size: 200%; line-height: 40px; }
h2 { border-bottom: 3px solid #f7f7f7; padding-top: 10px; padding-bottom: 2px; margin-left: 4px; font-size: 120%; }
h3 { border-bottom: 2px solid #f7f7f7; padding-top: 10px; margin-left: 8px; font-size: 110%; }
h3.subtitle { color: rgb(16,78,139); }
h4 { border-bottom: 1px solid #f7f7f7; margin-left: 12px; font-size: 100%; }
h4.author { border-bottom: 0; color: rgb(77,77,77); }
h4.date { border-bottom: 1px solid #f7f7f7; font-size: 100%; color: rgb(77,77,77); }
h5, h6 { border-bottom: 1px solid #ccc; font-size: 105%; }
address { font-weight: bold; color: rgb(77,77,77); margin-left: 8px; font-size: 100%; }
a { color: rgb(24,116,205); text-decoration: none; }
a:hover { color: rgb(28,134,238); }
a:visited { color: rgb(24,116,205); }
a:visited:hover { color: rgb(28,134,238); }
a[href^="http:"] { text-decoration: underline; }
a[href^="https:"] { text-decoration: underline; }
code { font-family: 'DejaVu Sans Mono', 'Droid Sans Mono', Consolas, Monaco, monospace; color: rgb(77,77,77); font-size: 85%; }
p > code, li > code { padding: 2px 0; }
pre, code { background-color: #F8F8F8; border-radius: 3px; color: #333; }
pre { white-space: pre-wrap; /* Wrap long lines */ border-radius: 3px; margin: 5px 0px 10px 0px; padding: 10px; }
pre:not([class]) { background-color: #F8F8F8; }
strong { font-weight: bold; }
hi { font-weight: bold; color: rgb(28,134,238); }
/* Class described in
https://benjeffrey.com/posts/pandoc-syntax-highlighting-css Colours from https://gist.github.com/robsimmons/1172277 */ code > span.kw { color: rgb(23,74,133); font-weight: bold; } /* Keyword */ code > span.dt { color: #4075AD; } /* DataType */ code > span.dv { color: #555; } /* DecVal (decimal values) */ /* code > span.bn { color: #d14; } /* BaseN */ /* code > span.fl { color: #d14; } /* Float */ /* code > span.ch { color: #d14; } /* Char */ code > span.st { color: #407546; } /* String */ code > span.co { color: #888888; font-style: italic; } /* Comment */ code > span.ot { color: #007020; } /* OtherToken */ code > span.al { color: #ff0000; font-weight: bold; } /* AlertToken /* code > span.fu { color: #900; font-weight: bold; } /* Function calls */ code > span.er { color: #a61717; background-color: #e3d2d2; } /* ErrorTok */ mclust/MD50000644000176200001440000001626713205247703012110 0ustar liggesusers50b88afcb39120761d2bcdb1701120d1 *DESCRIPTION 5e053616fbe64b0edada24d63067cdc1 *NAMESPACE 1ec80fbab7a89c8194de40f82e88f2de *R/bootstrap.R 1b6f33b59844e7c930b525c1ddb91c33 *R/clustCombi.R 1d12c2bff2abe41c8f64fc8c4f0e7c4a *R/densityMclust.R 430a7e958e36ad1c89137964a9f60445 *R/gmmhd.R 54094fb362fb3d7105263f3e10f1bbaf *R/icl.R 67228833ad80b453b6e27d33b0f3be13 *R/impute.R e7bb6282de53efcf42491674eae991f5 *R/init.R 0552b7b57db09aaec5ffb4951624fd36 *R/mclust.R 6d226fc745cddbbd613aceaa7cd239ef *R/mclustaddson.R 787e8ca5f920b042d8c1673903c48182 *R/mclustda.R 13112723db00e4a4aaa5b18e2a03a5af *R/mclustdr.R f927261081645766ae7d05462fd6dd61 *R/options.R 1c4ebbea1015342313457a1705ede4cd *R/toremove.R f02b7b557f7628153ee2ec7ae8e8d190 *R/util.R f4e0dc159cb386c756eeb3d91e833a4d *R/weights.R 9be9d3df6b113ff912c6b54afc264190 *R/zzz.R c38e9c1e3062ba1eb8d46423196a8a5b *build/vignette.rds 8e7c342e4085bab236f6987919d6b486 *data/Baudry_etal_2010_JCGS_examples.rda 199ebbd1e5052c11a462ed77f30beabb *data/GvHD.rda 085a30352b3f69b407ed1a2788446542 *data/acidity.rda a7c2f1483533ae3b27cfab88043e12b6 
*data/banknote.txt.gz 3083130f89db4d6b36dff65f92c20348 *data/chevron.rda a34f6759c7851b72fc76f3ec328ff35d *data/cross.rda 6630fcb619de4fa611dbebd4e43aac13 *data/diabetes.rda dbdeebf56e9f31e42851353cc5cbc034 *data/thyroid.rda 6d5cabd301e0c0f12cb5825554b2a1bf *data/wreath.rda 921d80211e44ca9d6b7b685f32f5f1ad *inst/CITATION 76458b6211cc932d35a8c6301086cbad *inst/NEWS c0b4af2d7229af3aeea906010589f509 *inst/doc/mclust.R d9a954797a515adbb7c16c46afa9a421 *inst/doc/mclust.Rmd c18647ee1cb593f21badc20fc82c7680 *inst/doc/mclust.html b6a4dc645c8081f7edf4dacb25086895 *man/Baudry_etal_2010_JCGS_examples.Rd 9557e1a95dee2fb443e0b870d50e3a9a *man/GvHD.Rd 6a60e0364e2598d743c953e0a7f083ee *man/Mclust.Rd fc544622bef5bcbb4720de17a477e77b *man/MclustBootstrap.Rd 2db38d20a186dae8e7675520c1be90ec *man/MclustDA.Rd eeb9bd70e4efa9ccc390d61c589b32a3 *man/MclustDR.Rd 5476e6d3dfb5eee60c3178a46c11613e *man/MclustDRsubsel.Rd be7d9c84a562490de0d98f27345b349a *man/acidity.Rd 25e4f71de95b8fff43de166f3245d8ba *man/adjustedRandIndex.Rd 40bf7f2db2b9677c5b9bf7733da4aeac *man/banknote.Rd eb92ce9bfd46fcf3eb40fb062bd82071 *man/bic.Rd 91c47a73a377e471de41dfd14de7d82c *man/cdens.Rd a0f106c2c8ffeb166a9bbe85fdceb392 *man/cdensE.Rd 3cb1c9457f1b0f03056e314304e78274 *man/cdfMclust.Rd b0cfe540c4eb91f99f49c2127562cd49 *man/chevron.Rd 75fa2aac4db3636c6e157bfa48671390 *man/clPairs.Rd 95acdf199eda7e905af6d62d5dee4037 *man/classError.Rd fd20be5ef505890b5fb82337022b9f0a *man/clustCombi-internals.Rd f8217ed1b7f5fb9c01aba42ef108111f *man/clustCombi.Rd d96652286bdc361222e433015dd34c09 *man/clustCombiOptim.Rd 48abd66a9f3c36f3abff08a91621194a *man/combMat.Rd 475a007399de44be290408d1ad78dd23 *man/combiPlot.Rd 0cd38d0a73b603a863713b799c381935 *man/combiTree.Rd a44f544424dc60023f7eb22e831d38f2 *man/coordProj.Rd 1d07d6f1d66d69631c66d96ca89504b2 *man/covw.Rd 2e85d0cc4be9094e9d19aeffe69a296d *man/cross.Rd c1774b8ca195f2e462c6a90ae16a0b70 *man/cvMclustDA.Rd 7ba712f61529f680d276ef92f6fe4c52 *man/decomp2sigma.Rd 
a3e7795e683dd50ffd2bc554b99250c3 *man/defaultPrior.Rd b1195aa852e01046fee1d9484e2fafc9 *man/dens.Rd a4095e46c457ef454624e778ac6172fb *man/densityMclust.Rd c9e15c27049f95bc8a863714d20b0d85 *man/densityMclust.diagnostic.Rd 9555c9ce741dcf600b6458e3c90e72c6 *man/diabetes.Rd b654b9c121541bb601ec612c74b794e7 *man/em.Rd 3fa2a2ce77200f1855abaf1d769db11d *man/emControl.Rd 22e8433a92b0245f67c9acce549056a0 *man/emE.Rd a8b01b80ea614467f102f237a2830c68 *man/entPlot.Rd 6e7e4d7ec91567f07c07d95891720b0b *man/errorBars.Rd 2b3e5919d2445368ee9d40f5498e5ed6 *man/estep.Rd ed4de5adcefd398a587e3fe176be5ef0 *man/estepE.Rd df6daeac190195cfd0f6cde35e18365c *man/gmmhd.Rd 0214e66d4ec2603c9b2438af1cd0b047 *man/hc.Rd 3b505711416a599b4a71586ad9c6ee66 *man/hcE.Rd d63ca89381715f823a343ab10d8eb590 *man/hclass.Rd f5c538311fa12c3298e7524d9eb62817 *man/hypvol.Rd 918939d5858154bd5aa2a789e8adda3a *man/icl.Rd cfaaf677496a141f288bd91068a8c94a *man/imputeData.Rd fafd046c110aa10b51f1b45f040fdf6b *man/imputePairs.Rd cc916d8c4282eb5e20899b25d0b741ea *man/logLik.Mclust.Rd db9dd261d97ef515e5008f7f9351bc0b *man/logLik.MclustDA.Rd 699915f3a4cf8bfd6718f71f2a754c48 *man/majorityVote.Rd 7d8989594ce3a00a8633781ff84658f0 *man/map.Rd d14828c2540a8710153683f0ccababef *man/mapClass.Rd 05c969579ccb630cd896ad7da79579d3 *man/mclust-deprecated.Rd 5abe7723f0a0449aabfd877603d7134d *man/mclust-internal.Rd 50ae4334bc463755f4b1ed5575c374b1 *man/mclust-package.Rd bbf504c64c2d1e87ad3fb394c5b8281c *man/mclust.options.Rd 6bad206b10172b17b29ce4462b8a65fa *man/mclust1Dplot.Rd 7dcfd9736e6ba03d72267680f8721944 *man/mclust2Dplot.Rd 501cd1d40ea71e4db04f6434eeb236da *man/mclustBIC.Rd 8b53ade82e8b688643dbd9e3f8a4bc08 *man/mclustBootstrapLRT.Rd f5cbeb541cd54f0deaa2a236f2988129 *man/mclustICL.Rd a1be1fdcd8d4925c2c55ec4143c7d9c7 *man/mclustModel.Rd fb1c70bbf7f6015ff75056d9a49fe984 *man/mclustModelNames.Rd 9c49fb43d204333caacaa786cac65af9 *man/mclustVariance.Rd 5d8414e6a62ce1c47ff292db60faddb3 *man/me.Rd 601d4fddd6bde3efc311aa4c09157b26 
*man/me.weighted.Rd 203afc28381354b4e7d1d6f76749e25d *man/meE.Rd f5aba54db25d276db69176271a273774 *man/mstep.Rd 5fe8fa68540ed08aaab3ed8e66587de1 *man/mstepE.Rd a6758084584175d70efaaaa7d6454933 *man/mvn.Rd 95f6a5c67782e061d7e922dd937ff7bc *man/mvnX.Rd c3796dae5e43a2c26a9ee9b73fcf2276 *man/nMclustParams.Rd 2099a0c748ce32038c52f4aedf1886c5 *man/nVarParams.Rd 36355172607046a0c37f10dee1197ed5 *man/partconv.Rd cde8fd0e95c3ca5e84846d10a851fd76 *man/partuniq.Rd b959e76134b880265f27763c8f58888a *man/plot.Mclust.Rd c84c4de6bd074ba8b2c376425a17fa9d *man/plot.MclustBoostrap.Rd 5ebda0c2ef8654eb5d83589eae39fd86 *man/plot.MclustDA.Rd 90fb48a539700bd0c521c992a5673b5c *man/plot.MclustDR.Rd 52dac99ab54e1f8447dc0eac605c97d2 *man/plot.clustCombi.Rd ad9d68a07a37697fb1924f780a8fb628 *man/plot.densityMclust.Rd 12951e89daa1f9b1ac1ab80cb05b93f5 *man/plot.mclustBIC.Rd 73cbe302b1095c8a9069163414501214 *man/plot.mclustICL.Rd f640c08bd9098247a97a44f30c89a4cf *man/predict.Mclust.Rd c67d113b971eba2d386d19c7eae2d959 *man/predict.MclustDA.Rd e84b696c5b8eff056814c3cba07bee9f *man/predict.MclustDR.Rd 8733ade274da088b88c74ba022739813 *man/predict.densityMclust.Rd c59592f03a637487c2b2e8473ac427e0 *man/priorControl.Rd 86b8cb0e2fb1de61cf5f04be4f7b673d *man/randProj.Rd 164e38869f2f45ca98a8ad38912e15bf *man/randomPairs.Rd db6e45328c8d98198da2b16c881403b4 *man/sigma2decomp.Rd 9d8d67169745bd4498faf84a31687ea4 *man/sim.Rd a28f1d4193a31f36ba654fcf05836c22 *man/simE.Rd 77fd1a835bb490319a118d67fe022de4 *man/summary.Mclust.Rd be774ef3d7a4acd7272d815fa89d63a1 *man/summary.MclustBootstrap.Rd 95de4c4b7332009fd6efffaef686cf95 *man/summary.MclustDA.Rd 103fff818063adef9f3262398342fb0a *man/summary.MclustDR.Rd 986ecadd060b62810d73fa37ec72dc19 *man/summary.mclustBIC.Rd d4a8b773608162be1f841acbe41d490d *man/surfacePlot.Rd db0b51f96c35a65bea5efa26a19ee50d *man/thyroid.Rd 16f301b425aebb1ac6b6f0175427dabc *man/uncerPlot.Rd 6310a244a9397d432360fd51766bfe29 *man/unmap.Rd c1b81d23059192faf2b0fdb40b0bc0d2 *man/wreath.Rd 
7ab4aae0ebff42ce151372d97076416b *src/Makevars 40d32b1d4496c878d52464b8f1500e16 *src/init.c 73b881e1007a4923ba62ea25536d3620 *src/mclust.f 32962ceccae710a5c5727183c04fc418 *src/mclustaddson.f d9a954797a515adbb7c16c46afa9a421 *vignettes/mclust.Rmd 0906cb78f2dae91a6181c01c8511edef *vignettes/vignette.css mclust/build/0000755000176200001440000000000013205037571012663 5ustar liggesusersmclust/build/vignette.rds0000644000176200001440000000032213205037571015217 0ustar liggesusers‹‹àb```b`fab`b2™… 1# 'æÊMÎ)-.Ñ ÊMA“sT(,ÍLÎV(É/-RÈOS€¨DSÅ ÕŸQ’›ƒ&Å3è a°8D€NB¨cÍKÌM-FÓÌî’Zš—þ‡]?ãtû¼S+Ëó‹`zPÔ°AÕ°¸eæ¤Âì É,s˜\Ü LÆ t7`˜â~΢ür=˜xAaÞ$þºG“s‹Ñ=Ê•’X’¨—VÔr7˜Wúvµmclust/DESCRIPTION0000644000176200001440000000245313205247703013276 0ustar liggesusersPackage: mclust Version: 5.4 Date: 2017-11-21 Title: Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation Description: Gaussian finite mixture models fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization, dimension reduction for visualisation, and resampling-based inference. Authors@R: c(person("Chris", "Fraley", role = "aut"), person("Adrian E.", "Raftery", role = "aut"), person("Luca", "Scrucca", role = c("aut", "cre"), email = "luca.scrucca@unipg.it"), person("Thomas Brendan", "Murphy", role = "ctb"), person("Michael", "Fop", role = "ctb")) Depends: R (>= 3.0) Imports: stats, utils, graphics, grDevices Suggests: knitr (>= 1.12), rmarkdown (>= 0.9), mix (>= 1.0), geometry (>= 0.3-6), MASS License: GPL (>= 2) URL: http://www.stat.washington.edu/mclust/ VignetteBuilder: knitr Repository: CRAN ByteCompile: true LazyLoad: yes NeedsCompilation: yes Packaged: 2017-11-21 15:00:11 UTC; luca Author: Chris Fraley [aut], Adrian E. 
Raftery [aut],
  Luca Scrucca [aut, cre],
  Thomas Brendan Murphy [ctb],
  Michael Fop [ctb]
Maintainer: Luca Scrucca <luca.scrucca@unipg.it>
Date/Publication: 2017-11-22 10:21:55 UTC
mclust/man/0000755000176200001440000000000013205037573012341 5ustar liggesusersmclust/man/em.Rd0000644000176200001440000001204013175051656013232 0ustar liggesusers
\name{em}
\alias{em}
\title{
  EM algorithm starting with E-step for parameterized Gaussian mixture models.
}
\description{
  Implements the EM algorithm for parameterized Gaussian mixture models,
  starting with the expectation step.
}
\usage{
em(modelName, data, parameters, prior = NULL, control = emControl(),
   warn = NULL, \dots)
}
\arguments{
  \item{modelName}{
    A character string indicating the model. The help file for
    \code{\link{mclustModelNames}} describes the available models.
  }
  \item{data}{
    A numeric vector, matrix, or data frame of observations.
    Categorical variables are not allowed. If a matrix or data frame,
    rows correspond to observations and columns correspond to variables.
  }
  \item{parameters}{
    A named list giving the parameters of the model.
    The components are as follows:
    \describe{
      \item{\code{pro}}{
        Mixing proportions for the components of the mixture.
        If the model includes a Poisson term for noise, there should be
        one more mixing proportion than the number of Gaussian components.
      }
      \item{\code{mean}}{
        The mean for each component. If there is more than one component,
        this is a matrix whose kth column is the mean of the \emph{k}th
        component of the mixture model.
      }
      \item{\code{variance}}{
        A list of variance parameters for the model.
        The components of this list depend on the model specification.
        See the help file for \code{\link{mclustVariance}} for details.
      }
      \item{\code{Vinv}}{
        An estimate of the reciprocal hypervolume of the data region.
        If set to NULL or a negative value, the default is determined by
        applying function \code{hypvol} to the data. Used only when
        \code{pro} includes an additional mixing proportion for a noise
        component.
} } } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The number of mixture components. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{control}{ The list of control parameters for EM used. 
} \item{prior}{ The specification of a conjugate prior on the means and variances used, \code{NULL} if no prior is used. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{emE}}, \ldots, \code{\link{emVVV}}, \code{\link{estep}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{mclust.options}}, \code{\link{do.call}} } \examples{ \dontrun{ msEst <- mstep(modelName = "EEE", data = iris[,-5], z = unmap(iris[,5])) names(msEst) em(modelName = msEst$modelName, data = iris[,-5], parameters = msEst$parameters) do.call("em", c(list(data = iris[,-5]), msEst)) ## alternative call } } \keyword{cluster} mclust/man/simE.Rd0000644000176200001440000000724213205036651013526 0ustar liggesusers\name{simE} \alias{simE} \alias{simV} \alias{simEII} \alias{simVII} \alias{simEEI} \alias{simVEI} \alias{simEVI} \alias{simVVI} \alias{simEEE} \alias{simEEV} \alias{simVEV} \alias{simVVV} \alias{simEVE} \alias{simEVV} \alias{simVEE} \alias{simVVE} \title{ Simulate from a Parameterized MVN Mixture Model } \description{ Simulate data from a parameterized MVN mixture model. 
} \usage{ simE(parameters, n, seed = NULL, \dots) simV(parameters, n, seed = NULL, \dots) simEII(parameters, n, seed = NULL, \dots) simVII(parameters, n, seed = NULL, \dots) simEEI(parameters, n, seed = NULL, \dots) simVEI(parameters, n, seed = NULL, \dots) simEVI(parameters, n, seed = NULL, \dots) simVVI(parameters, n, seed = NULL, \dots) simEEE(parameters, n, seed = NULL, \dots) simEEV(parameters, n, seed = NULL, \dots) simVEV(parameters, n, seed = NULL, \dots) simVVV(parameters, n, seed = NULL, \dots) simEVE(parameters, n, seed = NULL, \dots) simEVV(parameters, n, seed = NULL, \dots) simVEE(parameters, n, seed = NULL, \dots) simVVE(parameters, n, seed = NULL, \dots) } \arguments{ \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{n}{ An integer specifying the number of data points to be simulated. } \item{seed}{ An optional integer argument to \code{set.seed} for reproducible random class assignment. By default the current seed will be used. Reproducibility can also be achieved by calling \code{set.seed} before calling \code{sim}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A matrix in which first column is the classification and the remaining columns are the \code{n} observations simulated from the specified MVN mixture model. 
\item{Attributes:}{
    \code{"modelName"} A character string indicating the variance model
    used for the simulation.
  }
}
\details{
  This function can be used with an indirect or list call using
  \code{do.call}, allowing the output of e.g. \code{mstep}, \code{em},
  \code{me}, \code{Mclust}, to be passed directly without the need to
  specify individual parameters as arguments.
}
\seealso{
  \code{\link{sim}},
  \code{\link{Mclust}},
  \code{\link{mstepE}},
  \code{\link{mclustVariance}}.
}
\examples{
\dontrun{
d <- 2
G <- 2
scale <- 1
shape <- c(1, 9)

O1 <- diag(2)
O2 <- diag(2)[,c(2,1)]
O <- array(cbind(O1,O2), c(2, 2, 2))
O

variance <- list(d = d, G = G, scale = scale, shape = shape,
                 orientation = O)
mu <- matrix(0, d, G) ## center at the origin

simdat <- simEEV( n = 200,
                  parameters = list(pro=c(1,1), mean=mu, variance=variance),
                  seed = NULL)

cl <- simdat[,1]

sigma <- array(apply(O, 3, function(x,y) crossprod(x*y),
                     y = sqrt(scale*shape)), c(2,2,2))
paramList <- list(mu = mu, sigma = sigma)
coordProj( simdat, paramList = paramList, classification = cl)
}
}
\keyword{cluster}
% docclass is function
% Converted by Sd2Rd version 1.21.
mclust/man/acidity.Rd0000644000176200001440000000221512502264254014253 0ustar liggesusers
\name{acidity}
\alias{acidity}
\docType{data}
\title{Acidity data}
\description{
  Acidity index measured in a sample of 155 lakes in the Northeastern
  United States. The data are on the log scale, as analysed by
  Crawford et al. (1992, 1994). The data were also used to fit mixtures
  of Gaussian distributions by Richardson and Green (1997), and by
  McLachlan and Peel (2000, Sec. 6.6.2).
}
\usage{data(acidity)}
\source{\url{http://www.stats.bris.ac.uk/~peter/mixdata}}
\references{
  Crawford, S. L. (1994) An application of the Laplace method to finite
  mixture distributions. \emph{Journal of the American Statistical
  Association}, 89, 259--267.

  Crawford, S. L., DeGroot, M. H., Kadane, J. B., and Small, M. J.
(1994) Modeling lake chemistry distributions: Approximate Bayesian methods for estimating a finite mixture model. \emph{Technometrics}, 34, 441--453. McLachlan, G. and Peel, D. (2000) \emph{Finite Mixture Models}. Wiley, New York. Richardson, S. and Green, P. J. (1997) On Bayesian analysis of mixtures with unknown number of components (with discussion). \emph{Journal of the Royal Statistical Society, Series B}, 59, 731--792. } \keyword{datasets} mclust/man/surfacePlot.Rd0000644000176200001440000001177413205037124015121 0ustar liggesusers\name{surfacePlot} \alias{surfacePlot} \title{ Density or uncertainty surface for bivariate mixtures. } \description{ Plots a density or uncertainty surface given bivariate data and parameters of an MVN mixture model for the data. } \usage{ surfacePlot(data, parameters, type = c("contour", "image", "persp"), what = c("density", "uncertainty"), transformation = c("none", "log", "sqrt"), grid = 100, nlevels = 11, levels = NULL, col = grey(0.6), xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, scale = FALSE, main = FALSE, swapAxes = FALSE, verbose = FALSE, \dots) } \arguments{ \item{data}{ A matrix, or data frame of bivariate observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposing ellipses on the plot. The relevant components are as follows: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. 
} } } \item{type}{ Choose from one of the following three options: \code{"contour"} (default), \code{"image"}, \code{"persp"} indicating the plot type. } \item{what}{ Choose from one of the following options: \code{"density"} (default), \code{"uncertainty"} indicating what to plot. } \item{transformation}{ Choose from one of the following three options: \code{"none"} (default), \code{"log"}, \code{"sqrt"} indicating a transformation to be applied before plotting. } \item{grid}{ The number of grid points (evenly spaced on each axis). The mixture density and uncertainty is computed at \code{grid x grid} points to produce the surface plot. Default: \code{100}. } \item{nlevels}{ The number of levels to use for a contour plot. Default: \code{11}. } \item{levels}{ A vector of levels at which to draw the lines in a contour plot. } \item{col}{ The color to be used for drawing contour lines. } \item{xlim, ylim}{ Optional argument specifying bounds for the ordinate, abscissa of the plot. This may be useful for when comparing plots. } \item{xlab, ylab}{ Optional argument specifying labels for the x-axis and y-axis. } \item{scale}{ A logical variable indicating whether or not the two dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. The default is not to scale. } \item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{swapAxes}{ A logical variable indicating whether or not the axes should be swapped for the plot. } \item{verbose}{ A logical variable telling whether or not to print an indication that the function is in the process of computing values at the grid points, which typically takes some time to complete. } \item{\dots}{ Other graphics parameters. } } \value{ A plots showing (a transformation of) the density or uncertainty for the given mixture model and data. 
The function also returns an invisible list with components \code{x},
  \code{y}, and \code{z} in which \code{x} and \code{y} are the values
  used to define the grid and \code{z} is the transformed density or
  uncertainty at the grid points.
}
\details{
  For an image plot, a color scheme may need to be selected on the
  display device in order to view the plot.
}
\references{
  C. Fraley and A. E. Raftery (2002).
  Model-based clustering, discriminant analysis, and density estimation.
  \emph{Journal of the American Statistical Association 97:611-631}.

  C. Fraley, A. E. Raftery, T. B. Murphy and L. Scrucca (2012).
  mclust Version 4 for R: Normal Mixture Modeling for Model-Based
  Clustering, Classification, and Density Estimation.
  Technical Report No. 597, Department of Statistics,
  University of Washington.
}
\seealso{
  \code{\link{mclust2Dplot}}
}
\examples{
\dontrun{
faithfulModel <- Mclust(faithful)

surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "contour", what = "density", transformation = "none",
            drawlabels = FALSE)

surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "persp", what = "density", transformation = "log")

surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "contour", what = "uncertainty", transformation = "log")
}
}
\keyword{cluster}
mclust/man/me.Rd0000644000176200001440000001074013175053412013227 0ustar liggesusers
\name{me}
\alias{me}
\title{
  EM algorithm starting with M-step for parameterized MVN mixture models.
}
\description{
  Implements the EM algorithm for MVN mixture models parameterized by
  eigenvalue decomposition, starting with the maximization step.
}
\usage{
me(modelName, data, z, prior = NULL, control = emControl(),
   Vinv = NULL, warn = NULL, \dots)
}
\arguments{
  \item{modelName}{
    A character string indicating the model. The help file for
    \code{\link{mclustModelNames}} describes the available models.
  }
  \item{data}{
    A numeric vector, matrix, or data frame of observations.
Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is an initial estimate of the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. } \item{prior}{ Specification of a conjugate prior on the means and variances. See the help file for \code{priorControl} for further information. The default assumes no prior. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{Vinv}{ If the model is to include a noise term, \code{Vinv} is an estimate of the reciprocal hypervolume of the data region. If set to a negative value or 0, the model will include a noise term with the reciprocal hypervolume estimated by the function \code{hypvol}. The default is not to assume a noise term in the model through the setting \code{Vinv=NULL}. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is set in \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The number of mixture components. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. 
} \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{control}{ The list of control parameters for EM used. } \item{prior}{ The specification of a conjugate prior on the means and variances used, \code{NULL} if no prior is used. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{meE}},..., \code{\link{meVVV}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{estep}}, \code{\link{priorControl}}, \code{\link{mclustModelNames}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}} } \examples{ \dontrun{ me(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]))} } \keyword{cluster} mclust/man/logLik.MclustDA.Rd0000644000176200001440000000200713175052652015524 0ustar liggesusers\name{logLik.MclustDA} \alias{logLik.MclustDA} \title{Log-Likelihood of a \code{MclustDA} object} \description{ Returns the log-likelihood for a \code{MclustDA} object.} \usage{ \method{logLik}{MclustDA}(object, data, \dots) } \arguments{ \item{object}{an object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}.} \item{data}{the data for which the log-likelihood must be computed. 
If missing, the observed data from the \code{'MclustDA'} object is used.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{Returns an object of class \code{'logLik'} with an element providing the maximized log-likelihood, and further arguments giving the number of (estimated) parameters in the model (\code{"df"}) and the sample size (\code{"nobs"}).} \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDA}}.} \examples{ \dontrun{ irisMclustDA <- MclustDA(iris[,1:4], iris$Species) summary(irisMclustDA) logLik(irisMclustDA) } } \keyword{multivariate} mclust/man/dens.Rd0000644000176200001440000000451213175051432013557 0ustar liggesusers\name{dens} \alias{dens} \title{ Density for Parameterized MVN Mixtures } \description{ Computes densities of observations in parameterized MVN mixtures. } \usage{ dens(modelName, data, logarithm = FALSE, parameters, warn=NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{logarithm}{ A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{pro}}{ The vector of mixing proportions for the components of the mixture. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. 
See the help file for \code{\link{mclustVariance}} for details. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric vector whose \emph{i}th component is the density of the \emph{i}th observation in \code{data} in the MVN mixture specified by \code{parameters}. } \seealso{ \code{\link{cdens}}, \code{\link{mclust.options}}, \code{\link{do.call}} } \examples{ \dontrun{ faithfulModel <- Mclust(faithful) Dens <- dens(modelName = faithfulModel$modelName, data = faithful, parameters = faithfulModel$parameters) Dens ## alternative call do.call("dens", faithfulModel)} } \keyword{cluster} mclust/man/meE.Rd0000644000176200001440000001216513175053437013346 0ustar liggesusers\name{meE} \alias{meE} \alias{meV} \alias{meX} \alias{meEII} \alias{meVII} \alias{meEEI} \alias{meVEI} \alias{meEVI} \alias{meVVI} \alias{meEEE} \alias{meEVE} \alias{meVEE} \alias{meVVE} \alias{meEEV} \alias{meVEV} \alias{meEVV} \alias{meVVV} \alias{meXII} \alias{meXXI} \alias{meXXX} \title{ EM algorithm starting with M-step for a parameterized Gaussian mixture model. } \description{ Implements the EM algorithm for a parameterized Gaussian mixture model, starting with the maximization step.
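As a small illustration (a sketch assuming the \pkg{mclust} package is attached), the \code{me} family starts EM from a conditional-probability matrix \code{z}, which can be built from known labels with \code{unmap}:

```r
# Sketch (assumes the mclust package is attached): run EM for the
# unconstrained "VVV" model, starting with an M-step from class labels.
library(mclust)
z0  <- unmap(iris[, 5])                  # n x G indicator matrix from labels
fit <- meVVV(data = iris[, -5], z = z0)  # EM starting with the M-step
fit$loglik                               # maximized log-likelihood
table(map(fit$z), iris[, 5])             # MAP classification vs. true species
```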
} \usage{ meE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meX(data, prior = NULL, warn = NULL, \dots) meEII(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVII(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEEI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVEI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEVI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVVI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEEE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEVE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVEE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVVE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEEV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVEV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEVV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVVV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meXII(data, prior = NULL, warn = NULL, \dots) meXXI(data, prior = NULL, warn = NULL, \dots) meXXX(data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{control}{ A list of control parameters for EM. 
The defaults are set by the call \code{emControl()}. } \item{Vinv}{ An estimate of the reciprocal hypervolume of the data region, when the model is to include a noise term. Set to a negative value or zero if a noise term is desired, but an estimate is unavailable --- in that case function \code{hypvol} will be used to obtain the estimate. The default is not to assume a noise term in the model through the setting \code{Vinv=NULL}. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. 
} \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations.\cr } } \seealso{ \code{\link{em}}, \code{\link{me}}, \code{\link{estep}}, \code{\link{mclust.options}} } \examples{ meVVV(data = iris[,-5], z = unmap(iris[,5])) } \keyword{cluster} mclust/man/densityMclust.Rd0000644000176200001440000000562013175052454015503 0ustar liggesusers\name{densityMclust} \alias{densityMclust} \title{Density Estimation via Model-Based Clustering} \description{ Produces a density estimate for each data point using a Gaussian finite mixture model from \code{Mclust}. } \usage{ densityMclust(data, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{\dots }{ Additional arguments for the \code{\link{Mclust}} function. In particular, setting the arguments \code{G} and \code{modelNames} allows one to specify the number of mixture components and the type of model to be fitted. By default an "optimal" model is selected based on the BIC criterion. } } \value{ An object of class \code{densityMclust}, which inherits from \code{Mclust}, is returned with the following component added: \item{density}{The density evaluated at the input \code{data} computed from the estimated model.} } %\details{} \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L.
(2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. } \author{Revised version by Luca Scrucca based on the original code by C. Fraley and A.E. Raftery.} \seealso{ \code{\link{plot.densityMclust}}, \code{\link{Mclust}}, \code{\link{summary.Mclust}}, \code{\link{predict.densityMclust}}. } \examples{ dens = densityMclust(faithful$waiting) summary(dens) summary(dens, parameters = TRUE) plot(dens, what = "BIC", legendArgs = list(x = "topright")) plot(dens, what = "density", data = faithful$waiting) dens = densityMclust(faithful) summary(dens) summary(dens, parameters = TRUE) plot(dens, what = "density", data = faithful) plot(dens, what = "density", data = faithful, drawlabels = FALSE, points.pch = 20, col = "grey", levels = quantile(dens$density, probs = c(0.05, 0.25, 0.5, 0.75, 0.95))) plot(dens, what = "density", type = "image", col = topo.colors(50)) plot(dens, what = "density", type = "persp") dens = densityMclust(iris[,1:4]) summary(dens, parameters = TRUE) plot(dens, what = "density", data = iris[,1:4], col = "slategrey", drawlabels = FALSE, nlevels = 7) \dontrun{ plot(dens, what = "density", type = "image", col = "slategrey") plot(dens, what = "density", type = "persp", col = grey(0.9)) } } \keyword{cluster} mclust/man/hypvol.Rd0000644000176200001440000000301313175052576014154 0ustar liggesusers\name{hypvol} \alias{hypvol} \title{ Approximate Hypervolume for Multivariate Data } \description{ Computes a simple approximation to the hypervolume of a multivariate data set. } \usage{ hypvol(data, reciprocal=FALSE) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
} \item{reciprocal}{ A logical variable indicating whether or not the reciprocal hypervolume is desired rather than the hypervolume itself. The default is to return the hypervolume. } } \value{ Returns the minimum of the hypervolume computed from simple variable bounds and that computed from variable bounds of the principal component scores. Used for the default hypervolume parameter for the noise component when observations are designated as noise in \code{Mclust} and \code{mclustBIC}. } \references{ A. Dasgupta and A. E. Raftery (1998). Detecting features in spatial point processes with clutter via model-based clustering. \emph{Journal of the American Statistical Association 93:294-302}. C. Fraley and A.E. Raftery (1998). \emph{Computer Journal 41:578-588}. C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association 97:611-631}. } \seealso{ \code{\link{mclustBIC}} } \examples{ hypvol(iris[,-5]) } \keyword{cluster} mclust/man/clustCombi.Rd0000644000176200001440000001210713107071706014732 0ustar liggesusers\name{clustCombi} \alias{clustCombi} \alias{print.clustCombi} \alias{summary.clustCombi} \alias{print.summary.clustCombi} \title{ Combining Gaussian Mixture Components for Clustering } \description{ Provides a hierarchy of combined clusterings from the EM/BIC Gaussian mixture solution to one class, following the methodology proposed in the article cited in the references. } \usage{ clustCombi(object = NULL, data = NULL, \dots) } \arguments{ \item{object}{ An object returned by \code{\link{Mclust}} giving the optimal (according to BIC) parameters, conditional probabilities, and log-likelihood, together with the associated classification and its uncertainty. If not provided, the \code{data} argument must be specified. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. 
If a matrix or data frame, rows correspond to observations and columns correspond to variables. If the \code{object} argument is not provided, the function \code{\link{Mclust}} is applied to the given \code{data} to fit a mixture model.} \item{\dots}{ Optional arguments to be passed to called functions. Notably, any argument (such as the numbers of components for which the BIC is computed; the models to be fitted by EM; initialization parameters for the EM algorithm, ...) to be passed to \code{\link{Mclust}} in case \code{object = NULL}. Please see the \code{\link{Mclust}} documentation for more details. } } \details{ Mclust provides a Gaussian mixture fitted to the data by maximum likelihood through the EM algorithm, for the model and number of components selected according to BIC. The corresponding components are hierarchically combined according to an entropy criterion, following the methodology described in the article cited in the references section. The solutions with numbers of classes between the one selected by BIC and one are returned as a \code{clustCombi} class object. } \value{ A list of class \code{clustCombi} giving the hierarchy of combined solutions from the number of components selected by BIC to one. The details of the output components are as follows: \item{classification}{A list of the data classifications obtained for each combined solution of the hierarchy through a MAP assignment} \item{combiM}{A list of matrices. \code{combiM[[K]]} is the matrix used to combine the components of the (K+1)-classes solution to get the K-classes solution. Please see the examples.} \item{combiz}{A list of matrices. \code{combiz[[K]]} is a matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class according to the K-classes combined solution.} \item{MclustOutput}{A list of class \code{Mclust}. 
Output of a call to the Mclust function (as provided by the user or the result of a call to the Mclust function) used to initiate the combined solutions hierarchy: please see the \code{\link{Mclust}} function documentation for details.} } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. Scrucca } \examples{ data(Baudry_etal_2010_JCGS_examples) # run Mclust using provided data output <- clustCombi(data = ex4.1) \dontrun{ # run Mclust and then clustcombi on the returned object mod <- Mclust(ex4.1) output <- clustCombi(mod)} output summary(output) \dontrun{ # run Mclust using provided data and any further optional argument provided output <- clustCombi(data = ex4.1, modelName = "EEV", G = 1:15)} # plot the hierarchy of combined solutions plot(output, what = "classification") # plot some "entropy plots" which may help one to select the number of classes plot(output, what = "entropy") # plot the tree structure obtained from combining mixture components plot(output, what = "tree") # the selected model and number of components obtained from Mclust using BIC output$MclustOutput # the matrix whose [i,k]th entry is the probability that i-th observation in # the data belongs to the k-th class according to the BIC solution head( output$combiz[[output$MclustOutput$G]] ) # the matrix whose [i,k]th entry is the probability that i-th observation in # the data belongs to the k-th class according to the first combined solution head( output$combiz[[output$MclustOutput$G-1]] ) # the matrix describing how to merge the 6-classes solution to get the # 5-classes solution output$combiM[[5]] # for example the following code returns the label of the class (in the # 5-classes combined solution) to which the 4th class (in the 6-classes # solution) is assigned. 
Only two classes in the (K+1)-classes solution # are assigned the same class in the K-classes solution: the two which # are merged at this step... output$combiM[[5]] \%*\% c(0,0,0,1,0,0) # recover the 5-classes soft clustering from the 6-classes soft clustering # and the 6 -> 5 combining matrix all( output$combiz[[5]] == t( output$combiM[[5]] \%*\% t(output$combiz[[6]]) ) ) # the hard clustering under the 5-classes solution head( output$classification[[5]] ) } \keyword{ cluster } mclust/man/plot.densityMclust.Rd0000644000176200001440000001026413175053635016462 0ustar liggesusers\name{plot.densityMclust} \alias{plot.densityMclust} \alias{plotDensityMclust1} \alias{plotDensityMclust2} \alias{plotDensityMclustd} \title{Plots for Mixture-Based Density Estimate} \description{ Plotting methods for an object of class \code{'densityMclust'}. Available graphs are plots of BIC values and density for univariate and bivariate data. For higher data dimensionality a scatterplot matrix of pairwise densities is drawn.
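For instance (a sketch assuming the \pkg{mclust} package is attached), the same \code{plot()} call adapts to the dimensionality of the fitted data:

```r
# Sketch (assumes mclust is attached): plot() produces a density curve
# for univariate fits and a contour plot for bivariate fits.
library(mclust)
d1 <- densityMclust(faithful$waiting)   # univariate fit
plot(d1, what = "density")              # density curve
d2 <- densityMclust(faithful)           # bivariate fit
plot(d2, what = "density")              # contour plot of the density
```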
} \usage{ \method{plot}{densityMclust}(x, data = NULL, what = c("BIC", "density", "diagnostic"), \dots) plotDensityMclust1(x, data = NULL, hist.col = "lightgrey", hist.border = "white", breaks = "Sturges", \dots) plotDensityMclust2(x, data = NULL, nlevels = 11, levels = NULL, col = grey(0.6), points.pch = 1, points.col = 1, points.cex = 0.8, \dots) plotDensityMclustd(x, data = NULL, nlevels = 11, levels = NULL, col = grey(0.6), points.pch = 1, points.col = 1, points.cex = 0.8, gap = 0.2, \dots) } \arguments{ \item{x}{An object of class \code{'densityMclust'} obtained from a call to the \code{\link{densityMclust}} function.} \item{data}{Optional data points.} \item{what}{The type of graph requested: \describe{ \item{\code{"density"} =}{a plot of estimated density; if \code{data} is also provided the density is plotted over data points (see Details section).} \item{\code{"BIC"} =}{a plot of BIC values for the estimated models versus the number of components.} \item{\code{"diagnostic"} =}{diagnostic plots (only available for the one-dimensional case, see \code{\link{densityMclust.diagnostic}})} } } \item{hist.col}{The color to be used to fill the bars of the histogram.} \item{hist.border}{The color of the border around the bars of the histogram.} \item{breaks}{See the argument in function \code{\link[graphics]{hist}}.} \item{points.pch, points.col, points.cex}{The character symbols, colors, and magnification to be used for plotting \code{data} points.} \item{nlevels}{An integer, the number of levels to be used in plotting contour densities.} \item{levels}{A vector of density levels at which to draw the contour lines.} \item{col}{Color to be used for drawing the contour lines, the perspective plot, or the image density.
In the latter case it can also be a vector of color values.} \item{gap}{Distance between subplots, in margin lines, for the matrix of pairwise scatterplots.} \item{\dots}{Additional arguments.} } \details{The function \code{plot.densityMclust} allows one to obtain the plot of the estimated density or the graph of BIC values for the evaluated models. If \code{what = "density"} the produced plot depends on the dimensionality of the data. For one-dimensional data a call with no \code{data} provided produces a plot of the estimated density over a sensible range of values. If \code{data} is provided the density is over-plotted on a histogram for the observed data. For two-dimensional data further arguments available are those accepted by the \code{\link{surfacePlot}} function. In particular, the density can be represented through \code{"contour"}, \code{"image"}, and \code{"persp"} types of graph. For higher dimensionality a scatterplot matrix of pairwise densities is drawn. } % \value{} \author{Luca Scrucca} \seealso{ \code{\link{densityMclust}}, \code{\link{densityMclust.diagnostic}}, \code{\link{Mclust}}.
} \examples{ dens = densityMclust(faithful$waiting) plot(dens, what = "density") plot(dens, what = "density", data = faithful$waiting) dens = densityMclust(faithful) plot(dens, what = "density") plot(dens, what = "density", type = "image", col = "steelblue") plot(dens, what = "density", type = "persp", col = adjustcolor("steelblue", alpha.f = 0.5)) x = iris[,1:4] dens = densityMclust(x) plot(dens, what = "density", nlevels = 7) \dontrun{ plot(dens, x, what = "density", drawlabels = FALSE, levels = quantile(dens$density, probs = c(0.05, 0.25, 0.5, 0.75, 0.95))) plot(dens, what = "density", type = "image", col = "steelblue") plot(dens, what = "density", type = "persp", border = adjustcolor(grey(0.1), alpha.f = 0.5)) } } \keyword{cluster} \keyword{dplot} mclust/man/mclustVariance.Rd0000644000176200001440000000743613175053400015613 0ustar liggesusers\name{mclustVariance} \alias{mclustVariance} \title{ Template for variance specification for parameterized Gaussian mixture models } \description{ Specification of variance parameters for the various types of Gaussian mixture models. } \usage{ mclustVariance(modelName, d = NULL, G = 2) } \arguments{ \item{modelName}{A character string specifying the model.} \item{d}{An integer specifying the dimension of the data.} \item{G}{An integer specifying the number of components in the mixture model.} } \details{The \code{variance} component in the \code{parameters} list in the output of, e.g., \code{me} or \code{mstep}, or the input to, e.g., \code{estep}, may contain one or more of the following arguments, depending on the model: \describe{ \item{\code{modelName}}{ A character string indicating the model. } \item{\code{d}}{ The dimension of the data. } \item{\code{G}}{ The number of components in the mixture model. } \item{\code{sigmasq}}{ For the one-dimensional models (\code{"E"}, \code{"V"}) and spherical models (\code{"EII"}, \code{"VII"}).
This is either a vector whose \emph{k}th component is the variance for the \emph{k}th component in the mixture model (\code{"V"} and \code{"VII"}), or a scalar giving the common variance for all components in the mixture model (\code{"E"} and \code{"EII"}). } \item{\code{Sigma}}{ For the equal variance models \code{"EII"}, \code{"EEI"}, and \code{"EEE"}. A \emph{d} by \emph{d} matrix giving the common covariance for all components of the mixture model. } \item{\code{cholSigma}}{ For the equal variance model \code{"EEE"}. A \emph{d} by \emph{d} upper triangular matrix giving the Cholesky factor of the common covariance for all components of the mixture model. } \item{\code{sigma}}{ For all multidimensional mixture models. A \emph{d} by \emph{d} by \emph{G} matrix array whose \code{[,,k]}th entry is the covariance matrix for the \emph{k}th component of the mixture model. } \item{\code{cholsigma}}{ For the unconstrained covariance mixture model \code{"VVV"}. A \emph{d} by \emph{d} by \emph{G} matrix array whose \code{[,,k]}th entry is the upper triangular Cholesky factor of the covariance matrix for the \emph{k}th component of the mixture model. } \item{\code{scale}}{ For diagonal models \code{"EEI"}, \code{"EVI"}, \code{"VEI"}, \code{"VVI"} and constant-shape models \code{"EEV"} and \code{"VEV"}. Either a \emph{G}-vector giving the scale of the covariance (the \emph{d}th root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component. } \item{\code{shape}}{ For diagonal models \code{"EEI"}, \code{"EVI"}, \code{"VEI"}, \code{"VVI"} and constant-shape models \code{"EEV"} and \code{"VEV"}. Either a \emph{d} by \emph{G} matrix in which the \emph{k}th column is the shape of the covariance matrix (normalized to have determinant 1) for the \emph{k}th component, or a \emph{d}-vector giving a common shape for all components.
} \item{\code{orientation}}{ For the constant-shape models \code{"EEV"} and \code{"VEV"}. Either a \emph{d} by \emph{d} by \emph{G} array whose \code{[,,k]}th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the \emph{k}th component, or a \emph{d} by \emph{d} orthonormal matrix if the mixture components have a common orientation. The \code{orientation} component is not needed in spherical and diagonal models, since the principal components are parallel to the coordinate axes so that the orientation matrix is the identity. } } In all cases, the value \code{-1} is used as a placeholder for unknown nonzero entries. } \keyword{cluster} mclust/man/plot.MclustDR.Rd0000644000176200001440000001144313175055034015303 0ustar liggesusers\name{plot.MclustDR} \alias{plot.MclustDR} \alias{plotEvalues.MclustDR} \title{Plotting method for dimension reduction for model-based clustering and classification} \description{ Graphs data projected onto the estimated subspace for model-based clustering and classification. } \usage{ \method{plot}{MclustDR}(x, dimens, what = c("scatterplot", "pairs", "contour", "classification", "boundaries", "density", "evalues"), symbols, colors, col.contour = gray(0.7), col.sep = grey(0.4), ngrid = 100, nlevels = 5, asp = NULL, \dots) } \arguments{ \item{x}{ An object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}}. } \item{dimens}{ A vector of integers giving the dimensions of the desired coordinate projections for multivariate data. } \item{what}{ The type of graph requested: \describe{ \item{\code{"scatterplot"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} and with data points marked according to the corresponding mixture component.
By default, the first two directions are selected for plotting.} \item{\code{"pairs"} =}{a scatterplot matrix of data projected onto the estimated subspace and with data points marked according to the corresponding mixture component. By default, all the available directions are used, unless they have been specified by \code{dimens}.} \item{\code{"contour"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} (by default, the first two directions) with density contours for classes or clusters and data points marked according to the corresponding mixture component.} \item{\code{"classification"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} (by default, the first two directions) with classification regions and data points marked according to the corresponding mixture component.} \item{\code{"boundaries"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} (by default, the first two directions) with uncertainty boundaries and data points marked according to the corresponding mixture component. The uncertainty is shown using a greyscale with darker regions indicating higher uncertainty. } \item{\code{"density"} =}{a one-dimensional plot of estimated density for the first direction specified by \code{dimens} (by default, the first one). A set of box-plots for each estimated cluster or known class is also shown at the bottom of the graph. } } } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique mixture component. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{factor}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique cluster or known class.
Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{factor}). The default is given by \code{mclust.options("classPlotColors")}. } \item{col.contour}{ The color of contours in case \code{what = "contour"}. } \item{col.sep}{ The color of classification boundaries in case \code{what = "classification"}. } \item{ngrid}{ An integer specifying the number of grid points to use in evaluating the classification regions. } \item{nlevels}{ The number of levels to use in case \code{what = "contour"}. } \item{asp}{For scatterplots the \eqn{y/x} aspect ratio, see \code{\link{plot.window}}. } \item{\dots}{further arguments passed to or from other methods.} } %\details{} %\value{} \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. } \author{Luca Scrucca} %\note{} \seealso{\link{MclustDR}} \examples{ mod = Mclust(iris[,1:4], G = 3) dr = MclustDR(mod) plot(dr, what = "evalues") plot(dr, what = "pairs") plot(dr, what = "scatterplot", dimens = c(1,3)) plot(dr, what = "contour") plot(dr, what = "classification", ngrid = 200) plot(dr, what = "boundaries", ngrid = 200) plot(dr, what = "density") plot(dr, what = "density", dimens = 2) data(banknote) da = MclustDA(banknote[,2:7], banknote$Status, G = 1:3) dr = MclustDR(da) plot(dr, what = "evalues") plot(dr, what = "pairs") plot(dr, what = "contour") plot(dr, what = "contour", dimens = c(1,3)) plot(dr, what = "classification", ngrid = 200) plot(dr, what = "boundaries", ngrid = 200) plot(dr, what = "density") plot(dr, what = "density", dimens = 2) } \keyword{multivariate} mclust/man/estep.Rd0000644000176200001440000000620513175052007013746 0ustar liggesusers\name{estep} \alias{estep} \title{ E-step for parameterized Gaussian mixture models. } \description{ Implements the expectation step of EM algorithm for parameterized Gaussian mixture models. 
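As a brief illustration (a sketch assuming the \pkg{mclust} package is attached), a single EM iteration can be composed by hand from an M-step followed by an E-step:

```r
# Sketch (assumes mclust is attached): one hand-rolled EM iteration,
# alternating mstep() and estep() for the unconstrained "VVV" model.
library(mclust)
z0 <- unmap(iris[, 5])                      # initial conditional probabilities
ms <- mstep(modelName = "VVV", data = iris[, -5], z = z0)   # M-step
es <- estep(modelName = "VVV", data = iris[, -5],
            parameters = ms$parameters)     # E-step: updates z and loglik
es$loglik                                   # log-likelihood after one E-step
```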
} \usage{ estep(modelName, data, parameters, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ A named list giving the parameters of the model. The components are as follows: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose \emph{k}th column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. If set to \code{NULL} or a negative value, the default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ The input parameters.
} \item{loglik}{ The log-likelihood for the data in the mixture model. } \item{Attributes}{ \code{"WARNING"}: an appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{estepE}}, \dots, \code{\link{estepVVV}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{mclust.options}} \code{\link{mclustVariance}} } \examples{ \dontrun{ msEst <- mstep(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5])) names(msEst) estep(modelName = msEst$modelName, data = iris[,-5], parameters = msEst$parameters)} } \keyword{cluster} mclust/man/chevron.Rd0000644000176200001440000000107612460535131014273 0ustar liggesusers\name{chevron} \alias{chevron} \title{Simulated minefield data} \usage{data(chevron)} \description{A set of simulated bivariate minefield data (1104 observations).} \references{ A. Dasgupta and A. E. Raftery (1998). Detecting features in spatial point processes with clutter via model-based clustering. \emph{Journal of the American Statistical Association 93:294-302}. C. Fraley and A.E. Raftery (1998). \emph{Computer Journal 41:578-588}. G. J. McLachlan and D. Peel (2000). \emph{Finite Mixture Models}, Wiley, pages 110-112. } \keyword{datasets} mclust/man/map.Rd0000644000176200001440000000225413175052667013417 0ustar liggesusers\name{map} \alias{map} \title{Classification given Probabilities} \description{ Converts a matrix in which each row sums to 1 to an integer vector specifying for each row the column index of the maximum. } \usage{ map(z, warn = mclust.options("warn"), \dots) } \arguments{ \item{z}{ A matrix (for example a matrix of conditional probabilities in which each row sums to 1 as produced by the E-step of the EM algorithm). } \item{warn}{ A logical variable indicating whether or not a warning should be issued when there are some columns of \code{z} for which no row attains a maximum. 
} \item{\dots }{ Provided to allow lists with elements other than the arguments to be passed in indirect or list calls via \code{do.call}. } } \value{ An integer vector with one entry for each row of \code{z}, in which the \emph{i}-th value is the column index at which the \emph{i}-th row of \code{z} attains a maximum. } \seealso{ \code{\link{unmap}}, \code{\link{estep}}, \code{\link{em}}, \code{\link{me}}. } \examples{ emEst <- me(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5])) map(emEst$z) } \keyword{cluster} % docclass is function mclust/man/mclust-package.Rd0000644000176200001440000000312313205034706015522 0ustar liggesusers\name{mclust-package} \alias{mclust-package} \alias{mclust} \docType{package} \title{Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation} \description{Finite Gaussian mixture modelling fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization and dimension reduction.} \details{For a quick introduction to \pkg{mclust} see the vignette \href{../doc/mclust.html}{A quick tour of mclust}.} \author{ Chris Fraley, Adrian Raftery and Luca Scrucca. Maintainer: Luca Scrucca \email{luca.scrucca@unipg.it} } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. 
} \examples{ # Clustering mod1 = Mclust(iris[,1:4]) summary(mod1) plot(mod1, what = c("BIC", "classification")) # Classification data(banknote) mod2 = MclustDA(banknote[,2:7], banknote$Status) summary(mod2) plot(mod2) # Density estimation mod3 = densityMclust(faithful$waiting) summary(mod3) plot(mod3, faithful$waiting) } \keyword{package} mclust/man/mclust2Dplot.Rd0000644000176200001440000001265413205036570015230 0ustar liggesusers\name{mclust2Dplot} \alias{mclust2Dplot} \title{ Plot two-dimensional data modelled by an MVN mixture. } \description{ Plot two-dimensional data given parameters of an MVN mixture model for the data. } \usage{ mclust2Dplot(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification","uncertainty","errors"), addEllipses = TRUE, symbols = NULL, colors = NULL, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, scale = FALSE, CEX = 1, PCH = ".", main = FALSE, swapAxes = FALSE, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. In this case the data are two dimensional, so there are two columns. } \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposed ellipses on the plot. The relevant components are as follows: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. There should be one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose \emph{k}th column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. 
See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix in which the \code{[i,k]}th entry gives the probability of observation \emph{i} belonging to the \emph{k}th class. Used to compute \code{classification} and \code{uncertainty} if those arguments aren't available. } \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. If present, argument \code{z} will be ignored. } \item{truth}{ A numeric or character vector giving a known classification of each data point. If \code{classification} or \code{z} is also present, this is used for displaying classification errors. } \item{uncertainty}{ A numeric vector of values in \emph{(0,1)} giving the uncertainty of each data point. If present, argument \code{z} will be ignored. } \item{what}{ Choose from one of the following three options: \code{"classification"} (default), \code{"errors"}, \code{"uncertainty"}. } \item{addEllipses}{ A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{xlim, ylim}{ Optional arguments specifying bounds for the abscissa and ordinate of the plot. This may be useful when comparing plots. 
} \item{xlab, ylab}{ Optional arguments specifying labels for the x-axis and y-axis. } \item{scale}{ A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: \code{scale=FALSE} } \item{CEX}{ An argument specifying the size of the plotting symbols. The default value is 1. } \item{PCH}{ An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".". } \item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{swapAxes}{ A logical variable indicating whether or not the axes should be swapped for the plot. } \item{\dots}{ Other graphics parameters. } } \value{ A plot showing the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors. } \seealso{ \code{\link{surfacePlot}}, \code{\link{clPairs}}, \code{\link{coordProj}}, \code{\link{mclust.options}} } \examples{ \dontrun{ faithfulModel <- Mclust(faithful) mclust2Dplot(faithful, parameters=faithfulModel$parameters, z=faithfulModel$z, what = "classification", main = TRUE) mclust2Dplot(faithful, parameters=faithfulModel$parameters, z=faithfulModel$z, what = "uncertainty", main = TRUE) } } \keyword{cluster} mclust/man/partuniq.Rd0000644000176200001440000000141212460535131014464 0ustar liggesusers\name{partuniq} \alias{partuniq} \title{ Classifies Data According to Unique Observations } \description{ Gives a one-to-one mapping from unique observations to rows of a data matrix. } \usage{ partuniq(x) } \arguments{ \item{x}{Matrix of observations.} } \value{ A vector of length \code{nrow(x)} with integer entries. An observation \code{k} is assigned an integer \code{i} whenever observation \code{i} is the first row of \code{x} that is identical to observation \code{k} (note that \code{i <= k}). 
} \seealso{ \code{\link{partconv}} } \examples{ set.seed(0) mat <- data.frame(lets = sample(LETTERS[1:2],9,TRUE), nums = sample(1:2,9,TRUE)) mat ans <- partuniq(mat) ans partconv(ans,consec=TRUE) } \keyword{cluster} % Converted by Sd2Rd version 0.3-2. mclust/man/MclustDRsubsel.Rd0000644000176200001440000001226213175053314015543 0ustar liggesusers\name{MclustDRsubsel} \alias{MclustDRsubsel} \alias{print.MclustDRsubsel} \alias{MclustDRsubsel_classif} \alias{MclustDRsubsel_cluster} \alias{MclustDRrecoverdir} \alias{MclustDRsubsel1cycle} \alias{print.MclustDRsubsel} \alias{summary.MclustDRsubsel} \title{Subset selection for GMMDR directions based on BIC.} \description{ Implements a subset selection method for selecting the relevant directions spanning the dimension reduction subspace for visualizing the clustering or classification structure obtained from a finite mixture of Gaussian densities.} \usage{ MclustDRsubsel(object, G = 1:9, modelNames = mclust.options("emModelNames"), \dots, bic.stop = 0, bic.cutoff = 0, mindir = 1, verbose = interactive()) } \arguments{ \item{object}{An object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}}.} \item{G}{An integer vector specifying the numbers of mixture components or clusters.} \item{modelNames}{A vector of character strings indicating the models to be fitted. See \code{\link{mclustModelNames}} for a description of the available models.} \item{\dots}{Further arguments passed through \code{\link{Mclust}} or \code{\link{MclustDA}}.} \item{bic.stop}{A criterion to terminate the search. If maximal BIC difference is less than \code{bic.stop} then the algorithm stops. 
\cr Two typical values are: \tabular{lcl}{ \code{0} \tab = \tab algorithm stops when the BIC difference becomes negative (default) \cr \code{-Inf} \tab = \tab algorithm continues until all directions have been selected } } \item{bic.cutoff}{A value specifying how to select the simplest ``best'' model within \code{bic.cutoff} from the maximum value achieved. Setting this to \code{0} (default) simply selects the model with the largest BIC difference.} \item{mindir}{An integer value specifying the minimum number of directions to be estimated.} \item{verbose}{A logical or integer value specifying if and how much detailed information should be reported during the iterations of the algorithm. \cr Possible values are: \tabular{ll}{ \code{0} or \code{FALSE} \tab = no trace info is shown; \cr \code{1} or \code{TRUE} \tab = trace info is shown at each step of the search; \cr \code{2} \tab = more detailed trace info is shown.} } } \details{ The GMMDR method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the clustering or classification structure contained in the data. This is implemented in \code{\link{MclustDR}}. The \code{MclustDRsubsel} function implements the greedy forward search algorithm discussed in Scrucca (2010) to prune the set of all GMMDR directions. The criterion used to select the relevant directions is based on the BIC difference between a clustering model and a model in which the feature proposal has no clustering relevance. The steps are the following: 1. Select the first feature to be the one which maximizes the BIC difference between the best clustering model and the model which assumes no clustering, i.e. a single component. 2. Select the next feature amongst those not previously included, to be the one which maximizes the BIC difference. 3. 
Iterate the previous step until all the BIC differences for the inclusion of a feature become less than \code{bic.stop}. At each step, the search over the model space is performed with respect to the model parametrisation and the number of clusters. } \value{ An object of class \code{'MclustDRsubsel'} which inherits from \code{'MclustDR'}, so it has the same components of the latter plus the following: \item{basisx}{The basis of the estimated dimension reduction subspace expressed in terms of the original variables.} \item{std.basisx}{The basis of the estimated dimension reduction subspace expressed in terms of the original variables standardized to have unit standard deviation.} } \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. \emph{Advances in Data Analysis and Classification}, 8(2), pp. 147-165 } \author{Luca Scrucca} %\note{} \seealso{ \code{\link{MclustDR}}, \code{\link{Mclust}}, \code{\link{MclustDA}}. 
} \examples{ \dontrun{ # clustering data(crabs, package = "MASS") x = crabs[,4:8] class = paste(crabs$sp, crabs$sex, sep = "|") mod = Mclust(x) table(class, mod$classification) dr = MclustDR(mod) summary(dr) plot(dr) drs = MclustDRsubsel(dr) summary(drs) table(class, drs$class) plot(drs, what = "scatterplot") plot(drs, what = "pairs") plot(drs, what = "contour") plot(drs, what = "boundaries") plot(drs, what = "evalues") # classification data(banknote) da = MclustDA(banknote[,2:7], banknote$Status) table(banknote$Status, predict(da)$class) dr = MclustDR(da) summary(dr) drs = MclustDRsubsel(dr) summary(drs) table(banknote$Status, predict(drs)$class) plot(drs, what = "scatterplot") plot(drs, what = "classification") plot(drs, what = "boundaries")} } \keyword{multivariate}mclust/man/plot.clustCombi.Rd0000644000176200001440000000440513107072134015705 0ustar liggesusers\name{plot.clustCombi} \alias{plot.clustCombi} \title{ Plot Combined Clusterings Results } \description{ Plot combined clusterings results: classifications corresponding to \code{Mclust}/BIC and to the hierarchically combined classes, "entropy plots" to help to select a number of classes, and the tree structure obtained from combining mixture components. } \usage{ \method{plot}{clustCombi}(x, what = c("classification", "entropy", "tree"), \dots) } \arguments{ \item{x}{ Object returned by \code{\link{clustCombi}} function. } \item{what}{ Type of plot. } \item{\dots}{ Other arguments to be passed to other functions: \code{\link{combiPlot}}, \code{\link{entPlot}}, \code{\link{combiTree}}. Please see the corresponding documentations. } } \value{ Classifications are plotted with \code{\link{combiPlot}}, which relies on the \code{Mclust} plot functions. Entropy plots are plotted with \code{\link{entPlot}} and may help to select a number of classes: please see the article cited in the references. 
Tree plots are produced by \code{\link{combiTree}} and graph the tree structure implied by the clusters combining process. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. Scrucca } \seealso{ \code{\link{combiPlot}}, \code{\link{entPlot}}, \code{\link{combiTree}}, \code{\link{clustCombi}}. } \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) ## 1D Example output <- clustCombi(Test1D, G=1:15) # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes (please see the article cited # in the references) plot(output) ## 2D Example output <- clustCombi(ex4.1) # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes (please see the article cited # in the references) plot(output) ## 3D Example output <- clustCombi(ex4.4.2) # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes (please see the article cited # in the references) plot(output) } } \keyword{ cluster } mclust/man/classError.Rd0000644000176200001440000000255513175050767014764 0ustar liggesusers\name{classError} \alias{classError} \title{ Classification error } \description{ Error for a given classification relative to a known truth. Location of errors in a given classification relative to a known truth. } \usage{ classError(classification, truth) } \arguments{ \item{classification}{ A numeric or character vector of class labels. } \item{truth}{ A numeric or character vector of class labels. Must have the same length as \code{classification}. 
} } \value{ A list with the following two components: \item{misclassified}{ The indexes of the misclassified data points in a minimum error mapping between the given classification and the given truth. } \item{errorRate}{ The error rate corresponding to a minimum error mapping between the given classification and the given truth. } } \details{ If more than one mapping between classification and truth corresponds to the minimum number of classification errors, only one possible set of misclassified observations is returned. } \seealso{ \code{\link{mapClass}}, \code{\link{table}} } \examples{ a <- rep(1:3, 3) a b <- rep(c("A", "B", "C"), 3) b classError(a, b) a <- sample(1:3, 9, replace = TRUE) a b <- sample(c("A", "B", "C"), 9, replace = TRUE) b classError(a, b) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/wreath.Rd0000644000176200001440000000103113175055360014115 0ustar liggesusers\name{wreath} \alias{wreath} \title{Data Simulated from a 14-Component Mixture} \usage{data(wreath)} \description{ A dataset consisting of 1000 observations drawn from a 14-component normal mixture in which the covariances of the components have the same size and shape but differ in orientation. } \references{ C. Fraley, A. E. Raftery and R. Wehrens (2005). Incremental model-based clustering for large datasets with small clusters. \emph{Journal of Computational and Graphical Statistics 14:1:18}. 
} \keyword{datasets} mclust/man/logLik.Mclust.Rd0000644000176200001440000000151413175052642015320 0ustar liggesusers\name{logLik.Mclust} \alias{logLik.Mclust} \title{Log-Likelihood of a \code{Mclust} object} \description{ Returns the log-likelihood for a \code{'Mclust'} object.} \usage{ \method{logLik}{Mclust}(object, \dots) } \arguments{ \item{object}{an object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}}.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{Returns an object of class \code{'logLik'} with an element providing the maximized log-likelihood, and further arguments giving the number of (estimated) parameters in the model (\code{"df"}) and the sample size (\code{"nobs"}).} \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}.} \examples{ \dontrun{ irisMclust <- Mclust(iris[,1:4]) summary(irisMclust) logLik(irisMclust) } } \keyword{multivariate} mclust/man/mclustBIC.Rd0000644000176200001440000001557513175415250014470 0ustar liggesusers\name{mclustBIC} \alias{mclustBIC} \alias{EMclust} \alias{print.mclustBIC} \title{BIC for Model-Based Clustering} \description{ BIC for parameterized Gaussian mixture models fitted by EM algorithm initialized by model-based hierarchical clustering.} \usage{ mclustBIC(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), Vinv = NULL, warn = mclust.options("warn"), x = NULL, verbose = interactive(), \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated. 
The default is \code{G=1:9}, unless the argument \code{x} is specified, in which case the default is taken from the values associated with \code{x}. } \item{modelNames}{ A vector of character strings indicating the models to be fitted in the EM phase of clustering. The help file for \code{\link{mclustModelNames}} describes the available models. The default is: \describe{ \item{\code{c("E", "V")}}{for univariate data} \item{\code{mclust.options("emModelNames")}}{for multivariate data (n > d)} \item{\code{c("EII", "VII", "EEI", "EVI", "VEI", "VVI")}}{the spherical and diagonal models for multivariate data (n <= d)} } unless the argument \code{x} is specified, in which case the default is taken from the values associated with \code{x}. } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{priorControl}. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{hc}. For multivariate data, the default is to compute a hierarchical clustering tree by applying function \code{hc} with \code{modelName = "VVV"} to the data or a subset as indicated by the \code{subset} argument. The hierarchical clustering results are used to start EM. For univariate data, the default is to use quantiles to start EM. } \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. By default no subset is used unless the number of observations exceeds the value specified by \code{mclust.options("subset")}. Note that to guarantee exact reproducibility of results a seed must be specified (see \code{\link{set.seed}}). 
} \item{\code{noise}}{ A logical or numeric vector indicating an initial guess as to which observations are noise in the data. If numeric the entries should correspond to row indexes of the data. If supplied, a noise term will be added to the model in the estimation. } } } \item{Vinv}{ An estimate of the reciprocal hypervolume of the data region. The default is determined by applying function \code{hypvol} to the data. Used only if an initial guess as to which observations are noise is supplied. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by \code{\link{mclust.options}}. } \item{x}{ An object of class \code{'mclustBIC'}. If supplied, \code{mclustBIC} will use the settings in \code{x} to produce another object of class \code{'mclustBIC'}, but with \code{G} and \code{modelNames} as specified in the arguments. Models that have already been computed in \code{x} are not recomputed. All arguments to \code{mclustBIC} except \code{data}, \code{G} and \code{modelName} are ignored and their values are set as specified in the attributes of \code{x}. Defaults for \code{G} and \code{modelNames} are taken from \code{x}. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ Returns an object of class \code{'mclustBIC'} containing the Bayesian Information Criterion for the specified mixture models and numbers of clusters. Auxiliary information is returned as attributes. The corresponding \code{print} method shows the matrix of values and the top models according to the BIC criterion. } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. 
(2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. } \seealso{ \code{\link{priorControl}}, \code{\link{emControl}}, \code{\link{mclustModel}}, \code{\link{summary.mclustBIC}}, \code{\link{hc}}, \code{\link{me}}, \code{\link{mclustModelNames}}, \code{\link{mclust.options}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) irisBIC plot(irisBIC) \dontrun{ subset <- sample(1:nrow(iris), 100) irisBIC <- mclustBIC(iris[,-5], initialization=list(subset = subset)) irisBIC plot(irisBIC) irisBIC1 <- mclustBIC(iris[,-5], G=seq(from=1,to=9,by=2), modelNames=c("EII", "EEI", "EEE")) irisBIC1 plot(irisBIC1) irisBIC2 <- mclustBIC(iris[,-5], G=seq(from=2,to=8,by=2), modelNames=c("VII", "VVI", "VVV"), x= irisBIC1) irisBIC2 plot(irisBIC2) } nNoise <- 450 set.seed(0) poissonNoise <- apply(apply( iris[,-5], 2, range), 2, function(x, n) runif(n, min = x[1]-.1, max = x[2]+.1), n = nNoise) set.seed(0) noiseInit <- sample(c(TRUE,FALSE),size=nrow(iris)+nNoise,replace=TRUE, prob=c(3,1)) irisNdata <- rbind(iris[,-5], poissonNoise) irisNbic <- mclustBIC(data = irisNdata, G = 1:5, initialization = list(noise = noiseInit)) irisNbic plot(irisNbic) } \keyword{cluster} % docclass is function mclust/man/combMat.Rd0000644000176200001440000000173113175051065014212 0ustar liggesusers\name{combMat} \alias{combMat} \title{ Combining Matrix } \description{ Create a combining matrix } \usage{ combMat(K, l1, l2) } \arguments{ \item{K}{ The original number of classes: the matrix will 
define a combining from K to (K-1) classes. } \item{l1}{ Label of one of the two classes to be combined. } \item{l2}{ Label of the other class to be combined. } } \value{ If \code{z} is a vector (length \emph{K}) whose \emph{k}th entry is the probability that an observation belongs to the \emph{k}th class in a \emph{K}-classes classification, then \code{combiM \%*\% z} is the vector (length \emph{K-1}) whose \emph{k}th entry is the probability that the observation belongs to the \emph{k}th class in the \emph{K-1}-classes classification obtained by merging classes \code{l1} and \code{l2} in the initial classification. } \author{ J.-P. Baudry, A. E. Raftery, L. Scrucca } \seealso{ \code{\link{clustCombi}}, \code{\link{combiPlot}} } %\examples{} \keyword{ cluster } mclust/man/Baudry_etal_2010_JCGS_examples.Rd0000644000176200001440000000371512460535131020232 0ustar liggesusers\name{Baudry_etal_2010_JCGS_examples} \alias{Baudry_etal_2010_JCGS_examples} \alias{ex4.1} \alias{ex4.2} \alias{ex4.3} \alias{ex4.4.1} \alias{ex4.4.2} \alias{Test1D} \docType{data} \title{Simulated Example Datasets From Baudry et al. (2010)} \description{ Simulated datasets used in Baudry et al. (2010) to illustrate the proposed mixture components combining method for clustering. Please see the cited article for a detailed presentation of these datasets. The data frame with name exN.M is presented in Section N.M in the paper. Test1D (not in the article) has been simulated from a Gaussian mixture distribution in R. ex4.1 and ex4.2 have been simulated from a Gaussian mixture distribution in R^2. ex4.3 has been simulated from a mixture of a uniform distribution on a square and a spherical Gaussian distribution in R^2. ex4.4.1 has been simulated from a Gaussian mixture model in R^2. ex4.4.2 has been simulated from a mixture of two uniform distributions in R^3. } \usage{data(Baudry_etal_2010_JCGS_examples)} \format{ \code{ex4.1} is a data frame with 600 observations on 2 real variables. 
\code{ex4.2} is a data frame with 600 observations on 2 real variables. \code{ex4.3} is a data frame with 200 observations on 2 real variables. \code{ex4.4.1} is a data frame with 800 observations on 2 real variables. \code{ex4.4.2} is a data frame with 300 observations on 3 real variables. \code{Test1D} is a data frame with 200 observations on 1 real variable. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) output <- clustCombi(ex4.4.1) output # is of class clustCombi # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes plot(output, ex4.4.1) } } \keyword{datasets} mclust/man/MclustBootstrap.Rd0000644000176200001440000001042113175053211015764 0ustar liggesusers\name{MclustBootstrap} \alias{MclustBootstrap} \alias{print.MclustBootstrap} \title{Resampling-based Inference for Gaussian finite mixture models} \description{Bootstrap or jackknife estimation of standard errors and percentile bootstrap confidence intervals for the parameters of a Gaussian mixture model.} \usage{ MclustBootstrap(object, nboot = 999, type = c("bs", "wlbs", "jk"), verbose = interactive(), \dots) } \arguments{ \item{object}{An object of class \code{'Mclust'} or \code{'densityMclust'} providing an estimated Gaussian mixture model.} \item{nboot}{The number of bootstrap replications.} \item{type}{A character string specifying the type of resampling to use: \tabular{ll}{ \code{"bs"} \tab = nonparametric bootstrap \cr \code{"wlbs"} \tab = weighted likelihood bootstrap \cr \code{"jk"} \tab = jackknife \cr } } \item{verbose}{A logical controlling if a text progress bar is displayed during the resampling procedure. 
By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise.} \item{\dots}{Further arguments passed to or from other methods.} } \details{For a fitted Gaussian mixture model with \code{object$G} mixture components and covariances parameterisation \code{object$modelName}, this function returns either the bootstrap distribution or the jackknife distribution of mixture parameters. In the former case, the nonparametric bootstrap or the weighted likelihood bootstrap approach can be used; the bootstrap procedure generates \code{nboot} bootstrap samples of the same size as the original data by resampling with replacement from the observed data. In the jackknife case, the procedure considers all the samples obtained by omitting one observation at a time. The resulting resampling distribution can then be used to obtain standard errors and percentile confidence intervals by use of the \code{\link{summary.MclustBootstrap}} function.} \value{An object of class \code{'MclustBootstrap'} with the following components: \item{n}{The number of observations in the data.} \item{d}{The dimension of the data.} \item{G}{A value specifying the number of mixture components.} \item{modelName}{A character string specifying the mixture model covariances parameterisation (see \code{\link{mclustModelNames}}).} \item{parameters}{A list of estimated parameters for the mixture components with the following components: \describe{ \item{\code{pro}}{a vector of mixing proportions.} \item{\code{mean}}{a matrix of means for each component.} \item{\code{variance}}{an array of covariance matrices for each component.} } } \item{nboot}{The number of bootstrap replications if \code{type = "bs"} or \code{type = "wlbs"}. 
The sample size if \code{type = "jk"}.} \item{type}{The type of resampling approach used.} \item{nonfit}{The number of resamples that did not converge during the procedure.} \item{pro}{A matrix of dimension (\code{nboot} x \code{G}) containing the bootstrap distribution for the mixing proportion.} \item{mean}{An array of dimension (\code{nboot} x \code{d} x \code{G}), where \code{d} is the dimension of the data, containing the bootstrap distribution for the component means.} \item{variance}{An array of dimension (\code{nboot} x \code{d} x \code{d} x \code{G}), where \code{d} is the dimension of the data, containing the bootstrap distribution for the component covariances.} } \references{ Davison, A. and Hinkley, D. (1997) \emph{Bootstrap Methods and Their Applications}. Cambridge University Press. McLachlan, G.J. and Peel, D. (2000) \emph{Finite Mixture Models}. Wiley. O'Hagan A., Murphy T. B., Gormley I. C. and Scrucca L. (2015) On Estimation of Parameter Uncertainty in Model-Based Clustering. Submitted to \emph{Computational Statistics}. } \seealso{\code{\link{summary.MclustBootstrap}}, \code{\link{plot.MclustBootstrap}}, \code{\link{Mclust}}, \code{\link{densityMclust}}.} \examples{ \dontrun{ data(diabetes) X = diabetes[,-1] modClust = Mclust(X) bootClust = MclustBootstrap(modClust) summary(bootClust, what = "se") summary(bootClust, what = "ci") data(acidity) modDens = densityMclust(acidity) modDens = MclustBootstrap(modDens) summary(modDens, what = "se") summary(modDens, what = "ci") } } \keyword{htest} \keyword{cluster} mclust/man/mclustICL.Rd0000644000176200001440000001044013175053351014464 0ustar liggesusers\name{mclustICL} \alias{mclustICL} \alias{print.mclustICL} \alias{summary.mclustICL} \alias{print.summary.mclustICL} \title{ICL Criterion for Model-Based Clustering} \description{ ICL (Integrated Complete-data Likelihood) criterion for parameterized Gaussian mixture models fitted by the EM algorithm initialized by model-based hierarchical clustering.
} \usage{ mclustICL(data, G = NULL, modelNames = NULL, initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), x = NULL, \dots) \method{summary}{mclustICL}(object, G, modelNames, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the criteria should be calculated. The default is \code{G = 1:9}. } \item{modelNames}{ A vector of character strings indicating the models to be fitted in the EM phase of clustering. The help file for \code{\link{mclustModelNames}} describes the available models. The default is: \describe{ \item{\code{c("E", "V")}}{for univariate data} \item{\code{mclust.options("emModelNames")}}{for multivariate data (n > d)} \item{\code{c("EII", "VII", "EEI", "EVI", "VEI", "VVI")}}{the spherical and diagonal models for multivariate data (n <= d)} } } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{hc}. For multivariate data, the default is to compute a hierarchical clustering tree by applying function \code{hc} with \code{modelName = "VVV"} to the data or a subset as indicated by the \code{subset} argument. The hierarchical clustering results are used to start EM. For univariate data, the default is to use quantiles to start EM. } \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. } } } \item{x}{ An object of class \code{'mclustICL'}. If supplied, \code{mclustICL} will use the settings in \code{x} to produce another object of class \code{'mclustICL'}, but with \code{G} and \code{modelNames} as specified in the arguments.
Models that have already been computed in \code{x} are not recomputed. All arguments to \code{mclustICL} except \code{data}, \code{G} and \code{modelNames} are ignored and their values are set as specified in the attributes of \code{x}. Defaults for \code{G} and \code{modelNames} are taken from \code{x}. } \item{\dots}{ Further arguments used in the call to \code{\link{Mclust}}. See also \code{\link{mclustBIC}}. } \item{object}{ An object of class \code{'mclustICL'}, as returned by a call to \code{mclustICL}, to be summarized. } } \value{ Returns an object of class \code{'mclustICL'} containing the ICL criterion for the specified mixture models and numbers of clusters. The corresponding \code{print} method shows the matrix of values and the top models according to the ICL criterion. The \code{summary} method shows only the top models. } \references{ Biernacki, C., Celeux, G., Govaert, G. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. \emph{IEEE Trans. Pattern Analysis and Machine Intelligence}, 22 (7), 719-725. Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233.
} \seealso{ \code{\link{plot.mclustICL}}, \code{\link{Mclust}}, \code{\link{mclustBIC}}, \code{\link{mclustBootstrapLRT}}, \code{\link{bic}}, \code{\link{icl}} } \examples{ data(faithful) faithful.ICL <- mclustICL(faithful) faithful.ICL summary(faithful.ICL) plot(faithful.ICL) \dontrun{ # compare with faithful.BIC = mclustBIC(faithful) faithful.BIC plot(faithful.BIC) } } \keyword{cluster} mclust/man/mvnX.Rd0000644000176200001440000000601613205036667013566 0ustar liggesusers\name{mvnX} \alias{mvnX} \alias{mvnXII} \alias{mvnXXI} \alias{mvnXXX} \title{ Univariate or Multivariate Normal Fit } \description{ Computes the mean, covariance, and log-likelihood from fitting a single Gaussian (univariate or multivariate normal). } \usage{ mvnX(data, prior = NULL, warn = NULL, \dots) mvnXII(data, prior = NULL, warn = NULL, \dots) mvnXXI(data, prior = NULL, warn = NULL, \dots) mvnXXX(data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given by \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. 
} } \details{ \describe{ \item{\code{mvnXII}}{computes the best fitting Gaussian with the covariance restricted to be a multiple of the identity.} \item{\code{mvnXXI}}{computes the best fitting Gaussian with the covariance restricted to be diagonal.} \item{\code{mvnXXX}}{computes the best fitting Gaussian with ellipsoidal (unrestricted) covariance.} } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{mvn}}, \code{\link{mstepE}} } \examples{ \dontrun{ n <- 1000 set.seed(0) x <- rnorm(n, mean = -1, sd = 2) mvnX(x) mu <- c(-1, 0, 1) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% (2*diag(3)), MARGIN = 2, STATS = mu, FUN = "+") mvnXII(x) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% diag(1:3), MARGIN = 2, STATS = mu, FUN = "+") mvnXXI(x) Sigma <- matrix(c(9,-4,1,-4,9,4,1,4,9), 3, 3) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% chol(Sigma), MARGIN = 2, STATS = mu, FUN = "+") mvnXXX(x) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/plot.mclustBIC.Rd0000644000176200001440000000421513175053654015440 0ustar liggesusers\name{plot.mclustBIC} \alias{plot.mclustBIC} \title{BIC Plot for Model-Based Clustering} \description{ Plots the BIC values returned by the \code{\link{mclustBIC}} function. 
} \usage{ \method{plot}{mclustBIC}(x, G = NULL, modelNames = NULL, symbols = NULL, colors = NULL, xlab = NULL, ylab = "BIC", ylim = NULL, legendArgs = list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01), \dots) } \arguments{ \item{x}{ Output from \code{mclustBIC}. } \item{G}{ One or more numbers of components corresponding to models fit in \code{x}. The default is to plot the BIC for all of the numbers of components fit. } \item{modelNames}{ One or more model names corresponding to models fit in \code{x}. The default is to plot the BIC for all of the models fit. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique model name plotted. The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique model name plotted. The default is given by \code{mclust.options("classPlotColors")}. } \item{xlab}{ Optional label for the horizontal axis of the BIC plot. } \item{ylab}{ Label for the vertical axis of the BIC plot. } \item{ylim}{ Optional limits for the vertical axis of the BIC plot. } \item{legendArgs}{ Arguments to pass to the \code{legend} function. Set to \code{NULL} for no legend. } \item{\dots}{ Other graphics parameters. } } \value{ A plot of the BIC values.
} \seealso{ \code{\link{mclustBIC}} } \examples{ \dontrun{ plot(mclustBIC(precip), legendArgs = list(x = "bottomleft")) plot(mclustBIC(faithful)) plot(mclustBIC(iris[,-5])) } } \keyword{cluster} % docclass is function mclust/man/errorBars.Rd0000644000176200001440000000317212542725574014604 0ustar liggesusers\name{errorBars} \alias{errorBars} \title{Draw error bars on a plot} \description{ Draw error bars at x from upper to lower. If \code{horizontal = FALSE} (default) bars are drawn vertically, otherwise horizontally. } \usage{ errorBars(x, upper, lower, width = 0.1, code = 3, angle = 90, horizontal = FALSE, \dots) } \arguments{ \item{x}{A vector of values where the bars must be drawn.} \item{upper}{A vector of upper values where the bars must end.} \item{lower}{A vector of lower values where the bars must start.} \item{width}{A value specifying the width of the end-point segment.} \item{code}{An integer code specifying the kind of arrows to be drawn. For details see \code{\link[graphics]{arrows}}.} \item{angle}{A value specifying the angle at the arrow edge. 
For details see \code{\link[graphics]{arrows}}.} \item{horizontal}{A logical specifying if bars should be drawn vertically (default) or horizontally.} \item{\dots}{Further arguments are passed to \code{\link[graphics]{arrows}}.} } %\value{} \examples{ par(mfrow=c(2,2)) # Create a simple example dataset x <- 1:5 n <- c(10, 15, 12, 6, 3) se <- c(1, 1.2, 2, 1, .5) # upper and lower bars b <- barplot(n, ylim = c(0, max(n)*1.5)) errorBars(b, lower = n-se, upper = n+se, lwd = 2, col = "red3") # one side bars b <- barplot(n, ylim = c(0, max(n)*1.5)) errorBars(b, lower = n, upper = n+se, lwd = 2, col = "red3", code = 1) # plot(x, n, ylim = c(0, max(n)*1.5), pch = 0) errorBars(x, lower = n-se, upper = n+se, lwd = 2, col = "red3") # dotchart(n, labels = x, pch = 19, xlim = c(0, max(n)*1.5)) errorBars(x, lower = n-se, upper = n+se, col = "red3", horizontal = TRUE) } mclust/man/priorControl.Rd0000644000176200001440000000330713175055124015325 0ustar liggesusers\name{priorControl} \alias{priorControl} \title{ Conjugate Prior for Gaussian Mixtures. } \description{ Specify a conjugate prior for Gaussian mixtures. } \usage{ priorControl(functionName = "defaultPrior", \dots) } \arguments{ \item{functionName}{ The name of the function specifying the conjugate prior. By default the function \code{\link{defaultPrior}} is used, and this can also be used as a template for alternative specification. } \item{\dots}{ Optional named arguments to the function specified in \code{functionName} together with their values. } } \value{ A list with the function name as the first component. The remaining components (if any) consist of a list of arguments to the function with assigned values. } \details{ The function \code{priorControl} is used to specify a conjugate prior for EM within \emph{MCLUST}.\cr Note that, as described in \code{\link{defaultPrior}}, in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. 
those available in \emph{MCLUST} up to version 4.4. } \references{ C. Fraley and A. E. Raftery (2007). Bayesian regularization for normal mixture estimation and model-based clustering. \emph{Journal of Classification 24:155-181}. } \seealso{ \code{\link{mclustBIC}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{defaultPrior}} } \examples{ # default prior irisBIC <- mclustBIC(iris[,-5], prior = priorControl()) summary(irisBIC, iris[,-5]) # no prior on the mean; default prior on variance irisBIC <- mclustBIC(iris[,-5], prior = priorControl(shrinkage = 0)) summary(irisBIC, iris[,-5]) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/mstepE.Rd0000644000176200001440000001172013175053522014064 0ustar liggesusers\name{mstepE} \alias{mstepE} \alias{mstepV} \alias{mstepEII} \alias{mstepVII} \alias{mstepEEI} \alias{mstepVEI} \alias{mstepEVI} \alias{mstepVVI} \alias{mstepEEE} \alias{mstepEEV} \alias{mstepVEV} \alias{mstepVVV} \alias{mstepEVE} \alias{mstepEVV} \alias{mstepVEE} \alias{mstepVVE} \title{ M-step for a parameterized Gaussian mixture model. } \description{ Maximization step in the EM algorithm for a parameterized Gaussian mixture model. 
} \usage{ mstepE( data, z, prior = NULL, warn = NULL, \dots) mstepV( data, z, prior = NULL, warn = NULL, \dots) mstepEII( data, z, prior = NULL, warn = NULL, \dots) mstepVII( data, z, prior = NULL, warn = NULL, \dots) mstepEEI( data, z, prior = NULL, warn = NULL, \dots) mstepVEI( data, z, prior = NULL, warn = NULL, control = NULL, \dots) mstepEVI( data, z, prior = NULL, warn = NULL, \dots) mstepVVI( data, z, prior = NULL, warn = NULL, \dots) mstepEEE( data, z, prior = NULL, warn = NULL, \dots) mstepEEV( data, z, prior = NULL, warn = NULL, \dots) mstepVEV( data, z, prior = NULL, warn = NULL, control = NULL,\dots) mstepVVV( data, z, prior = NULL, warn = NULL, \dots) mstepEVE( data, z, prior = NULL, warn = NULL, control = NULL, \dots) mstepEVV( data, z, prior = NULL, warn = NULL, \dots) mstepVEE( data, z, prior = NULL, warn = NULL, control = NULL, \dots) mstepVVE( data, z, prior = NULL, warn = NULL, control = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by \code{mclust.options("warn")}. } \item{control}{ Values controlling termination for models \code{"VEI"} and \code{"VEV"} that have an iterative M-step. This should be a list with components named \emph{itmax} and \emph{tol}. 
These components can be of length 1 or 2; in the latter case, \code{mstep} will use the second value, under the assumption that the first applies to an outer iteration (as in the function \code{me}). The default uses the default values from the function \code{emControl}, which sets no limit on the number of iterations, and a relative tolerance of \code{sqrt(.Machine$double.eps)} on successive iterates. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{Attributes:}{ \code{"info"} For those models with iterative M-steps (\code{"VEI"} and \code{"VEV"}), information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \note{ This function computes the M-step only for MVN mixtures, so in analyses involving noise, the conditional probabilities input should exclude those for the noise component. \cr In contrast to \code{me} for the EM algorithm, computations in \code{mstep} are carried out unless failure due to overflow would occur. 
To impose stricter tolerances on a single \code{mstep}, use \code{me} with the \emph{itmax} component of the \code{control} argument set to 1. } \seealso{ \code{\link{mstep}}, \code{\link{me}}, \code{\link{estep}}, \code{\link{mclustVariance}}, \code{\link{priorControl}}, \code{\link{emControl}}. } \examples{ \dontrun{ mstepVII(data = iris[,-5], z = unmap(iris[,5]))} } \keyword{cluster} mclust/man/estepE.Rd0000644000176200001440000000752613175052017014063 0ustar liggesusers\name{estepE} \alias{estepE} \alias{estepV} \alias{estepEII} \alias{estepVII} \alias{estepEEI} \alias{estepVEI} \alias{estepEVI} \alias{estepVVI} \alias{estepEEE} \alias{estepEEV} \alias{estepVEV} \alias{estepVVV} \alias{estepEVE} \alias{estepEVV} \alias{estepVEE} \alias{estepVVE} \title{ E-step in the EM algorithm for a parameterized Gaussian mixture model. } \description{ Implements the expectation step in the EM algorithm for a parameterized Gaussian mixture model. } \usage{ estepE(data, parameters, warn = NULL, \dots) estepV(data, parameters, warn = NULL, \dots) estepEII(data, parameters, warn = NULL, \dots) estepVII(data, parameters, warn = NULL, \dots) estepEEI(data, parameters, warn = NULL, \dots) estepVEI(data, parameters, warn = NULL, \dots) estepEVI(data, parameters, warn = NULL, \dots) estepVVI(data, parameters, warn = NULL, \dots) estepEEE(data, parameters, warn = NULL, \dots) estepEEV(data, parameters, warn = NULL, \dots) estepVEV(data, parameters, warn = NULL, \dots) estepVVV(data, parameters, warn = NULL, \dots) estepEVE(data, parameters, warn = NULL, \dots) estepEVV(data, parameters, warn = NULL, \dots) estepVEE(data, parameters, warn = NULL, \dots) estepVVE(data, parameters, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. 
} \item{parameters}{ The parameters of the model: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mu}}{ The mean for each component. If there is more than one component, this is a matrix whose columns are the means of the components. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. If not supplied or set to a negative value, the default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } } \item{warn}{ A logical value indicating whether or not certain warnings should be issued. The default is given by \code{mclust.options("warn")}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ Character string identifying the model. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ The input parameters. } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attribute}{ \code{"WARNING"}: An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{estep}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{do.call}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}}.
} \examples{ \dontrun{ msEst <- mstepEII(data = iris[,-5], z = unmap(iris[,5])) names(msEst) estepEII(data = iris[,-5], parameters = msEst$parameters)} } \keyword{cluster} mclust/man/cross.Rd0000644000176200001440000000102413107075407013755 0ustar liggesusers\name{cross} \alias{cross} \title{Simulated Cross Data} \usage{data(cross)} \description{ A 500 by 3 matrix in which the first column is the classification and the remaining columns contain data simulated from two crossed elliptical Gaussians. } \examples{ # This dataset was created as follows \dontrun{ n <- 250 set.seed(0) cross <- rbind(matrix(rnorm(n*2), n, 2) \%*\% diag(c(1,9)), matrix(rnorm(n*2), n, 2) \%*\% diag(c(1,9))[,2:1]) cross <- cbind(c(rep(1,n),rep(2,n)), cross) } } \keyword{datasets} mclust/man/clustCombi-internals.Rd0000644000176200001440000000034612460535131016727 0ustar liggesusers\name{clustCombi-internal} \title{Internal clustCombi functions} \alias{combi} \alias{pcws2_reg} \alias{pcws3_reg} \alias{xlog} \description{ Internal functions not intended to be called directly by users. } \keyword{internal} mclust/man/hclass.Rd0000644000176200001440000000221313175052541014101 0ustar liggesusers\name{hclass} \alias{hclass} \title{ Classifications from Hierarchical Agglomeration } \description{ Determines the classifications corresponding to different numbers of groups given merge pairs from hierarchical agglomeration. } \usage{ hclass(hcPairs, G) } \arguments{ \item{hcPairs}{ A numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of agglomerative hierarchical clustering. } \item{G}{ An integer or vector of integers giving the number of clusters for which the corresponding classifications are wanted. } } \value{ A matrix with \code{length(G)} columns, each column corresponding to a classification. Columns are indexed by the character representation of the integers in \code{G}.
} \seealso{ \code{\link{hc}}, \code{\link{hcE}} } \examples{ hcTree <- hc(modelName="VVV", data = iris[,-5]) cl <- hclass(hcTree,c(2,3)) \dontrun{ par(pty = "s", mfrow = c(1,1)) clPairs(iris[,-5],cl=cl[,"2"]) clPairs(iris[,-5],cl=cl[,"3"]) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/mclustBootstrapLRT.Rd0000644000176200001440000001111613026470160016411 0ustar liggesusers\name{mclustBootstrapLRT} \alias{mclustBootstrapLRT} \alias{print.mclustBootstrapLRT} \alias{plot.mclustBootstrapLRT} \title{Bootstrap Likelihood Ratio Test for the Number of Mixture Components} \description{Perform the likelihood ratio test (LRT) for assessing the number of mixture components in a specific finite mixture model parameterisation. The observed significance is approximated by using the (parametric) bootstrap for the likelihood ratio test statistic (LRTS).} \usage{ mclustBootstrapLRT(data, modelName = NULL, nboot = 999, level = 0.05, maxG = NULL, verbose = interactive(), \dots) \method{print}{mclustBootstrapLRT}(x, \dots) \method{plot}{mclustBootstrapLRT}(x, G = 1, hist.col = "grey", hist.border = "lightgrey", breaks = "Scott", col = "forestgreen", lwd = 2, lty = 3, main = NULL, \dots) } \arguments{ \item{data}{A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.} \item{modelName}{A character string indicating the mixture model to be fitted. The help file for \code{\link{mclustModelNames}} describes the available models.} \item{nboot}{The number of bootstrap replications to use (by default 999).} \item{level}{The significance level to be used to terminate the sequential bootstrap procedure.} \item{maxG}{The maximum number of mixture components \eqn{G} to test. 
If not provided, the procedure stops when a test is not significant at the specified \code{level}.} \item{verbose}{A logical controlling if a text progress bar is displayed during the bootstrap procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise.} \item{\dots}{Further arguments passed to or from other methods. In particular, see the optional arguments in \code{\link{mclustBIC}}.} \item{x}{An \code{'mclustBootstrapLRT'} object.} \item{G}{A value specifying the number of components for which to plot the bootstrap distribution.} \item{hist.col}{The colour to be used to fill the bars of the histogram.} \item{hist.border}{The color of the border around the bars of the histogram.} \item{breaks}{See the argument in function \code{\link[graphics]{hist}}.} \item{col, lwd, lty}{The color, line width and line type to be used to represent the observed LRT statistic.} \item{main}{The title for the graph.} } \details{The implemented algorithm for computing the LRT observed significance using the bootstrap is the following. Let \eqn{G_0} be the number of mixture components under the null hypothesis versus \eqn{G_1 = G_0+1} under the alternative. Bootstrap samples are drawn by simulating data under the null hypothesis. Then, the p-value may be approximated using eq. (13) in McLachlan and Rathnayake (2014). Equivalently, using the notation of Davison and Hinkley (1997), it may be computed as \deqn{\textnormal{p-value} = \frac{1 + \#\{LRTS^*_b \ge LRTS_{obs}\}}{B+1}}{% p-value = (1 + #{LRTS*_b \ge LRTS_obs}) / (B+1)} where \cr \eqn{B} = number of bootstrap samples \cr \eqn{LRTS_{obs}}{LRTS_obs} = the LRTS computed on the observed data\cr \eqn{LRTS^*_b}{LRTS*_b} = the LRTS computed on the \eqn{b}th bootstrap sample.
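As a plain R illustration of the formula above (this helper is not part of the package interface; the function and argument names are hypothetical):

```r
# Hypothetical helper illustrating the bootstrap p-value formula:
# p-value = (1 + #{LRTS*_b >= LRTS_obs}) / (B + 1)
bootPValue <- function(lrtsBoot, lrtsObs) {
  B <- length(lrtsBoot)                       # number of bootstrap samples
  (1 + sum(lrtsBoot >= lrtsObs)) / (B + 1)    # add-one correction in both terms
}
```

The add-one correction guarantees a strictly positive p-value even when no bootstrap statistic exceeds the observed one.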
} \value{An object of class \code{'mclustBootstrapLRT'} with the following components: \item{G}{A vector of number of components tested under the null hypothesis.} \item{modelName}{A character string specifying the mixture model as provided in the function call (see above).} \item{obs}{The observed values of the LRTS.} \item{boot}{A matrix of dimension \code{nboot} x the number of components tested containing the bootstrap values of LRTS.} \item{p.value}{A vector of p-values.} } \references{ Davison, A. and Hinkley, D. (1997) \emph{Bootstrap Methods and Their Applications}. Cambridge University Press. McLachlan G.J. (1987) On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. \emph{Applied Statistics}, 36, 318-324. McLachlan, G.J. and Peel, D. (2000) \emph{Finite Mixture Models}. Wiley. McLachlan, G.J. and Rathnayake, S. (2014) On the number of components in a Gaussian mixture model. \emph{Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery}, 4(5), pp. 341-355. } \seealso{\code{\link{mclustBIC}}, \code{\link{mclustICL}}, \code{\link{Mclust}}} \examples{ \dontrun{ data(faithful) faithful.boot = mclustBootstrapLRT(faithful, model = "VVV") faithful.boot plot(faithful.boot, G = 1) plot(faithful.boot, G = 2) } } \keyword{htest} \keyword{cluster} mclust/man/adjustedRandIndex.Rd0000644000176200001440000000275313175052444016237 0ustar liggesusers\name{adjustedRandIndex} \alias{adjustedRandIndex} \title{ Adjusted Rand Index } \description{ Computes the adjusted Rand index comparing two classifications. } \usage{ adjustedRandIndex(x, y) } \arguments{ \item{x}{ A numeric or character vector of class labels. } \item{y}{ A numeric or character vector of class labels. The length of \code{y} should be the same as that of \code{x}. } } \value{ The adjusted Rand index comparing the two partitions (a scalar). 
This index has zero expected value in the case of a random partition, and it is bounded above by 1 in the case of perfect agreement between two partitions. } \references{ L. Hubert and P. Arabie (1985) Comparing Partitions, \emph{Journal of Classification}, 2, pp. 193-218. } \seealso{ \code{\link{classError}}, \code{\link{mapClass}}, \code{\link{table}} } \examples{ a <- rep(1:3, 3) a b <- rep(c("A", "B", "C"), 3) b adjustedRandIndex(a, b) a <- sample(1:3, 9, replace = TRUE) a b <- sample(c("A", "B", "C"), 9, replace = TRUE) b adjustedRandIndex(a, b) a <- rep(1:3, 4) a b <- rep(c("A", "B", "C", "D"), 3) b adjustedRandIndex(a, b) irisHCvvv <- hc(modelName = "VVV", data = iris[,-5]) cl3 <- hclass(irisHCvvv, 3) adjustedRandIndex(cl3,iris[,5]) irisBIC <- mclustBIC(iris[,-5]) adjustedRandIndex(summary(irisBIC,iris[,-5])$classification,iris[,5]) adjustedRandIndex(summary(irisBIC,iris[,-5],G=3)$classification,iris[,5]) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/imputePairs.Rd0000644000176200001440000000517213175055571015133 0ustar liggesusers\name{imputePairs} \alias{imputePairs} \title{ Pairwise Scatter Plots showing Missing Data Imputations } \description{ Creates a scatter plot for each pair of variables in the given data, allowing display of imputations for missing values in different colors and symbols than non missing values. } \usage{ imputePairs(data, dataImp, symbols = c(1,16), colors = c("black", "red"), labels, panel = points, ..., lower.panel = panel, upper.panel = panel, diag.panel = NULL, text.panel = textPanel, label.pos = 0.5 + has.diag/3, cex.labels = NULL, font.labels = 1, row1attop = TRUE, gap = 0.2) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations containing missing values. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{dataImp}{ The dataset \code{data} with missing values imputed.
} \item{symbols}{ Either an integer or character vector assigning plotting symbols to the nonmissing data and imputed values, respectively. The default is a closed circle for the nonmissing data and an open circle for the imputed values. } \item{colors}{ Either an integer or character vector assigning colors to the nonmissing data and imputed values, respectively. The default is black for the nonmissing data and red for the imputed values. } \item{labels}{ As in function \code{pairs}. } \item{panel}{ As in function \code{pairs}. } \item{\dots}{ As in function \code{pairs}. } \item{lower.panel}{ As in function \code{pairs}. } \item{upper.panel}{ As in function \code{pairs}. } \item{diag.panel}{ As in function \code{pairs}. } \item{text.panel}{ As in function \code{pairs}. } \item{label.pos}{ As in function \code{pairs}. } \item{cex.labels}{ As in function \code{pairs}. } \item{font.labels}{ As in function \code{pairs}. } \item{row1attop}{ As in function \code{pairs}. } \item{gap}{ As in function \code{pairs}. } } \value{ A pairs plot displaying the location of missing and nonmissing values. } \references{ Schafer J. L. (1997). Analysis of Incomplete Multivariate Data, Chapman and Hall. } \seealso{ \code{\link{pairs}}, \code{\link{imputeData}} } \examples{ \dontrun{ # Note that package 'mix' must be installed data(stlouis, package = "mix") # impute the continuous variables in the stlouis data stlimp <- imputeData(stlouis[,-(1:3)]) # plot imputed values imputePairs(stlouis[,-(1:3)], stlimp) } } \keyword{cluster} mclust/man/mclustModelNames.Rd0000644000176200001440000000431113175053370016102 0ustar liggesusers\name{mclustModelNames} \alias{mclustModelNames} \title{ MCLUST Model Names } \description{ Description of model names used in the \emph{MCLUST} package.
} \usage{ mclustModelNames(model) } \arguments{ \item{model}{A string specifying the model.} } \details{ The following models are available in package \pkg{mclust}: \tabular{lcl}{ \bold{univariate mixture} \tab \tab \cr \code{"E"} \tab = \tab equal variance (one-dimensional) \cr \code{"V"} \tab = \tab variable variance (one-dimensional) \cr \bold{multivariate mixture} \tab \tab \cr \code{"EII"} \tab = \tab spherical, equal volume \cr \code{"VII"} \tab = \tab spherical, unequal volume \cr \code{"EEI"} \tab = \tab diagonal, equal volume and shape\cr \code{"VEI"} \tab = \tab diagonal, varying volume, equal shape\cr \code{"EVI"} \tab = \tab diagonal, equal volume, varying shape \cr \code{"VVI"} \tab = \tab diagonal, varying volume and shape \cr \code{"EEE"} \tab = \tab ellipsoidal, equal volume, shape, and orientation \cr \code{"EVE"} \tab = \tab ellipsoidal, equal volume and orientation (*)\cr \code{"VEE"} \tab = \tab ellipsoidal, equal shape and orientation (*)\cr \code{"VVE"} \tab = \tab ellipsoidal, equal orientation (*)\cr \code{"EEV"} \tab = \tab ellipsoidal, equal volume and equal shape\cr \code{"VEV"} \tab = \tab ellipsoidal, equal shape \cr \code{"EVV"} \tab = \tab ellipsoidal, equal volume (*)\cr \code{"VVV"} \tab = \tab ellipsoidal, varying volume, shape, and orientation \cr \bold{single component} \tab \tab \cr \code{"X"} \tab = \tab univariate normal \cr \code{"XII"} \tab = \tab spherical multivariate normal \cr \code{"XXI"} \tab = \tab diagonal multivariate normal \cr \code{"XXX"} \tab = \tab ellipsoidal multivariate normal } (*) new models in \pkg{mclust} version >= 5.0.0. 
} \value{Returns a list with the following components: \item{model}{a character string indicating the model (as in input).} \item{type}{the description of the indicated model (see Details section).} } \seealso{ \code{\link{Mclust}}, \code{\link{mclustBIC}} } \examples{ mclustModelNames("E") mclustModelNames("EEE") mclustModelNames("VVV") mclustModelNames("XXI") } \keyword{cluster} mclust/man/randProj.Rd0000644000176200001440000001260413205037107014403 0ustar liggesusers\name{randProj} \alias{randProj} \title{ Random projections of multidimensional data modeled by an MVN mixture. } \description{ Plots random projections given multidimensional data and parameters of an MVN mixture model for the data. } \usage{ randProj(data, seeds=0, parameters=NULL, z=NULL, classification=NULL, truth=NULL, uncertainty=NULL, what = c("classification", "errors", "uncertainty"), quantiles = c(0.75, 0.95), symbols=NULL, colors=NULL, scale = FALSE, xlim=NULL, ylim=NULL, CEX = 1, PCH = ".", main = FALSE, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{seeds}{ A vector of integer seeds for random number generation. Elements should be in the range \code{0:1000}. Each seed should produce a different projection. } \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposed ellipses on the plot. The relevant components are as follows: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details.
} } } \item{z}{ A matrix in which the \code{[i,k]}th entry gives the probability of observation \emph{i} belonging to the \emph{k}th class. Used to compute \code{classification} and \code{uncertainty} if those arguments are not provided. } \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. If present, the argument \code{z} is ignored. } \item{truth}{ A numeric or character vector giving a known classification of each data point. If \code{classification} or \code{z} is also present, this is used for displaying classification errors. } \item{uncertainty}{ A numeric vector of values in \emph{(0,1)} giving the uncertainty of each data point. If present, the argument \code{z} is ignored. } \item{what}{ Choose from one of the following three options: \code{"classification"} (default), \code{"errors"}, \code{"uncertainty"}. } \item{quantiles}{ A vector of length 2 giving quantiles used in plotting uncertainty. The smallest symbols correspond to the smallest quantile (lowest uncertainty), medium-sized (open) symbols to points falling between the given quantiles, and large (filled) symbols to those in the largest quantile (highest uncertainty). The default is \emph{(0.75,0.95)}. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}.
} \item{scale}{ A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: \code{scale=FALSE} } \item{xlim, ylim}{ Arguments specifying bounds for the abscissa and ordinate of the plot. This may be useful when comparing plots. } \item{CEX}{ An argument specifying the size of the plotting symbols. The default value is 1. } \item{PCH}{ An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".". } \item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{\dots}{ Other graphics parameters. } } \value{ A plot showing a random two-dimensional projection of the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors. } \seealso{ \code{\link{clPairs}}, \code{\link{coordProj}}, \code{\link{mclust2Dplot}}, \code{\link{mclust.options}} } \examples{ \dontrun{ est <- meVVV(iris[,-5], unmap(iris[,5])) par(pty = "s", mfrow = c(1,1)) randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z, what = "classification", main = TRUE) randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z, truth = iris[,5], what = "errors", main = TRUE) randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z, what = "uncertainty", main = TRUE) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21.
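The idea behind a random projection can be illustrated with a short base-R sketch (a hypothetical illustration, not the \code{randProj} implementation): draw a random Gaussian matrix, orthonormalize it via a QR decomposition, and project the data onto the resulting two-dimensional basis.

```r
# Hypothetical sketch of a random 2-D projection (not the randProj() code):
# an orthonormal basis is obtained from the QR decomposition of a random
# Gaussian matrix, and the data are projected onto it.
set.seed(1)
X <- as.matrix(iris[, -5])                  # 150 x 4 numeric data
B <- matrix(rnorm(ncol(X) * 2), ncol(X), 2) # random 4 x 2 Gaussian matrix
Q <- qr.Q(qr(B))                            # orthonormal 4 x 2 basis
P <- X %*% Q                                # projected coordinates, 150 x 2
plot(P, xlab = "Projection 1", ylab = "Projection 2")
```

Different seeds yield different bases, which is how a sequence of distinct projections of the same data can be produced.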
mclust/man/summary.mclustBIC.Rd0000644000176200001440000000760413175055217016161 0ustar liggesusers\name{summary.mclustBIC} \alias{summary.mclustBIC} \alias{print.summary.mclustBIC} \alias{summaryMclustBIC} \alias{summaryMclustBICn} \alias{printSummaryMclustBIC} \alias{printSummaryMclustBICn} \title{Summary function for model-based clustering via BIC} \description{ Optimal model characteristics and classification for model-based clustering via \code{mclustBIC}. } \usage{ \method{summary}{mclustBIC}(object, data, G, modelNames, \dots) } \arguments{ \item{object}{ An \code{'mclustBIC'} object, which is the result of applying \code{mclustBIC} to \code{data}. } \item{data}{ The matrix or vector of observations used to generate `object'. } \item{G}{ A vector of integers giving the numbers of mixture components (clusters) from which the best model according to BIC will be selected (\code{as.character(G)} must be a subset of the row names of \code{object}). The default is to select the best model for all numbers of mixture components used to obtain \code{object}. } \item{modelNames}{ A vector of character strings giving the model parameterizations from which the best model according to BIC will be selected (\code{as.character(model)} must be a subset of the column names of \code{object}). The default is to select the best model for all parameterizations used to obtain \code{object}. } \item{\dots}{ Not used. For generic/method consistency. } } \value{ A list giving the optimal (according to BIC) parameters, conditional probabilities \code{z}, and log-likelihood, together with the associated classification and its uncertainty. The details of the output components are as follows: \item{modelName}{ A character string denoting the model corresponding to the optimal BIC. } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The number of mixture components in the model corresponding to the optimal BIC.
} \item{bic}{ The optimal BIC value. } \item{loglik}{ The log-likelihood corresponding to the optimal BIC. } \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th class. } \item{classification}{ \code{map(z)}: The classification corresponding to \code{z}. } \item{uncertainty}{ The uncertainty associated with the classification. } \item{Attributes:}{ \code{"bestBICvalues"} Some of the best BIC values for the analysis.\cr \code{"prior"} The prior as specified in the input.\cr \code{"control"} The control parameters for EM as specified in the input.\cr \code{"initialization"} The parameters used to initialize EM for computing the maximum likelihood values used to obtain the BIC. } } \seealso{ \code{\link{mclustBIC}}, \code{\link{mclustModel}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) summary(irisBIC, iris[,-5]) summary(irisBIC, iris[,-5], G = 1:6, modelNames = c("VII", "VVI", "VVV")) } \keyword{cluster} % docclass is function mclust/man/mapClass.Rd0000644000176200001440000000240312460535131014365 0ustar liggesusers\name{mapClass} \alias{mapClass} \title{ Correspondence between classifications. } \description{ Best correspondence between classes given two vectors viewed as alternative classifications of the same object.
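For intuition, a simplified base-R sketch (not the \code{mapClass} implementation) finds such a correspondence by cross-tabulating the two labelings and matching each class of \code{a} to the class of \code{b} with which it co-occurs most often:

```r
# Simplified illustration of class matching via a contingency table
# (assumes each class has a unique majority match; mapClass() is more general).
a <- rep(1:3, 3)
b <- rep(c("A", "B", "C"), 3)
tab <- table(a, b)                        # co-occurrence counts
aTOb <- colnames(tab)[apply(tab, 1, which.max)]
names(aTOb) <- rownames(tab)
aTOb                                      # class 1 -> "A", 2 -> "B", 3 -> "C"
```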
} \usage{ mapClass(a, b) } \arguments{ \item{a}{ A numeric or character vector of class labels. } \item{b}{ A numeric or character vector of class labels. Must have the same length as \code{a}. } } \value{ A list with two named elements, \code{aTOb} and \code{bTOa} which are themselves lists. The \code{aTOb} list has a component corresponding to each unique element of \code{a}, which gives the element or elements of \code{b} that result in the closest class correspondence. The \code{bTOa} list has a component corresponding to each unique element of \code{b}, which gives the element or elements of \code{a} that result in the closest class correspondence. } \seealso{ \code{\link{mapClass}}, \code{\link{classError}}, \code{\link{table}} } \examples{ a <- rep(1:3, 3) a b <- rep(c("A", "B", "C"), 3) b mapClass(a, b) a <- sample(1:3, 9, replace = TRUE) a b <- sample(c("A", "B", "C"), 9, replace = TRUE) b mapClass(a, b) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/emE.Rd0000644000176200001440000001375013175051764013350 0ustar liggesusers\name{emE} \alias{emE} \alias{emV} \alias{emX} \alias{emEII} \alias{emVII} \alias{emEEI} \alias{emVEI} \alias{emEVI} \alias{emVVI} \alias{emEEE} \alias{emEEV} \alias{emVEV} \alias{emVVV} \alias{emEVE} \alias{emEVV} \alias{emVEE} \alias{emVVE} \alias{emXII} \alias{emXXI} \alias{emXXX} \title{ EM algorithm starting with E-step for a parameterized Gaussian mixture model. } \description{ Implements the EM algorithm for a parameterized Gaussian mixture model, starting with the expectation step. 
} \usage{ emE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emX(data, prior = NULL, warn = NULL, \dots) emEII(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVII(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEEI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVEI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEVI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVVI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEEE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEEV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVEV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVVV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEVE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEVV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVEE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVVE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emXII(data, prior = NULL, warn = NULL, \dots) emXXI(data, prior = NULL, warn = NULL, \dots) emXXX(data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. There should be one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term.
} \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. The default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{priorControl}. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{warn}{ A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given in \code{mclust.options("warn")}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. 
} \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{me}}, \code{\link{mstep}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}}. } \examples{ \dontrun{ msEst <- mstepEEE(data = iris[,-5], z = unmap(iris[,5])) names(msEst) emEEE(data = iris[,-5], parameters = msEst$parameters)} } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/summary.Mclust.Rd0000644000176200001440000000275113175055207015600 0ustar liggesusers\name{summary.Mclust} \alias{summary.Mclust} \alias{print.summary.Mclust} \title{Summarizing Gaussian Finite Mixture Model Fits} \description{Summary method for class \code{"Mclust"}.} \usage{ \method{summary}{Mclust}(object, parameters = FALSE, classification = FALSE, \dots) \method{print}{summary.Mclust}(x, digits = getOption("digits"), \dots) } \arguments{ \item{object}{An object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}} or \code{\link{densityMclust}}.} \item{x}{An object of class \code{'summary.Mclust'}, usually the result of a call to \code{summary.Mclust}.} \item{parameters}{Logical; if \code{TRUE}, the parameters of mixture components are printed.} \item{classification}{Logical; if \code{TRUE}, the MAP classification/clustering of observations is printed.} \item{digits}{The number of significant digits to use when printing.} \item{\dots}{Further arguments passed to or from other methods.} } % \details{} %
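As background for the criterion reported by the summary, the BIC used throughout mclust is, up to sign convention, 2*loglik - npar*log(n), so larger values are better. A minimal base-R sketch for a single univariate Gaussian component fitted by maximum likelihood (an illustrative toy case, not the package's own code):

```r
# Illustrative BIC for one univariate Gaussian fitted by maximum likelihood;
# mclust handles all mixture parameterizations, this is the simplest case.
x <- iris$Sepal.Length
n <- length(x)
mu <- mean(x)
sigma2 <- sum((x - mu)^2) / n            # MLE of the variance
loglik <- sum(dnorm(x, mu, sqrt(sigma2), log = TRUE))
bic <- 2 * loglik - 2 * log(n)           # npar = 2: mean and variance
bic
```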
\value{ An object of class \code{'summary.Mclust'}, a list of summary quantities printed by the corresponding \code{print} method. } \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}, \code{\link{densityMclust}}.} \examples{ mod1 = Mclust(iris[,1:4]) summary(mod1) summary(mod1, parameters = TRUE, classification = TRUE) mod2 = Mclust(iris[,1:4], G = 1) summary(mod2, parameters = TRUE, classification = TRUE) mod3 = Mclust(iris[,1:4], prior = priorControl()) summary(mod3) mod4 = Mclust(iris[,1:4], prior = priorControl(functionName="defaultPrior", shrinkage=0.1)) summary(mod4, parameters = TRUE, classification = TRUE) } \keyword{cluster} mclust/man/bic.Rd0000644000176200001440000000355113205037164013365 0ustar liggesusers\name{bic} \alias{bic} \title{ BIC for Parameterized Gaussian Mixture Models } \description{ Computes the BIC (Bayesian Information Criterion) for parameterized mixture models given the log-likelihood, the number of observations, the dimension of the data, and the number of mixture components in the model. } \usage{ bic(modelName, loglik, n, d, G, noise=FALSE, equalPro=FALSE, ...) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{loglik}{ The log-likelihood for a data set with respect to the Gaussian mixture model specified in the \code{modelName} argument. } \item{n}{ The number of observations in the data used to compute \code{loglik}. } \item{d}{ The dimension of the data used to compute \code{loglik}. } \item{G}{ The number of components in the Gaussian mixture model used to compute \code{loglik}. } \item{noise}{ A logical variable indicating whether or not the model includes an optional Poisson noise component. The default is to assume no noise component. } \item{equalPro}{ A logical variable indicating whether or not the components in the model are assumed to be present in equal proportion. The default is to assume unequal mixing proportions. } \item{\dots}{ Catches unused arguments in an indirect or list call via \code{do.call}.
} } \value{ The BIC or Bayesian Information Criterion for the given input arguments. } \seealso{ \code{\link{mclustBIC}}, \code{\link{nVarParams}}, \code{\link{mclustModelNames}}. } \examples{ \dontrun{ n <- nrow(iris) d <- ncol(iris)-1 G <- 3 emEst <- me(modelName="VVI", data=iris[,-5], unmap(iris[,5])) names(emEst) args(bic) bic(modelName="VVI", loglik=emEst$loglik, n=n, d=d, G=G) # do.call("bic", emEst) ## alternative call } } \keyword{cluster} mclust/man/combiTree.Rd0000644000176200001440000000311013107070540014534 0ustar liggesusers\name{combiTree} \alias{combiTree} \title{Tree structure obtained from combining mixture components} \description{The method implemented in \code{\link{clustCombi}} can be used for combining Gaussian mixture components for clustering. This provides a hierarchical structure which can be graphically represented as a tree.} \usage{ combiTree(object, what = c("entropy", "step"), type = c("triangle", "rectangle"), edgePar = list(col = "darkgray", lwd = 2), \dots) } \arguments{ \item{object}{ An object of class \code{'clustCombi'} resulting from a call to \code{\link{clustCombi}}. } \item{what}{ A string specifying the quantity used to draw the vertical axis. Possible values are \code{"entropy"} (default), and \code{"step"}. } \item{type}{ A string specifying the dendrogram's type. Possible values are \code{"triangle"} (default), and \code{"rectangle"}. } \item{edgePar}{ A list of plotting parameters. See \code{\link[stats]{dendrogram}}. } \item{\dots}{Further arguments passed to or from other methods.} } %\details{} \value{ The function always draws a tree and invisibly returns an object of class \code{'dendrogram'} for fine tuning. } %\references{} \author{L.
Scrucca} %\note{} \seealso{\code{\link{clustCombi}}} \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) output <- clustCombi(data = ex4.1) combiTree(output) combiTree(output, type = "rectangle") combiTree(output, what = "step") combiTree(output, what = "step", type = "rectangle")} } \keyword{cluster} \keyword{hplot} mclust/man/emControl.Rd0000644000176200001440000000442413175051742014576 0ustar liggesusers\name{emControl} \alias{emControl} \title{ Set control values for use with the EM algorithm. } \description{ Supplies a list of values, including tolerances for singularity and convergence assessment, for use with functions involving EM within \emph{MCLUST}. } \usage{ emControl(eps, tol, itmax, equalPro) } \arguments{ \item{eps}{ A scalar tolerance associated with deciding when to terminate computations due to computational singularity in covariances. Smaller values of \code{eps} allow computations to proceed nearer to singularity. The default is the relative machine precision \code{.Machine$double.eps}, which is approximately \eqn{2e-16} on IEEE-compliant machines. } \item{tol}{ A vector of length two giving relative convergence tolerances for the log-likelihood and for parameter convergence in the inner loop for models with iterative M-step ("VEI", "EVE", "VEE", "VVE", "VEV"), respectively. The default is \code{c(1.e-5,sqrt(.Machine$double.eps))}. If only one number is supplied, it is used as the tolerance for the outer iterations and the tolerance for the inner iterations is as in the default. } \item{itmax}{ A vector of length two giving integer limits on the number of EM iterations and on the number of iterations in the inner loop for models with iterative M-step ("VEI", "EVE", "VEE", "VVE", "VEV"), respectively. The default is \code{c(.Machine$integer.max, .Machine$integer.max)} allowing termination to be completely governed by \code{tol}. If only one number is supplied, it is used as the iteration limit for the outer iteration only.
} \item{equalPro}{ Logical variable indicating whether or not the mixing proportions are equal in the model. Default: \code{equalPro = FALSE}. } } \value{ A named list in which the names are the names of the arguments and the values are the values supplied to the arguments. } \details{ \code{emControl} is provided for assigning values and defaults for EM within \emph{MCLUST}. } \seealso{ \code{\link{em}}, \code{\link{estep}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{mclustBIC}} } \examples{ irisBIC <- mclustBIC(iris[,-5], control = emControl(tol = 1.e-6)) summary(irisBIC, iris[,-5]) } \keyword{cluster} mclust/man/cdensE.Rd0000644000176200001440000001053313175050476014036 0ustar liggesusers\name{cdensE} \alias{cdensE} \alias{cdensV} \alias{cdensX} \alias{cdensEII} \alias{cdensVII} \alias{cdensEEI} \alias{cdensVEI} \alias{cdensEVI} \alias{cdensVVI} \alias{cdensEEE} \alias{cdensEEV} \alias{cdensVEV} \alias{cdensVVV} \alias{cdensEVE} \alias{cdensEVV} \alias{cdensVEE} \alias{cdensVVE} \alias{cdensXII} \alias{cdensXXI} \alias{cdensXXX} \title{ Component Density for a Parameterized MVN Mixture Model } \description{ Computes component densities for points in a parameterized MVN mixture model. 
} \usage{ cdensE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensX(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEII(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVII(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEEI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVEI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEVI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVVI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEEE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEEV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVEV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVVV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEVE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEVV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVEE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVVE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensXII(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensXXI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensXXX(data, logarithm = FALSE, parameters, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{logarithm}{ A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{mean}}{ The mean for each component. 
If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric matrix whose \code{[i,j]}th entry is the density of observation \emph{i} in component \emph{j}. The densities are not scaled by mixing proportions. } \note{ When one or more component densities are very large in magnitude, then it may be possible to compute the logarithm of the component densities but not the component densities themselves due to overflow. } \seealso{ \code{\link{cdens}}, \code{\link{dens}}, \code{\link{mclustVariance}}, \code{\link{mstep}}, \code{\link{mclust.options}}, \code{\link{do.call}}. 
} \examples{ \dontrun{ z2 <- unmap(hclass(hcVVV(faithful),2)) # initial value for 2 class case model <- meVVV(data=faithful, z=z2) cdensVVV(data=faithful, logarithm = TRUE, parameters = model$parameters) data(cross) z2 <- unmap(cross[,1]) model <- meEEV(data = cross[,-1], z = z2) EEVdensities <- cdensEEV( data = cross[,-1], parameters = model$parameters) cbind(cross[,-1],map(EEVdensities))} } \keyword{cluster} mclust/man/randomPairs.Rd0000644000176200001440000000216113175055134015106 0ustar liggesusers\name{randomPairs} \alias{randomPairs} \title{Random hierarchical structure} \description{Create a hierarchical structure using a random partition of the data.} \usage{ randomPairs(data, seed, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{seed}{ Optional single value, interpreted as an integer, specifying the seed for random partition. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of a random agglomerative hierarchical clustering. } \seealso{ \code{\link{hc}}, \code{\link{hclass}} \code{\link{hcVVV}} } \examples{ data <- iris[,1:4] randPairs <- randomPairs(data) str(randPairs) # start model-based clustering from a random partition mod <- Mclust(data, initialization = list(hcPairs = randPairs)) summary(mod) } \keyword{cluster} mclust/man/partconv.Rd0000644000176200001440000000164012460535131014460 0ustar liggesusers\name{partconv} \alias{partconv} \title{Numeric Encoding of a Partitioning} \description{ Converts a vector interpreted as a classification or partitioning into a numeric vector. } \usage{ partconv(x, consec=TRUE) } \arguments{ \item{x}{ A vector interpreted as a classification or partitioning. 
} \item{consec}{ Logical value indicating whether or not consecutive class numbers should be used. } } \value{ Numeric encoding of \code{x}. When \code{consec = TRUE}, the distinct values in \code{x} are numbered by the order in which they appear. When \code{consec = FALSE}, each distinct value in \code{x} is numbered by the index corresponding to its first appearance in \code{x}. } \seealso{ \code{\link{partuniq}} } \examples{ partconv(iris[,5]) set.seed(0) cl <- sample(LETTERS[1:9], 25, replace=TRUE) partconv(cl, consec=FALSE) partconv(cl, consec=TRUE) } \keyword{cluster} mclust/man/mclust-internal.Rd0000644000176200001440000000131512674302624015752 0ustar liggesusers\name{mclust-internal} \title{Internal MCLUST functions} \alias{pickBIC} \alias{bicFill} \alias{grid1} \alias{grid2} \alias{mvn2plot} \alias{vecnorm} \alias{traceW} \alias{qclass} \alias{unchol} \alias{shapeO} \alias{orth2} \alias{charconv} \alias{[.mclustBIC} \alias{checkModelName} \alias{balanced.folds} \alias{permute.rows} \alias{projpar.MclustDR} \alias{projdir.MclustDR} \alias{mvdnorm} \alias{ellipse} \alias{eigen.decomp} \alias{dmvnorm} \alias{getParameters.MclustDA} \alias{as.Mclust} \alias{as.Mclust.default} \alias{as.densityMclust} \alias{as.densityMclust.default} \alias{as.densityMclust.Mclust} \description{ Internal functions not intended to be called directly by users. } \keyword{internal} mclust/man/entPlot.Rd0000644000176200001440000000535612511765456014265 0ustar liggesusers\name{entPlot} \alias{entPlot} \title{ Plot Entropy Plots } \description{ Plot "entropy plots" to help select the number of classes from a hierarchy of combined clusterings. } \usage{ entPlot(z, combiM, abc = c("standard", "normalized"), reg = 2, ...) } \arguments{ \item{z}{ A matrix whose \code{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th class, for the initial solution (i.e. before any combining). Typically, the one returned by \code{Mclust}/BIC.
} \item{combiM}{ A list of "combining matrices" (as provided by \code{clustCombi}), ie \code{combiM[[K]]} is the matrix whose \emph{k}th row contains only zeros, but in columns corresponding to the labels of the classes in the \emph{(K+1)}-classes solution to be merged to get the \emph{K}-classes combined solution. \code{combiM} must contain matrices from \code{K} = number of classes in \code{z} to one. } \item{abc}{ Choose one or more of: "standard", "normalized", to specify whether the number of observations involved in each combining step should be taken into account to scale the plots or not. } \item{reg}{ The number of parts of the piecewise linear regression for the entropy plots. Choose one or more of: 2 (for 1 change-point), 3 (for 2 change-points). } \item{\dots}{ Other graphical arguments to be passed to the plot functions. } } \details{ Please see the article cited in the references for more details. A clear elbow in the "entropy plot" should suggest the user to consider the corresponding number(s) of class(es). } \value{ if \code{abc = "standard"}, plots the entropy against the number of clusters and the difference between the entropy of successive combined solutions against the number of clusters. if \code{abc = "normalized"}, plots the entropy against the cumulated number of observations involved in the successive combining steps and the difference between the entropy of successive combined solutions divided by the number of observations involved in the corresponding combining step against the number of clusters. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. 
Scrucca } \seealso{ \code{\link{plot.clustCombi}}, \code{\link{combiPlot}}, \code{\link{clustCombi}} } \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) # run Mclust to get the MclustOutput output <- clustCombi(ex4.2, modelNames = "VII") entPlot(output$MclustOutput$z, output$combiM, reg = c(2,3)) # legend: in red, the single-change-point piecewise linear regression; # in blue, the two-change-point piecewise linear regression. } } \keyword{ cluster } mclust/man/GvHD.Rd0000644000176200001440000000411512460535131013414 0ustar liggesusers\name{GvHD} \alias{GvHD} \alias{GvHD.pos} \alias{GvHD.control} \docType{data} \title{GvHD Dataset} \description{ GvHD (Graft-versus-Host Disease) data of Brinkman et al. (2007). Two samples of this flow cytometry data, one from a patient with the GvHD, and the other from a control patient. The GvHD positive and control samples consist of 9083 and 6809 observations, respectively. Both samples include four biomarker variables, namely, CD4, CD8b, CD3, and CD8. The objective of the analysis is to identify CD3+ CD4+ CD8b+ cell sub-populations present in the GvHD positive sample. A treatment of this data by combining mixtures is proposed in Baudry et al. (2010). } \usage{data(GvHD)} \format{ GvHD.pos (positive patient) is a data frame with 9083 observations on the following 4 variables, which are biomarker measurements. \describe{ \item{CD4}{} \item{CD8b}{} \item{CD3}{} \item{CD8}{} } GvHD.control (control patient) is a data frame with 6809 observations on the following 4 variables, which are biomarker measurements. \describe{ \item{CD4}{} \item{CD8b}{} \item{CD3}{} \item{CD8}{} } } \references{ R. R. Brinkman, M. Gasparetto, S.-J. J. Lee, A. J. Ribickas, J. Perkins, W. Janssen, R. Smiley and C. Smith (2007). High-content flow cytometry and temporal data analysis for defining a cellular signature of Graft-versus-Host Disease. \emph{Biology of Blood and Marrow Transplantation, 13: 691-700.} K. Lo, R. R. Brinkman, R. Gottardo (2008). 
Automated gating of flow cytometry data via robust model-based clustering. \emph{Cytometry A, 73: 321-332.} J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \examples{ \dontrun{ data(GvHD) dat <- GvHD.pos[1:500,] # only a few lines for a quick example output <- clustCombi(dat) output # is of class clustCombi # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes plot(output, dat) } } \keyword{datasets} mclust/man/majorityVote.Rd0000644000176200001440000000125613107132441015317 0ustar liggesusers\name{majorityVote} \alias{majorityVote} \title{Majority vote} \description{ A function to compute the majority vote (some would say plurality) label in a vector of labels, breaking ties at random.} \usage{ majorityVote(x) } \arguments{ \item{x}{A vector of values, either numerical or not.} } \value{A list with the following components: \item{table}{A table of votes for each unique value of \code{x}.} \item{ind}{An integer specifying which unique value of \code{x} corresponds to the majority vote.} \item{majority}{A string specifying the majority vote label.} } %\seealso{} \author{L. Scrucca} \examples{ x <- c("A", "C", "A", "B", "C", "B", "A") majorityVote(x) } mclust/man/predict.Mclust.Rd0000644000176200001440000000317113175055063015532 0ustar liggesusers\name{predict.Mclust} \alias{predict.Mclust} \title{Cluster multivariate observations by Gaussian finite mixture modeling} \description{Cluster prediction for multivariate observations based on Gaussian finite mixture models estimated by \code{\link{Mclust}}.} \usage{ \method{predict}{Mclust}(object, newdata, \dots) } \arguments{ \item{object}{an object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}}.} \item{newdata}{a data frame or matrix giving the data. 
If missing, the clustering data obtained from the call to \code{\link{Mclust}} are classified.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a list with the following components: \item{classification}{a factor of predicted cluster labels for \code{newdata}.} \item{z}{a matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in \code{newdata} belongs to the \emph{k}th cluster.} } \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}.} \examples{ model <- Mclust(faithful) # predict cluster for the observed data pred <- predict(model) str(pred) pred$z # equal to model$z pred$classification # equal to model$classification plot(faithful, col = pred$classification, pch = pred$classification) # predict cluster over a grid grid <- apply(faithful, 2, function(x) seq(min(x), max(x), length = 50)) grid <- expand.grid(eruptions = grid[,1], waiting = grid[,2]) pred <- predict(model, grid) plot(grid, col = mclust.options("classPlotColors")[pred$classification], pch = 15, cex = 0.5) points(faithful, pch = model$classification) } \keyword{multivariate} mclust/man/decomp2sigma.Rd0000644000176200001440000000422413175051176015205 0ustar liggesusers\name{decomp2sigma} \alias{decomp2sigma} \title{ Convert mixture component covariances to matrix form. } \description{ Converts covariances from a parameterization by eigenvalue decomposition or Cholesky factorization to representation as a 3-D array. } \usage{ decomp2sigma(d, G, scale, shape, orientation, \dots) } \arguments{ \item{d}{ The dimension of the data. } \item{G}{ The number of components in the mixture model. } \item{scale}{ Either a \emph{G}-vector giving the scale of the covariance (the \emph{d}th root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component. } \item{shape}{ Either a \emph{G} by \emph{d} matrix in which the \emph{k}th column is the shape of the covariance matrix (normalized to have determinant 1) for the \emph{k}th component, or a \emph{d}-vector giving a common shape for all components. } \item{orientation}{ Either a \emph{d} by \emph{d} by \emph{G} array whose \code{[,,k]}th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the \emph{k}th component, or a \emph{d} by \emph{d} orthonormal matrix if the mixture components have a common orientation. The \code{orientation} component of \code{decomp} can be omitted in spherical and diagonal models, for which the principal components are parallel to the coordinate axes so that the orientation matrix is the identity. } \item{\dots}{ Catches unused arguments from an indirect or list call via \code{do.call}. } } \value{ A 3-D array whose \code{[,,k]}th component is the covariance matrix of the \emph{k}th component in an MVN mixture model. } \seealso{ \code{\link{sigma2decomp}} } \examples{ meEst <- meVEV(iris[,-5], unmap(iris[,5])) names(meEst) meEst$parameters$variance dec <- meEst$parameters$variance decomp2sigma(d=dec$d, G=dec$G, shape=dec$shape, scale=dec$scale, orientation = dec$orientation) \dontrun{ do.call("decomp2sigma", dec) ## alternative call } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/MclustDA.Rd0000644000176200001440000002064313175053251014306 0ustar liggesusers\name{MclustDA} \alias{MclustDA} \alias{print.MclustDA} \title{MclustDA discriminant analysis} \description{ Discriminant analysis based on Gaussian finite mixture modeling. } \usage{ MclustDA(data, class, G = NULL, modelNames = NULL, modelType = c("MclustDA", "EDDA"), prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), verbose = interactive(), \dots) } \arguments{ \item{data}{ A data frame or matrix giving the training data.
} \item{class}{ A vector giving the class labels for the observations in the training data.} \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated within each class. The default is \code{G = 1:5}.\cr A different set of mixture components for each class can be specified by providing this argument with a list of integers for each class. See the examples below. } \item{modelNames}{ A vector of character strings indicating the models to be fitted by EM within each class (see the description in \code{\link{mclustModelNames}}). A different set of mixture models for each class can be specified by providing this argument with a list of character strings. See the examples below. } \item{modelType}{ A character string specifying whether the models given in \code{modelNames} should fit a different number of mixture components and covariance structures for each class (\code{"MclustDA"}, the default) or should be constrained to have a single component for each class with the same covariance structure among classes (\code{"EDDA"}). See Details section and the examples below. } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{\link{priorControl}}. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{hc}. The default is to compute a hierarchical clustering tree by applying function \code{hc} with \code{modelName = "E"} to univariate data and \code{modelName = "VVV"} to multivariate data or a subset as indicated by the \code{subset} argument. 
The hierarchical clustering results are used as starting values for EM.} \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase.} } } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by \code{\link{mclust.options}}. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise. } \item{\dots }{Further arguments passed to or from other methods.} } \value{ An object of class \code{'MclustDA'} providing the optimal (according to BIC) mixture model. The details of the output components are as follows: \item{call}{The matched call.} \item{data}{The input data matrix.} \item{class}{The input class labels.} \item{type}{A character string specifying the \code{modelType} estimated.} \item{models}{A list of \code{\link{Mclust}} objects containing information on the fitted model for each class.} \item{n}{The total number of observations in the data.} \item{d}{The dimension of the data.} % \item{BIC}{All BIC values.} \item{bic}{Optimal BIC value.} \item{loglik}{Log-likelihood for the selected model.} \item{df}{Number of estimated parameters.} } \details{ The \code{"EDDA"} method for discriminant analysis is described in Bensmail and Celeux (1996), while \code{"MclustDA"} in Fraley and Raftery (2002). } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L.
(2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. Bensmail, H., and Celeux, G. (1996) Regularized Gaussian Discriminant Analysis Through Eigenvalue Decomposition.\emph{Journal of the American Statistical Association}, 91, 1743-1748. } \author{Luca Scrucca} \seealso{ \code{\link{summary.MclustDA}}, \code{\link{plot.MclustDA}}, \code{\link{predict.MclustDA}}, \code{\link{classError}} } \examples{ odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] # common EEE covariance structure (which is essentially equivalent to linear discriminant analysis) irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # common covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # general covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train) summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) plot(irisMclustDA) plot(irisMclustDA, dimens = 3:4) plot(irisMclustDA, dimens = 4) plot(irisMclustDA, what = "classification") plot(irisMclustDA, what = "classification", newdata = X.test) plot(irisMclustDA, what = "classification", dimens = 3:4) plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "classification", dimens = 4) plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test, 
dimens = 3:4) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4) plot(irisMclustDA, what = "error") plot(irisMclustDA, what = "error", dimens = 3:4) plot(irisMclustDA, what = "error", dimens = 4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 3:4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 4) \dontrun{ # simulated 1D data n <- 250 set.seed(1) triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5)) triClass <- c(rep(1,n), rep(2,n), rep(3,n)) odd <- seq(from = 1, to = length(triModal), by = 2) even <- odd + 1 triMclustDA <- MclustDA(triModal[odd], triClass[odd]) summary(triMclustDA, parameters = TRUE) summary(triMclustDA, newdata = triModal[even], newclass = triClass[even]) plot(triMclustDA, what = "scatterplot") plot(triMclustDA, what = "classification") plot(triMclustDA, what = "classification", newdata = triModal[even]) plot(triMclustDA, what = "train&test", newdata = triModal[even]) plot(triMclustDA, what = "error") plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even]) # simulated 2D cross data data(cross) odd <- seq(from = 1, to = nrow(cross), by = 2) even <- odd + 1 crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1]) summary(crossMclustDA, parameters = TRUE) summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1]) plot(crossMclustDA, what = "scatterplot") plot(crossMclustDA, what = "classification") plot(crossMclustDA, what = "classification", newdata = cross[even,-1]) plot(crossMclustDA, what = "train&test", newdata = cross[even,-1]) plot(crossMclustDA, what = "error") plot(crossMclustDA, what = "error", newdata =cross[even,-1], newclass = cross[even,1]) } } \keyword{multivariate} mclust/man/mstep.Rd0000644000176200001440000000671513175053444013772 0ustar liggesusers\name{mstep} \alias{mstep} \title{ M-step for parameterized Gaussian 
mixture models. } \description{ Maximization step in the EM algorithm for parameterized Gaussian mixture models. } \usage{ mstep(modelName, data, z, prior = NULL, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by \code{mclust.options("warn")}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. 
See the help file for \code{\link{mclustVariance}} for details. } } } \item{Attributes:}{ \code{"info"} For those models with iterative M-steps (\code{"VEI"} and \code{"VEV"}), information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \note{ This function computes the M-step only for MVN mixtures, so in analyses involving noise, the conditional probabilities input should exclude those for the noise component. \cr In contrast to \code{me} for the EM algorithm, computations in \code{mstep} are carried out unless failure due to overflow would occur. To impose stricter tolerances on a single \code{mstep}, use \code{me} with the \emph{itmax} component of the \code{control} argument set to 1. } \seealso{ \code{\link{mstepE}}, \dots, \code{\link{mstepVVV}}, \code{\link{emControl}}, \code{\link{me}}, \code{\link{estep}}, \code{\link{mclust.options}}. } \examples{ \dontrun{ mstep(modelName = "VII", data = iris[,-5], z = unmap(iris[,5]))} } \keyword{cluster} mclust/man/nVarParams.Rd0000644000176200001440000000317313175052444014706 0ustar liggesusers\name{nVarParams} \alias{nVarParams} \title{ Number of Variance Parameters in Gaussian Mixture Models } \description{ Gives the number of variance parameters for parameterizations of the Gaussian mixture model that are used in MCLUST. } \usage{ nVarParams(modelName, d, G, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{d}{ The dimension of the data. Not used for models in which neither the shape nor the orientation varies. } \item{G}{ The number of components in the Gaussian mixture model used to compute \code{loglik}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ The number of variance parameters in the corresponding Gaussian mixture model. 
} \details{ To get the total number of parameters in model, add \code{G*d} for the means and \code{G-1} for the mixing proportions if they are unequal. } \references{ C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association 97:611:631}. C. Fraley, A. E. Raftery, T. B. Murphy and L. Scrucca (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington. } \seealso{ \code{\link{bic}}, \code{\link{nMclustParams}}. } \examples{ mapply(nVarParams, mclust.options("emModelNames"), d = 2, G = 3) } \keyword{cluster} mclust/man/predict.MclustDR.Rd0000644000176200001440000000363413175055104015760 0ustar liggesusers\name{predict.MclustDR} \alias{predict.MclustDR} \alias{predict2D.MclustDR} \title{Classify multivariate observations on a dimension reduced subspace by Gaussian finite mixture modeling} \description{Classify multivariate observations on a dimension reduced subspace estimated from a Gaussian finite mixture model.} \usage{ \method{predict}{MclustDR}(object, dim = 1:object$numdir, newdata, eval.points, \dots) } \arguments{ \item{object}{an object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}.}} \item{dim}{the dimensions of the reduced subspace used for prediction.} \item{newdata}{a data frame or matrix giving the data. If missing the data obtained from the call to \code{\link{MclustDR}} are used.} \item{eval.points}{a data frame or matrix giving the data projected on the reduced subspace. 
If provided, \code{newdata} is not used.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a list with the following components: \item{dir}{a matrix containing the data projected onto the \code{dim} dimensions of the reduced subspace.} \item{density}{densities from the mixture model for each data point.} \item{z}{a matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in \code{newdata} belongs to the \emph{k}th class.} \item{uncertainty}{The uncertainty associated with the classification.} \item{classification}{A vector of values giving the MAP classification.} } \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. } \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDR}}.} \examples{ mod = Mclust(iris[,1:4]) dr = MclustDR(mod) pred = predict(dr) str(pred) data(banknote) mod = MclustDA(banknote[,2:7], banknote$Status) dr = MclustDR(mod) pred = predict(dr) str(pred) } \keyword{multivariate} mclust/man/cdens.Rd0000644000176200001440000000600313175050450013716 0ustar liggesusers\name{cdens} \alias{cdens} \title{ Component Density for Parameterized MVN Mixture Models } \description{ Computes component densities for observations in MVN mixture models parameterized by eigenvalue decomposition. } \usage{ cdens(modelName, data, logarithm = FALSE, parameters, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{logarithm}{ A logical value indicating whether or not the logarithm of the component densities should be returned.
The default is to return the component densities, obtained from the log component densities by exponentiation. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric matrix whose \code{[i,k]}th entry is the density or log density of observation \emph{i} in component \emph{k}. The densities are not scaled by mixing proportions. } \note{ When one or more component densities are very large in magnitude, it may be possible to compute the logarithm of the component densities but not the component densities themselves due to overflow. 
} \seealso{ \code{\link{cdensE}}, \ldots, \code{\link{cdensVVV}}, \code{\link{dens}}, \code{\link{estep}}, \code{\link{mclustModelNames}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}}, \code{\link{do.call}} } \examples{ z2 <- unmap(hclass(hcVVV(faithful),2)) # initial value for 2 class case model <- me(modelName = "EEE", data = faithful, z = z2) cdens(modelName = "EEE", data = faithful, logarithm = TRUE, parameters = model$parameters)[1:5,] data(cross) odd <- seq(1, nrow(cross), by = 2) oddBIC <- mclustBIC(cross[odd,-1]) oddModel <- mclustModel(cross[odd,-1], oddBIC) ## best parameter estimates names(oddModel) even <- odd + 1 densities <- cdens(modelName = oddModel$modelName, data = cross[even,-1], parameters = oddModel$parameters) cbind(class = cross[even,1], densities)[1:5,] } \keyword{cluster} mclust/man/unmap.Rd0000644000176200001440000000331013175055342013745 0ustar liggesusers\name{unmap} \alias{unmap} \title{ Indicator Variables given Classification } \description{ Converts a classification into a matrix of indicator variables. } \usage{ unmap(classification, groups=NULL, noise=NULL, \dots) } \arguments{ \item{classification}{ A numeric or character vector. Typically the distinct entries of this vector would represent a classification of observations in a data set. } \item{groups}{ A numeric or character vector indicating the groups from which \code{classification} is drawn. If not supplied, the default is assumed to be the unique entries of \code{classification}. } \item{noise}{ A single numeric or character value used to indicate the value of \code{groups} corresponding to noise. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ An \emph{n} by \emph{m} matrix of \emph{(0,1)} indicator variables, where \emph{n} is the length of \code{classification} and \emph{m} is the number of unique values or symbols in \code{classification}. Columns are labeled by the unique values in \code{classification}, and the \code{[i,j]}th entry is \emph{1} if \code{classification[i]} is the \emph{j}th unique value or symbol of \code{classification} in sorted order. If a \code{noise} value or symbol is designated, the corresponding indicator variables are relocated to the last column of the matrix. } \seealso{ \code{\link{map}}, \code{\link{estep}}, \code{\link{me}} } \examples{ z <- unmap(iris[,5]) z[1:5, ] emEst <- me(modelName = "VVV", data = iris[,-5], z = z) emEst$z[1:5,] map(emEst$z) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/mclustModel.Rd0000644000176200001440000000667413175053361015124 0ustar liggesusers\name{mclustModel} \alias{mclustModel} \title{ Best model based on BIC } \description{ Determines the best model from clustering via \code{mclustBIC} for a given set of model parameterizations and numbers of components. } \usage{ mclustModel(data, BICvalues, G, modelNames, \dots) } \arguments{ \item{data}{ The matrix or vector of observations used to generate \code{BICvalues}. } \item{BICvalues}{ An \code{'mclustBIC'} object, which is the result of applying \code{mclustBIC} to \code{data}. } \item{G}{ A vector of integers giving the numbers of mixture components (clusters) from which the best model according to BIC will be selected (\code{as.character(G)} must be a subset of the row names of \code{BICvalues}). The default is to select the best model for all numbers of mixture components used to obtain \code{BICvalues}. } \item{modelNames}{ A vector of character strings giving the model parameterizations from which the best model according to BIC will be selected (\code{as.character(model)} must be a subset of the column names of \code{BICvalues}). The default is to select the best model for parameterizations used to obtain \code{BICvalues}. } \item{\dots}{ Not used. For generic/method consistency.
} } \value{ A list giving the optimal (according to BIC) parameters, conditional probabilities \code{z}, and log-likelihood, together with the associated classification and its uncertainty. The details of the output components are as follows: \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The number of components in the Gaussian mixture model corresponding to the optimal BIC. } \item{bic}{ The optimal BIC value. } \item{loglik}{ The log-likelihood corresponding to the optimal BIC. } \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{z}{ A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the test data belongs to the \emph{k}th class. 
} } \seealso{ \code{\link{mclustBIC}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) mclustModel(iris[,-5], irisBIC) mclustModel(iris[,-5], irisBIC, G = 1:6, modelNames = c("VII", "VVI", "VVV")) } \keyword{cluster} % docclass is function mclust/man/hcE.Rd0000644000176200001440000000606113175052531013327 0ustar liggesusers\name{hcE} \alias{hcE} \alias{hcV} \alias{hcEII} \alias{hcVII} \alias{hcEEE} \alias{hcVVV} \title{ Model-based Hierarchical Clustering } \description{ Agglomerative hierarchical clustering based on maximum likelihood for a Gaussian mixture model parameterized by eigenvalue decomposition. } \usage{ hcE(data, partition, minclus=1, \dots) hcV(data, partition, minclus = 1, alpha = 1, \dots) hcEII(data, partition, minclus = 1, \dots) hcVII(data, partition, minclus = 1, alpha = 1, \dots) hcEEE(data, partition, minclus = 1, \dots) hcVVV(data, partition, minclus = 1, alpha = 1, beta = 1, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{partition}{ A numeric or character vector representing a partition of observations (rows) of \code{data}. If provided, group merges will start with this partition. Otherwise, each observation is assumed to be in a cluster by itself at the start of agglomeration. } \item{minclus}{ A number indicating the number of clusters at which to stop the agglomeration. The default is to stop when all observations have been merged into a single cluster. } \item{alpha, beta}{ Additional tuning parameters needed for initializatiion in some models. For details, see Fraley 1998. The defaults provided are usually adequate. } \item{\dots}{ Catch unused arguments from a \code{do.call} call. 
} } \value{ A numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of agglomerative hierarchical clustering. } \details{ Most models have memory usage of the order of the square of the number of groups in the initial partition for fast execution. Some models, such as equal variance or \code{"EEE"}, do not admit a fast algorithm under the usual agglomerative hierarchical clustering paradigm. These use less memory but are much slower to execute. } \references{ J. D. Banfield and A. E. Raftery (1993). Model-based Gaussian and non-Gaussian Clustering. \emph{Biometrics 49:803-821}. C. Fraley (1998). Algorithms for model-based Gaussian hierarchical clustering. \emph{SIAM Journal on Scientific Computing 20:270-281}. C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association 97:611-631}. } \seealso{ \code{\link{hc}}, \code{\link{hclass}}, \code{\link{randomPairs}} } \examples{ hcTree <- hcEII(data = iris[,-5]) cl <- hclass(hcTree,c(2,3)) \dontrun{ par(pty = "s", mfrow = c(1,1)) clPairs(iris[,-5],cl=cl[,"2"]) clPairs(iris[,-5],cl=cl[,"3"]) par(mfrow = c(1,2)) dimens <- c(1,2) coordProj(iris[,-5], classification=cl[,"2"], dimens=dimens) coordProj(iris[,-5], classification=cl[,"3"], dimens=dimens) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/hc.Rd0000644000176200001440000000661413175052517013230 0ustar liggesusers\name{hc} \alias{hc} \alias{print.hc} \title{Model-based Hierarchical Clustering} \description{ Agglomerative hierarchical clustering based on maximum likelihood criteria for Gaussian mixture models parameterized by eigenvalue decomposition. } \usage{ hc(data, modelName = mclust.options("hcModelNames")[1], use = mclust.options("hcUse"), \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations.
Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{modelName}{ A character string indicating the model to be used.\cr Possible models are: \cr\cr \code{"E"}: equal variance (one-dimensional) \cr \code{"V"}: variable variance (one-dimensional) \cr \code{"EII"}: spherical, equal volume \cr \code{"VII"}: spherical, unequal volume \cr \code{"EEE"}: ellipsoidal, equal volume, shape, and orientation \cr \code{"VVV"}: ellipsoidal, varying volume, shape, and orientation.\cr\cr By default the first model listed in \code{mclust.options("hcModelNames")}, i.e. \code{"VVV"}, is used. } \item{use}{ A character string specifying the type of data/transformation to be used for model-based hierarchical clustering.\cr This is experimental and only useful for the initialization of the EM algorithm. By default it uses the method specified in \code{mclust.options("hcUse")}, which is set to \code{"SVD"}, i.e. the scaled SVD transformation of the data. } \item{\dots}{ Arguments for the method-specific \code{hc} functions. See for example \code{\link{hcE}}. } } \value{ A numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of agglomerative hierarchical clustering. } \details{ Most models have memory usage of the order of the square of the number of groups in the initial partition for fast execution. Some models, such as equal variance or \code{"EEE"}, do not admit a fast algorithm under the usual agglomerative hierarchical clustering paradigm. These use less memory but are much slower to execute. } \note{ If \code{modelName = "E"} (univariate with equal variances) or \code{modelName = "EII"} (multivariate with equal spherical covariances), then the method is equivalent to Ward's method for hierarchical clustering. } \references{ J. D. Banfield and A. E. Raftery (1993).
Model-based Gaussian and non-Gaussian Clustering. \emph{Biometrics 49:803-821}. C. Fraley (1998). Algorithms for model-based Gaussian hierarchical clustering. \emph{SIAM Journal on Scientific Computing 20:270-281}. C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association 97:611-631}. } \seealso{ \code{\link{hcE}},..., \code{\link{hcVVV}}, \code{\link{hclass}}, \code{\link{mclust.options}} } \examples{ hcTree <- hc(modelName = "VVV", data = iris[,-5]) cl <- hclass(hcTree,c(2,3)) \dontrun{ par(pty = "s", mfrow = c(1,1)) clPairs(iris[,-5],cl=cl[,"2"]) clPairs(iris[,-5],cl=cl[,"3"]) par(mfrow = c(1,2)) dimens <- c(1,2) coordProj(iris[,-5], dimens = dimens, classification=cl[,"2"]) coordProj(iris[,-5], dimens = dimens, classification=cl[,"3"]) } } \keyword{cluster} mclust/man/coordProj.Rd0000644000176200001440000001235713205036625014576 0ustar liggesusers\name{coordProj} \alias{coordProj} \title{ Coordinate projections of multidimensional data modeled by an MVN mixture. } \description{ Plots coordinate projections given multidimensional data and parameters of an MVN mixture model for the data. } \usage{ coordProj(data, dimens = c(1,2), parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "errors", "uncertainty"), addEllipses = TRUE, symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, CEX = 1, PCH = ".", main = FALSE, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{dimens}{ A vector of length 2 giving the integer dimensions of the desired coordinate projections. The default is \code{c(1,2)}, in which the first dimension is plotted against the second. 
} \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposed ellipses on the plot. The relevant components are as follows: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix in which the \code{[i,k]}th entry gives the probability of observation \emph{i} belonging to the \emph{k}th class. Used to compute \code{classification} and \code{uncertainty} if those arguments aren't available. } \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. If present, the argument \code{z} will be ignored. } \item{truth}{ A numeric or character vector giving a known classification of each data point. If \code{classification} or \code{z} is also present, this is used for displaying classification errors. } \item{uncertainty}{ A numeric vector of values in \emph{(0,1)} giving the uncertainty of each data point. If present, the argument \code{z} will be ignored. } \item{what}{ Choose from one of the following three options: \code{"classification"} (default), \code{"errors"}, \code{"uncertainty"}. } \item{addEllipses}{ A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of \code{"classification"} or \code{"uncertainty"} plots. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}).
The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{scale}{ A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: \code{scale=FALSE} } \item{xlim, ylim}{ Arguments specifying bounds for the abscissa and the ordinate, respectively, of the plot. This may be useful when comparing plots. } \item{CEX}{ An argument specifying the size of the plotting symbols. The default value is 1. } \item{PCH}{ An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".". } \item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{\dots}{ Other graphics parameters. } } \value{ A plot showing a two-dimensional coordinate projection of the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors.
} \seealso{ \code{\link{clPairs}}, \code{\link{randProj}}, \code{\link{mclust2Dplot}}, \code{\link{mclust.options}} } \examples{ \dontrun{ est <- meVVV(iris[,-5], unmap(iris[,5])) par(pty = "s", mfrow = c(1,1)) coordProj(iris[,-5], dimens=c(2,3), parameters = est$parameters, z = est$z, what = "classification", main = TRUE) coordProj(iris[,-5], dimens=c(2,3), parameters = est$parameters, z = est$z, truth = iris[,5], what = "errors", main = TRUE) coordProj(iris[,-5], dimens=c(2,3), parameters = est$parameters, z = est$z, what = "uncertainty", main = TRUE) } } \keyword{cluster} mclust/man/cdfMclust.Rd0000644000176200001440000000405213175050523014551 0ustar liggesusers\name{cdfMclust} \alias{cdfMclust} \alias{quantileMclust} \title{ Cumulative Distribution and Quantiles for a univariate Gaussian mixture distribution } \description{ Compute the cumulative density function (cdf) or quantiles from an estimated one-dimensional Gaussian mixture fitted using \code{\link{densityMclust}}.} \usage{ cdfMclust(object, data, ngrid = 100, \dots) quantileMclust(object, p, ...) } \arguments{ \item{object}{a \code{densityMclust} model object.} \item{data}{a numeric vector of evaluation points.} \item{ngrid}{the number of points in a regular grid to be used as evaluation points if no \code{data} are provided.} \item{p}{a numeric vector of probabilities.} \item{\dots}{further arguments passed to or from other methods.} } \details{The cdf is evaluated at points given by the optional argument \code{data}. If not provided, a regular grid of length \code{ngrid} for the evaluation points is used. The quantiles are computed using interpolating splines on an adaptive finer grid. } \value{ \code{cdfMclust} returns a list of \code{x} and \code{y} values providing, respectively, the evaluation points and the estimated cdf. \code{quantileMclust} returns a vector of quantiles. } \author{Luca Scrucca} \seealso{ \code{\link{densityMclust}}, \code{\link{plot.densityMclust}}. 
} \examples{ x <- c(rnorm(100), rnorm(100, 3, 2)) dens <- densityMclust(x) summary(dens, parameters = TRUE) cdf <- cdfMclust(dens) str(cdf) q <- quantileMclust(dens, p = c(0.01, 0.1, 0.5, 0.9, 0.99)) cbind(quantile = q, cdf = cdfMclust(dens, q)$y) plot(cdf, type = "l", xlab = "x", ylab = "CDF") points(q, cdfMclust(dens, q)$y, pch = 20, col = "red3") par(mfrow = c(2,2)) dens.waiting <- densityMclust(faithful$waiting) plot(dens.waiting) plot(cdfMclust(dens.waiting), type = "l", xlab = dens.waiting$varname, ylab = "CDF") dens.eruptions <- densityMclust(faithful$eruptions) plot(dens.eruptions) plot(cdfMclust(dens.eruptions), type = "l", xlab = dens.eruptions$varname, ylab = "CDF") par(mfrow = c(1,1)) } \keyword{cluster} \keyword{dplot} mclust/man/densityMclust.diagnostic.Rd0000644000176200001440000000467213205037244017626 0ustar liggesusers\name{densityMclust.diagnostic} \alias{densityMclust.diagnostic} \title{Diagnostic plots for \code{densityMclust} estimation} \description{ Diagnostic plots for density estimation. Only available for the one-dimensional case.
} \usage{ densityMclust.diagnostic(object, type = c("cdf", "qq"), col = c("black", "green4"), lwd = c(2,2), lty = c(1,2), legend = TRUE, grid = TRUE, main = TRUE, \dots) } \arguments{ \item{object}{An object of class \code{'densityMclust'} obtained from a call to the \code{\link{densityMclust}} function.} \item{type}{The type of graph requested: \describe{ \item{\code{"cdf"} =}{a plot of the estimated CDF versus the empirical distribution function.} \item{\code{"qq"} =}{a Q-Q plot of sample quantiles versus the quantiles obtained from the inverse of the estimated cdf.} } } \item{col}{A pair of values for the color to be used for plotting, respectively, the estimated CDF and the empirical cdf.} \item{lwd}{A pair of values for the line width to be used for plotting, respectively, the estimated CDF and the empirical cdf.} \item{lty}{A pair of values for the line type to be used for plotting, respectively, the estimated CDF and the empirical cdf.} \item{legend}{A logical indicating if a legend must be added to the plot of fitted CDF vs the empirical CDF.} \item{grid}{A logical indicating if a \code{\link{grid}} should be added to the plot.} \item{main}{A logical indicating if a title should be added to the plot.} \item{\dots}{Additional arguments.} } \details{ The two diagnostic plots for density estimation in the one-dimensional case are discussed in Loader (1999, pp. 87-90). } % \value{} \references{ Loader C. (1999), Local Regression and Likelihood. New York, Springer. C. Fraley, A. E. Raftery, T. B. Murphy and L. Scrucca (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington. } \author{Luca Scrucca} \seealso{ \code{\link{densityMclust}}, \code{\link{plot.densityMclust}}.
} \examples{ \dontrun{ x <- faithful$waiting dens <- densityMclust(x) plot(dens, x, what = "diagnostic") # or densityMclust.diagnostic(dens, type = "cdf") densityMclust.diagnostic(dens, type = "qq") } } \keyword{cluster} \keyword{dplot} mclust/man/cvMclustDA.Rd0000644000176200001440000000334713205036760014641 0ustar liggesusers\name{cvMclustDA} \alias{cvMclustDA} \title{MclustDA cross-validation} \description{ K-fold cross-validation for discriminant analysis based on Gaussian finite mixture modeling. } \usage{ cvMclustDA(object, nfold = 10, verbose = interactive(), \dots) } \arguments{ \item{object}{ An object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}. } \item{nfold}{ An integer specifying the number of folds. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the cross-validation procedure. By default is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise. } \item{\dots }{Further arguments passed to or from other methods.} } \value{ The function returns a list with the following components: \item{classification}{a factor of cross-validated class labels.} \item{error}{the cross-validation error.} \item{se}{the standard error of cv error.} } %\details{} \author{Luca Scrucca} \seealso{ \code{\link{summary.MclustDA}}, \code{\link{plot.MclustDA}}, \code{\link{predict.MclustDA}}, \code{\link{classError}} } \examples{ \dontrun{ X <- iris[,-5] Class <- iris[,5] # common EEE covariance structure (which is essentially equivalent to linear discriminant analysis) irisMclustDA <- MclustDA(X, Class, modelType = "EDDA", modelNames = "EEE") cv <- cvMclustDA(irisMclustDA) # default 10-fold CV cv[c("error", "se")] cv <- cvMclustDA(irisMclustDA, nfold = length(Class)) # LOO-CV cv[c("error", "se")] # compare with # cv1EMtrain(X, Class, "EEE") # general covariance structure selected by BIC irisMclustDA <- MclustDA(X, Class) cv <- cvMclustDA(irisMclustDA) # default 10-fold CV cv[c("error", "se")] } } 
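\note{ The \code{error} and \code{se} components may be used for an informal comparison of competing classifiers, e.g. by checking whether the difference in cross-validated errors is small relative to its standard error. The following sketch illustrates the idea (the object names are illustrative and, since the folds are chosen at random, results vary between runs):
\preformatted{
# compare the EDDA model and the general MclustDA model via 10-fold CV
mod1 <- MclustDA(iris[,-5], iris[,5], modelType = "EDDA")
mod2 <- MclustDA(iris[,-5], iris[,5])
cv1 <- cvMclustDA(mod1)
cv2 <- cvMclustDA(mod2)
# is the error difference within one standard error?
abs(cv1$error - cv2$error) <= max(cv1$se, cv2$se)
}
}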
\keyword{multivariate} mclust/man/combiPlot.Rd0000644000176200001440000000473313107071764014570 0ustar liggesusers\name{combiPlot} \alias{combiPlot} \title{ Plot Classifications Corresponding to Successive Combined Solutions } \description{ Plot classifications corresponding to successive combined solutions. } \usage{ combiPlot(data, z, combiM, \dots) } \arguments{ \item{data}{ The data. } \item{z}{ A matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class, for the initial solution (i.e. before any combining). Typically, the one returned by \code{Mclust}/BIC. } \item{combiM}{ A "combining matrix" (as provided by \code{\link{clustCombi}}), i.e. a matrix whose kth row contains only zeros, except in the columns corresponding to the labels of the classes in the initial solution to be merged together to get the combined solution. } \item{\dots}{ Other arguments to be passed to the \code{\link{Mclust}} plot functions. } } \value{ Plots the classifications obtained by MAP from the matrix \code{t(combiM \%*\% t(z))}, which is the matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class, according to the combined solution obtained by merging (according to \code{combiM}) the initial solution described by \code{z}. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L.
Scrucca } \seealso{ \code{\link{clustCombi}}, \code{\link{combMat}} } \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) MclustOutput <- Mclust(ex4.1) MclustOutput$G # Mclust/BIC selected 6 classes par(mfrow=c(2,2)) combiM0 = diag(6) # is the identity matrix # no merging: plot the initial solution, given by z combiPlot(ex4.1, MclustOutput$z, combiM0, cex = 3) title("No combining") combiM1 = combMat(6, 1, 2) # let's merge classes labeled 1 and 2 combiM1 combiPlot(ex4.1, MclustOutput$z, combiM1) title("Combine 1 and 2") # let's merge classes labeled 1 and 2, and then components labeled (in this # new 5-classes combined solution...) 1 and 2 combiM2 = combMat(5, 1, 2) \%*\% combMat(6, 1, 2) combiM2 combiPlot(ex4.1, MclustOutput$z, combiM2) title("Combine 1, 2 and then 1 and 2 again") plot(0,0,type="n", xlab = "", ylab = "", axes = FALSE) legend("center", legend = 1:6, col = mclust.options("classPlotColors"), pch = mclust.options("classPlotSymbols"), title = "Class labels:")} } \keyword{ cluster } mclust/man/mclust1Dplot.Rd0000644000176200001440000001211113205036606015213 0ustar liggesusers\name{mclust1Dplot} \alias{mclust1Dplot} \title{ Plot one-dimensional data modeled by an MVN mixture. } \description{ Plot one-dimensional data given parameters of an MVN mixture model for the data. } \usage{ mclust1Dplot(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "density", "errors", "uncertainty"), symbols = NULL, colors = NULL, ngrid = length(data), xlab = NULL, xlim = NULL, CEX = 1, main = FALSE, \dots) } \arguments{ \item{data}{ A numeric vector of observations. Categorical variables are not allowed. } \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model. The relevant components are as follows: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture.
There should be one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix in which the \code{[i,k]}th entry gives the probability of observation \emph{i} belonging to the \emph{k}th class. Used to compute \code{classification} and \code{uncertainty} if those arguments aren't available. } \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. If present, the argument \code{z} will be ignored. } \item{truth}{ A numeric or character vector giving a known classification of each data point. If \code{classification} or \code{z} is also present, this is used for displaying classification errors. } \item{uncertainty}{ A numeric vector of values in \emph{(0,1)} giving the uncertainty of each data point. If present, the argument \code{z} will be ignored. } \item{what}{ Choose from one of the following four options: \code{"classification"} (default), \code{"density"}, \code{"errors"}, \code{"uncertainty"}. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. Elements in \code{symbols} correspond to classes in \code{classification} in order of appearance in the observations (the order used by the function \code{unique}). The default is to use a single plotting symbol \emph{|}. Classes are delineated by showing them in separate lines above the whole of the data. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}.
Elements in \code{colors} correspond to classes in order of appearance in the observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{ngrid}{ Number of grid points to use for density computation over the interval spanned by the data. The default is the length of the data set. } \item{xlab}{ An argument specifying a label for the horizontal axis. } \item{xlim}{ An argument specifying bounds of the plot. This may be useful when comparing plots. } \item{CEX}{ An argument specifying the size of the plotting symbols. The default value is 1. } \item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{\dots}{ Other graphics parameters. } } \value{ A plot showing the location of the mixture components, classification, uncertainty, density and/or classification errors. Points in the different classes are shown in separate levels above the whole of the data. } \seealso{ \code{\link{mclust2Dplot}}, \code{\link{clPairs}}, \code{\link{coordProj}} } \examples{ \dontrun{ n <- 250 ## create artificial data set.seed(1) y <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5)) yclass <- c(rep(1,n), rep(2,n), rep(3,n)) yModel <- Mclust(y) mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z, what = "classification", main = TRUE) mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z, truth = yclass, what = "errors", main = TRUE) mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z, what = "density", main = TRUE) mclust1Dplot(y, z = yModel$z, parameters = yModel$parameters, what = "uncertainty", main = TRUE) } } \keyword{cluster} mclust/man/clPairs.Rd0000644000176200001440000000643113175055507014234 0ustar liggesusers\name{clPairs} \alias{clPairs} \alias{clPairsLegend} \title{Pairwise Scatter Plots showing Classification} \description{ Creates a scatter plot for each pair of variables in the given data.
Observations in different classes are represented by different colors and symbols. } \usage{ clPairs(data, classification, symbols, colors, labels = dimnames(data)[[2]], CEX = 1, gap = 0.2, \dots) clPairsLegend(x, y, class, col, pch, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{labels}{ A vector of character strings for labeling the variables. The default is to use the column dimension names of \code{data}. } \item{CEX}{ An argument specifying the size of the plotting symbols. The default value is 1. } \item{gap}{ An argument specifying the distance between subplots (see \code{\link{pairs}}). } \item{x,y}{ The x and y co-ordinates with respect to a graphic device having plotting region coordinates \code{par("usr" = c(0,1,0,1))}. } \item{class}{ The class labels. } \item{col, pch}{ The colors and plotting symbols appearing in the legend. } \item{\dots}{ For a \code{clPairs} call may be additional arguments to be passed to \code{\link{pairs}}. 
For a \code{clPairsLegend} call may be additional arguments to be passed to \code{\link{legend}}. } } \details{ The function \code{clPairs()} draws scatter plots on the current graphics device for each combination of variables in \code{data}. Observations of different classifications are labeled with different symbols. The function \code{clPairsLegend()} can be used to add a legend. See examples below. } \value{ The function \code{clPairs()} invisibly returns a list with the following components: \item{class}{A character vector of class labels.} \item{col}{A vector of colors used for each class.} \item{pch}{A vector of plotting symbols used for each class.} } \seealso{ \code{\link{pairs}}, \code{\link{coordProj}}, \code{\link{mclust.options}} } \examples{ clPairs(iris[,1:4], cl = iris$Species) clp <- clPairs(iris[,1:4], cl = iris$Species, lower.panel = NULL) clPairsLegend(0.1, 0.4, class = clp$class, col = clp$col, pch = clp$pch, title = "Iris data") } \keyword{cluster} mclust/man/banknote.Rd0000644000176200001440000000141012501077123014416 0ustar liggesusers\name{banknote} \alias{banknote} \docType{data} \title{Swiss banknotes data} \description{ The data set contains six measurements made on 100 genuine and 100 counterfeit old-Swiss 1000-franc bank notes.} \usage{data(banknote)} \format{A data frame with the following variables: \describe{ \item{Status}{the status of the banknote: \code{genuine} or \code{counterfeit}} \item{Length}{Length of bill (mm)} \item{Left}{Width of left edge (mm)} \item{Right}{Width of right edge (mm)} \item{Bottom}{Bottom margin width (mm)} \item{Top}{Top margin width (mm)} \item{Diagonal}{Length of diagonal (mm)} } } \source{Flury, B. and Riedwyl, H. (1988). \emph{Multivariate Statistics: A practical approach.} London: Chapman & Hall, Tables 1.1 and 1.2, pp. 
5-8.} \keyword{datasets} mclust/man/plot.mclustICL.Rd0000644000176200001440000000126113175055043015442 0ustar liggesusers\name{plot.mclustICL} \alias{plot.mclustICL} \title{ICL Plot for Model-Based Clustering} \description{ Plots the ICL values returned by the \code{\link{mclustICL}} function. } \usage{ \method{plot}{mclustICL}(x, ylab = "ICL", \dots) } \arguments{ \item{x}{ Output from \code{\link{mclustICL}}. } \item{ylab}{ Label for the vertical axis of the plot. } \item{\dots}{ Further arguments passed to the \code{\link{plot.mclustBIC}} function. } } \value{ A plot of the ICL values. } \seealso{ \code{\link{mclustICL}} } \examples{ \dontrun{ data(faithful) faithful.ICL = mclustICL(faithful) plot(faithful.ICL) } } \keyword{cluster} % docclass is function mclust/man/covw.Rd0000644000176200001440000000210313175055370013603 0ustar liggesusers\name{covw} \alias{covw} \title{Weighted means, covariance and scattering matrices conditioning on a weighted matrix.} \description{ Compute efficiently (via Fortran code) the means, covariance and scattering matrices conditioning on a weighted or indicator matrix } \usage{ covw(X, Z, normalize = TRUE) } \arguments{ \item{X}{A \eqn{(n x p)} data matrix, with \eqn{n} observations on \eqn{p} variables.} \item{Z}{A \eqn{(n x G)} matrix of weights, with \eqn{G} number of groups.} \item{normalize}{A logical indicating if rows of \code{Z} should be normalized to sum to one.} } \value{A list with the following components: \item{mean}{A \eqn{(p x G)} matrix of weighted means.} \item{S}{A \eqn{(p x p x G)} array of weighted covariance matrices.} \item{W}{A \eqn{(p x p x G)} array of weighted scattering matrices.} } %\seealso{} \author{M. Fop and L. 
Scrucca} \examples{ # Z as an indicator matrix X = iris[,1:4] Z = unmap(iris$Species) str(covw(X, Z)) # Z as a matrix of weights mod = Mclust(X, G = 3, modelNames = "VVV") str(covw(X, mod$z)) } \keyword{multivariate} mclust/man/summary.MclustDR.Rd0000644000176200001440000000214713175055251016024 0ustar liggesusers\name{summary.MclustDR} \alias{summary.MclustDR} \alias{print.summary.MclustDR} \title{Summarizing dimension reduction method for model-based clustering and classification} \description{Summary method for class \code{"MclustDR"}.} \usage{ \method{summary}{MclustDR}(object, numdir, std = FALSE, \dots) \method{print}{summary.MclustDR}(x, digits = max(5, getOption("digits") - 3), \dots) } \arguments{ \item{object}{An object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}}.} \item{x}{An object of class \code{'summary.MclustDR'}, usually, a result of a call to \code{summary.MclustDR}.} \item{numdir}{An integer providing the number of basis directions to be printed.} \item{std}{if \code{TRUE} the coefficients basis are scaled such that all predictors have unit standard deviation.} \item{digits}{The number of significant digits to use when printing.} \item{\dots}{Further arguments passed to or from other methods.} } %\details{} %\value{} \author{Luca Scrucca} %\note{} \seealso{ \code{\link{MclustDR}}, \code{\link{plot.MclustDR}} } %\examples{} %\keyword{} mclust/man/predict.MclustDA.Rd0000644000176200001440000000277113175055072015744 0ustar liggesusers\name{predict.MclustDA} \alias{predict.MclustDA} \title{Classify multivariate observations by Gaussian finite mixture modeling} \description{Classify multivariate observations based on Gaussian finite mixture models estimated by \code{\link{MclustDA}}.} \usage{ \method{predict}{MclustDA}(object, newdata, prior, \dots) } \arguments{ \item{object}{an object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}.} \item{newdata}{a data frame or matrix giving the data. 
If missing, the training data obtained from the call to \code{\link{MclustDA}} are classified.} \item{prior}{the prior probabilities of the classes; by default, this is set at the proportions in the training data.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a list with the following components: \item{classification}{a factor of predicted class labels for \code{newdata}.} \item{z}{a matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in \code{newdata} belongs to the \emph{k}th class.} } \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDA}}.} \examples{ \dontrun{ odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] irisMclustDA <- MclustDA(X.train, Class.train) predTrain <- predict(irisMclustDA) predTrain predTest <- predict(irisMclustDA, X.test) predTest } } \keyword{multivariate} mclust/man/thyroid.Rd0000644000176200001440000000331512504554764014313 0ustar liggesusers\name{thyroid} \alias{thyroid} \docType{data} \title{Thyroid gland data} \description{ Data on five laboratory tests administered to a sample of 215 patients. The tests are used to predict whether a patient's thyroid can be classified as euthyroidism (normal thyroid gland function), hypothyroidism (underactive thyroid not producing enough thyroid hormone) or hyperthyroidism (overactive thyroid producing and secreting excessive amounts of the free thyroid hormones T3 and/or thyroxine T4).
Diagnosis of thyroid operation was based on a complete medical record, including anamnesis, scan, etc.} \usage{data(thyroid)} \format{A data frame with the following variables: \describe{ \item{Diagnosis}{Diagnosis of thyroid operation: \code{Hypo}, \code{Normal}, and \code{Hyper}.} \item{RT3U}{T3-resin uptake test (percentage).} \item{T4}{Total serum thyroxin as measured by the isotopic displacement method.} \item{T3}{Total serum triiodothyronine as measured by radioimmunoassay.} \item{TSH}{Basal thyroid-stimulating hormone (TSH) as measured by radioimmunoassay.} \item{DTSH}{Maximal absolute difference of TSH value after injection of 200 micrograms of thyrotropin-releasing hormone as compared to the basal value.} } } \source{UCI \url{ftp://ftp.ics.uci.edu/pub/machine-learning-databases/thyroid-disease/}} \references{ Coomans, D., Broeckaert, M. Jonckheer M. and Massart D.L. (1983) Comparison of Multivariate Discriminant Techniques for Clinical Data - Application to the Thyroid Functional State, \emph{Meth. Inform. Med.} 22, pp. 93-101. Coomans, D. and I. Broeckaert (1986) \emph{Potential Pattern Recognition in Chemical and Medical Decision Making}, Research Studies Press, Letchworth, England. } \keyword{datasets} mclust/man/Mclust.Rd0000644000176200001440000002001113175414714014073 0ustar liggesusers\name{Mclust} \alias{Mclust} \alias{print.Mclust} \title{Model-Based Clustering} \description{ The optimal model according to BIC for EM initialized by hierarchical clustering for parameterized Gaussian mixture models. } \usage{ Mclust(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), x = NULL, verbose = interactive(), \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
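Since the thyroid dataset documented above ships without a usage example, here is a hedged sketch (assuming the mclust package is attached) of passing such a data matrix to clustering and comparing the resulting partition with the clinical diagnosis:

```r
# Illustrative sketch, not part of the original examples:
# cluster the five thyroid laboratory tests (all columns except Diagnosis)
library(mclust)
data(thyroid)
mod <- Mclust(thyroid[, -1])
# cross-tabulate the clinical diagnosis against the estimated clustering
table(thyroid$Diagnosis, mod$classification)
```

The cross-tabulation gives a quick check of how well the unsupervised partition recovers the three diagnostic groups.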
} \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated. The default is \code{G=1:9}. } \item{modelNames}{ A vector of character strings indicating the models to be fitted in the EM phase of clustering. The default is: \tabular{ll}{ for univariate data \tab \code{c("E", "V")} \cr for multivariate data (\eqn{n > d}) \tab \code{mclust.options("emModelNames")} \cr for multivariate data (\eqn{n <= d}), the spherical and diagonal models \tab \code{c("EII", "VII", "EEI", "EVI", "VEI", "VVI")} } The help file for \code{\link{mclustModelNames}} describes the available models. } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{\link{priorControl}}. \cr Note that, as described in \code{\link{defaultPrior}}, in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in \emph{MCLUST} up to version 4.4. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{hc}. For multivariate data, the default is to compute a hierarchical clustering tree by applying function \code{hc} with \code{modelName = "VVV"} to the data or a subset as indicated by the \code{subset} argument. The hierarchical clustering results are then used to start EM. For univariate data, the default is to use quantiles to start EM.} \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. By default no subset is used unless the number of observations exceeds the value specified by \code{mclust.options("subset")}.
Note that to guarantee exact reproducibility of results a seed must be specified (see \code{\link{set.seed}}).} \item{\code{noise}}{ A logical or numeric vector indicating an initial guess as to which observations are noise in the data. If numeric the entries should correspond to row indexes of the data. If supplied, a noise term will be added to the model in the estimation.} } } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued. The default is controlled by \code{\link{mclust.options}}. } \item{x}{ An object of class \code{'mclustBIC'}. If supplied, BIC values for models that have already been computed and are available in \code{x} are not recomputed. All arguments, with the exception of \code{data}, \code{G} and \code{modelNames}, are ignored and their values are set as specified in the attributes of \code{x}. Defaults for \code{G} and \code{modelNames} are taken from \code{x}. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ An object of class \code{'Mclust'} providing the optimal (according to BIC) mixture model estimation. The details of the output components are as follows: \item{call}{The matched call.} \item{data}{The input data matrix.} \item{modelName}{ A character string denoting the model at which the optimal BIC occurs. } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The optimal number of mixture components. } \item{BIC}{ All BIC values. } \item{bic}{ Optimal BIC value. } \item{loglik}{ The log-likelihood corresponding to the optimal BIC. } \item{df}{ The number of estimated parameters.
} \item{hypvol}{ The hypervolume parameter for the noise component if required, otherwise set to \code{NULL} (see \code{\link{hypvol}}). } \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose \emph{k}th column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th class. } \item{classification}{ The classification corresponding to \code{z}, i.e. \code{map(z)}. } \item{uncertainty}{ The uncertainty associated with the classification. } } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. Fraley C. and Raftery A. E. (2007) Bayesian regularization for normal mixture estimation and model-based clustering. \emph{Journal of Classification}, 24, 155-181.
} \seealso{ \code{\link{summary.Mclust}}, \code{\link{plot.Mclust}}, \code{\link{priorControl}}, \code{\link{emControl}}, \code{\link{hc}}, \code{\link{mclustBIC}}, \code{\link{mclustModelNames}}, \code{\link{mclust.options}} } \examples{ mod1 = Mclust(iris[,1:4]) summary(mod1) mod2 = Mclust(iris[,1:4], G = 3) summary(mod2, parameters = TRUE) # Using prior mod3 = Mclust(iris[,1:4], prior = priorControl()) summary(mod3) mod4 = Mclust(iris[,1:4], prior = priorControl(functionName="defaultPrior", shrinkage=0.1)) summary(mod4) # Clustering of faithful data with some artificial noise added nNoise = 100 set.seed(0) # to make it reproducible Noise = apply(faithful, 2, function(x) runif(nNoise, min = min(x)-.1, max = max(x)+.1)) data = rbind(faithful, Noise) plot(faithful) points(Noise, pch = 20, cex = 0.5, col = "lightgrey") set.seed(0) NoiseInit = sample(c(TRUE,FALSE), size = nrow(faithful)+nNoise, replace = TRUE, prob = c(3,1)/4) mod5 = Mclust(data, initialization = list(noise = NoiseInit)) summary(mod5, parameters = TRUE) plot(mod5, what = "classification") } \keyword{cluster} mclust/man/plot.MclustDA.Rd0000644000176200001440000001357113175053667015274 0ustar liggesusers\name{plot.MclustDA} \alias{plot.MclustDA} \title{ Plotting method for MclustDA discriminant analysis } \description{ Graphical tools for training and test data, known training data classification, MclustDA test data classification, and/or training errors. } \usage{ \method{plot}{MclustDA}(x, what = c("scatterplot", "classification", "train&test", "error"), newdata, newclass, dimens, symbols, colors, \dots) } \arguments{ \item{x}{ An object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}. } \item{what}{ The type of graph requested: \describe{ \item{\code{"scatterplot"} =}{a plot of training data with points marked based on the known classification.
Ellipses corresponding to covariances of mixture components are also drawn.} \item{\code{"classification"} =}{a plot of data with points marked based on the predicted classification; if \code{newdata} is provided then the test set is shown, otherwise the training set.} \item{\code{"train&test"} =}{a plot of training and test data with points marked according to the type of set.} \item{\code{"error"} =}{a plot of training set (or test set if \code{newdata} and \code{newclass} are provided) with misclassified points marked.} } } \item{newdata}{ A data frame or matrix for test data. } \item{newclass}{ A vector giving the class labels for the observations in the test data (if known). } \item{dimens}{ A vector of integers giving the dimensions of the desired coordinate projections for multivariate data. The default is to take all the available dimensions for plotting. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{factor}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{factor}). The default is given by \code{mclust.options("classPlotColors")}. } \item{\dots}{further arguments passed to or from other methods.} } %\value{} \details{ For more flexibility in plotting, use \code{mclust1Dplot}, \code{mclust2Dplot}, \code{surfacePlot}, \code{coordProj}, or \code{randProj}.
} \author{Luca Scrucca} \seealso{ \code{\link{MclustDA}}, \code{\link{surfacePlot}}, \code{\link{coordProj}}, \code{\link{randProj}} } \examples{ \dontrun{ odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] # common EEE covariance structure (which is essentially equivalent to linear discriminant analysis) irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # common covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # general covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train) summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) plot(irisMclustDA) plot(irisMclustDA, dimens = 3:4) plot(irisMclustDA, dimens = 4) plot(irisMclustDA, what = "classification") plot(irisMclustDA, what = "classification", newdata = X.test) plot(irisMclustDA, what = "classification", dimens = 3:4) plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "classification", dimens = 4) plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4) plot(irisMclustDA, what = "error") plot(irisMclustDA, what = "error", dimens = 3:4) plot(irisMclustDA, what = "error", dimens = 4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 3:4) plot(irisMclustDA, what 
= "error", newdata = X.test, newclass = Class.test, dimens = 4) # simulated 1D data n <- 250 set.seed(1) triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5)) triClass <- c(rep(1,n), rep(2,n), rep(3,n)) odd <- seq(from = 1, to = length(triModal), by = 2) even <- odd + 1 triMclustDA <- MclustDA(triModal[odd], triClass[odd]) summary(triMclustDA, parameters = TRUE) summary(triMclustDA, newdata = triModal[even], newclass = triClass[even]) plot(triMclustDA) plot(triMclustDA, what = "classification") plot(triMclustDA, what = "classification", newdata = triModal[even]) plot(triMclustDA, what = "train&test", newdata = triModal[even]) plot(triMclustDA, what = "error") plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even]) # simulated 2D cross data data(cross) odd <- seq(from = 1, to = nrow(cross), by = 2) even <- odd + 1 crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1]) summary(crossMclustDA, parameters = TRUE) summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1]) plot(crossMclustDA) plot(crossMclustDA, what = "classification") plot(crossMclustDA, what = "classification", newdata = cross[even,-1]) plot(crossMclustDA, what = "train&test", newdata = cross[even,-1]) plot(crossMclustDA, what = "error") plot(crossMclustDA, what = "error", newdata =cross[even,-1], newclass = cross[even,1]) } } \keyword{multivariate} mclust/man/mvn.Rd0000644000176200001440000000606613175053562013442 0ustar liggesusers\name{mvn} \alias{mvn} \title{ Univariate or Multivariate Normal Fit } \description{ Computes the mean, covariance, and log-likelihood from fitting a single Gaussian to given data (univariate or multivariate normal). } \usage{ mvn( modelName, data, prior = NULL, warn = NULL, \ldots) } \arguments{ \item{modelName}{ A character string representing a model name. 
This can be either \code{"Spherical"}, \code{"Diagonal"}, or \code{"Ellipsoidal"} or else \cr \code{"X"} for one-dimensional data,\cr \code{"XII"} for a spherical Gaussian, \cr \code{"XXI"} for a diagonal Gaussian \cr \code{"XXX"} for a general ellipsoidal Gaussian } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given by \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"WARNING"} An appropriate warning if problems are encountered in the computations. 
} } \seealso{ \code{\link{mvnX}}, \code{\link{mvnXII}}, \code{\link{mvnXXI}}, \code{\link{mvnXXX}}, \code{\link{mclustModelNames}} } \examples{ n <- 1000 set.seed(0) x <- rnorm(n, mean = -1, sd = 2) mvn(modelName = "X", x) mu <- c(-1, 0, 1) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% (2*diag(3)), MARGIN = 2, STATS = mu, FUN = "+") mvn(modelName = "XII", x) mvn(modelName = "Spherical", x) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% diag(1:3), MARGIN = 2, STATS = mu, FUN = "+") mvn(modelName = "XXI", x) mvn(modelName = "Diagonal", x) Sigma <- matrix(c(9,-4,1,-4,9,4,1,4,9), 3, 3) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% chol(Sigma), MARGIN = 2, STATS = mu, FUN = "+") mvn(modelName = "XXX", x) mvn(modelName = "Ellipsoidal", x) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/imputeData.Rd0000644000176200001440000000325713205036743014732 0ustar liggesusers\name{imputeData} \alias{imputeData} \alias{matchCluster} \title{Missing data imputation via the \pkg{mix} package} \description{ Imputes missing data using the \pkg{mix} package. } \usage{ imputeData(data, categorical = NULL, seed = NULL, verbose = interactive()) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations containing missing values. Categorical variables are allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{categorical}{ A logical vector whose \emph{i}th entry is \code{TRUE} if the \emph{i}th variable or column of \code{data} is to be interpreted as categorical and \code{FALSE} otherwise. The default is to assume that a variable is to be interpreted as categorical only if it is a factor. } \item{seed}{ A seed for the function \code{rngseed} that is used to initialize the random number generator in \pkg{mix}. By default, a seed is chosen uniformly in the interval \code{(.Machine$integer.max/1024, .Machine$integer.max)}.
} \item{verbose}{ A logical, if \code{TRUE} reports info about iterations of the algorithm. } } \value{ A dataset of the same dimensions as \code{data} with missing values filled in. } \references{ Schafer J. L. (1997). \emph{Analysis of Incomplete Multivariate Data}, Chapman and Hall. } \seealso{ \code{\link{imputePairs}} } \examples{ \dontrun{ # Note that package 'mix' must be installed data(stlouis, package = "mix") # impute the continuous variables in the stlouis data stlimp <- imputeData(stlouis[,-(1:3)]) # plot imputed values imputePairs(stlouis[,-(1:3)], stlimp) } } \keyword{cluster} mclust/man/diabetes.Rd0000644000176200001440000000144612577017527014415 0ustar liggesusers\name{diabetes} \alias{diabetes} \docType{data} \title{Diabetes data} \description{The data set contains three measurements made on 145 non-obese adult patients classified into three groups.} \usage{data(diabetes)} \format{A data frame with the following variables: \describe{ \item{class}{The type of diabetes: \code{Normal}, \code{Overt}, and \code{Chemical}.} \item{glucose}{Area under plasma glucose curve after a three hour oral glucose tolerance test (OGTT).} \item{insulin}{Area under plasma insulin curve after a three hour oral glucose tolerance test (OGTT).} \item{sspg}{Steady state plasma glucose.} } } \source{Reaven, G. M. and Miller, R. G. (1979). An attempt to define the nature of chemical diabetes using a multidimensional analysis. \emph{Diabetologia} 16:17-24.} \keyword{datasets} mclust/man/clustCombiOptim.Rd0000644000176200001440000000415213107072272015743 0ustar liggesusers\name{clustCombiOptim} \alias{clustCombiOptim} \title{Optimal number of clusters obtained by combining mixture components} \description{ Returns the optimal number of clusters by combining mixture components based on the entropy method discussed in the reference given below.
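The diabetes dataset documented above has no usage example; here is a minimal hedged sketch (assuming the mclust package is attached) of relating the three measurements to the known clinical groups:

```r
# Illustrative sketch, not part of the original examples:
library(mclust)
data(diabetes)
mod <- Mclust(diabetes[, -1], G = 3)       # cluster glucose, insulin, sspg
table(diabetes$class, mod$classification)  # compare with the known classes
```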
} \usage{ clustCombiOptim(object, reg = 2, plot = FALSE, \dots) } \arguments{ \item{object}{ An object of class \code{'clustCombi'} resulting from a call to \code{\link{clustCombi}}. } \item{reg}{ The number of parts of the piecewise linear regression for the entropy plots. Choose 2 for a two-segment piecewise linear regression model (i.e. 1 change-point), and 3 for a three-segment piecewise linear regression model (i.e. 2 change-points). } \item{plot}{ Logical, if \code{TRUE} an entropy plot is also produced. } \item{\dots}{Further arguments passed to or from other methods.} } \value{ The function returns a list with the following components: \item{numClusters.combi}{The estimated number of clusters.} \item{z.combi}{A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th cluster.} \item{cluster.combi}{The clustering labels.} } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics}, 19(2):332-353. } \author{ J.-P. Baudry, A. E. Raftery, L.
Scrucca } \seealso{ \code{\link{combiPlot}}, \code{\link{entPlot}}, \code{\link{clustCombi}} } \examples{ data(Baudry_etal_2010_JCGS_examples) output <- clustCombi(data = ex4.1) combiOptim <- clustCombiOptim(output) str(combiOptim) # plot optimal clustering with alpha color transparency proportional to uncertainty zmax <- apply(combiOptim$z.combi, 1, max) col <- mclust.options("classPlotColors")[combiOptim$cluster.combi] vadjustcolor <- Vectorize(adjustcolor) alphacol = (zmax - 1/combiOptim$numClusters.combi)/(1-1/combiOptim$numClusters.combi) col <- vadjustcolor(col, alpha.f = alphacol) plot(ex4.1, col = col, pch = mclust.options("classPlotSymbols")[combiOptim$cluster.combi]) } \keyword{ cluster } mclust/man/MclustDR.Rd0000644000176200001440000001151213175053277014332 0ustar liggesusers\name{MclustDR} \alias{MclustDR} \alias{print.MclustDR} \title{Dimension reduction for model-based clustering and classification} \description{ A dimension reduction method for visualizing the clustering or classification structure obtained from a finite mixture of Gaussian densities. } \usage{ MclustDR(object, normalized = TRUE, Sigma, lambda = 0.5, tol = sqrt(.Machine$double.eps)) } \arguments{ \item{object}{An object of class \code{'Mclust'} or \code{'MclustDA'} resulting from a call to, respectively, \code{\link{Mclust}} or \code{\link{MclustDA}}. } \item{normalized}{Logical. If \code{TRUE}, directions are normalized to unit norm. } \item{Sigma}{Marginal covariance matrix of the data. If not provided, it is estimated by the MLE of the observed data. } \item{lambda}{A tuning parameter in the range [0,1] described in Scrucca (2014). The default 0.5 gives equal importance to differences in means and covariances among clusters/classes. To recover the directions that mostly separate the estimated clusters or classes, set this parameter to 1.
} \item{tol}{A tolerance value.} } \details{ The method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the clustering or classification structure contained in the data. Information on the dimension reduction subspace is obtained from the variation on group means and, depending on the estimated mixture model, on the variation on group covariances (see Scrucca, 2010). Observations may then be projected onto such a reduced subspace, thus providing summary plots which help to visualize the underlying structure. The method has been extended to the supervised case, i.e. when the true classification is known (see Scrucca, 2014). This implementation does not provide a formal procedure for the selection of dimensionality. A future release will include one or more methods. } \value{ An object of class \code{'MclustDR'} with the following components: \item{call}{The matched call.} \item{type}{A character string specifying the type of model for which the dimension reduction is computed. Currently, possible values are \code{"Mclust"} for clustering, and \code{"MclustDA"} or \code{"EDDA"} for classification.} \item{x}{The data matrix.} \item{Sigma}{The covariance matrix of the data.} \item{mixcomp}{A numeric vector specifying the mixture component of each data observation.} \item{class}{A factor specifying the classification of each data observation. For model-based clustering this is equivalent to the corresponding mixture component. For model-based classification this is the known classification.} \item{G}{The number of mixture components.} \item{modelName}{The name of the parameterization of the estimated mixture model(s).
See \code{\link{mclustModelNames}}.} \item{mu}{A matrix of means for each mixture component.} \item{sigma}{An array of covariance matrices for each mixture component.} \item{pro}{The estimated prior for each mixture component.} \item{M}{The kernel matrix.} \item{lambda}{The tuning parameter.} \item{evalues}{The eigenvalues from the generalized eigen-decomposition of the kernel matrix.} \item{raw.evectors}{The raw eigenvectors from the generalized eigen-decomposition of the kernel matrix, ordered according to the eigenvalues.} \item{basis}{The basis of the estimated dimension reduction subspace.} \item{std.basis}{The basis of the estimated dimension reduction subspace standardized to variables having unit standard deviation.} \item{numdir}{The dimension of the projection subspace.} \item{dir}{The estimated directions, i.e. the data projected onto the estimated dimension reduction subspace.} } \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. \emph{Advances in Data Analysis and Classification}, 8(2), pp. 147-165. } \author{Luca Scrucca} %\note{} \seealso{ \code{\link{summary.MclustDR}}, \code{\link{plot.MclustDR}}, \code{\link{Mclust}}, \code{\link{MclustDA}}. 
} \examples{ # clustering data(diabetes) mod = Mclust(diabetes[,-1]) summary(mod) dr = MclustDR(mod) summary(dr) plot(dr, what = "scatterplot") plot(dr, what = "evalues") # adjust the tuning parameter to show the most separating directions dr1 = MclustDR(mod, lambda = 1) summary(dr1) plot(dr1, what = "scatterplot") plot(dr1, what = "evalues") # classification data(banknote) da = MclustDA(banknote[,2:7], banknote$Status, modelType = "EDDA") dr = MclustDR(da) summary(dr) da = MclustDA(banknote[,2:7], banknote$Status) dr = MclustDR(da) summary(dr) } \keyword{multivariate} mclust/man/summary.MclustDA.Rd0000644000176200001440000000262413175055240016001 0ustar liggesusers\name{summary.MclustDA} \alias{summary.MclustDA} \alias{print.summary.MclustDA} \title{Summarizing discriminant analysis based on Gaussian finite mixture modeling.} \description{Summary method for class \code{"MclustDA"}.} \usage{ \method{summary}{MclustDA}(object, parameters = FALSE, newdata, newclass, \dots) \method{print}{summary.MclustDA}(x, digits = getOption("digits"), \dots) } \arguments{ \item{object}{An object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}.} \item{x}{An object of class \code{'summary.MclustDA'}, usually, a result of a call to \code{summary.MclustDA}.} \item{parameters}{Logical; if \code{TRUE}, the parameters of mixture components are printed.} \item{newdata}{A data frame or matrix giving the test data.} \item{newclass}{A vector giving the class labels for the observations in the test data.} \item{digits}{The number of significant digits to use when printing.} \item{\dots}{Further arguments passed to or from other methods.} } % \details{} \value{The function \code{summary.MclustDA} computes and returns a list of summary statistics of the estimated MclustDA or EDDA model for classification.} \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDA}}, \code{\link{plot.MclustDA}}.} \examples{ mod = MclustDA(data = iris[,1:4], class = iris$Species) 
summary(mod) summary(mod, parameters = TRUE) } \keyword{multivariate} mclust/man/summary.MclustBootstrap.Rd0000644000176200001440000000254113175055232017471 0ustar liggesusers\name{summary.MclustBootstrap} \alias{summary.MclustBootstrap} \alias{print.summary.MclustBootstrap} \title{Summary Function for Bootstrap Inference for Gaussian Finite Mixture Models} \description{Summary of bootstrap distribution for the parameters of a Gaussian mixture model providing either standard errors or percentile bootstrap confidence intervals.} \usage{ \method{summary}{MclustBootstrap}(object, what = c("se", "ci"), conf.level = 0.95, \dots) } \arguments{ \item{object}{An object of class \code{'MclustBootstrap'} as returned by \code{\link{MclustBootstrap}}.} \item{what}{A character string: \code{"se"} for the standard errors; \code{"ci"} for the confidence intervals.} \item{conf.level}{A value specifying the confidence level of the interval.} \item{\dots}{Further arguments passed to or from other methods.} } \details{For details about the procedure used to obtain the bootstrap distribution see \code{\link{MclustBootstrap}}.} %\value{} \seealso{\code{\link{MclustBootstrap}}.} \examples{ \dontrun{ data(diabetes) X = diabetes[,-1] modClust = Mclust(X) bootClust = MclustBootstrap(modClust) summary(bootClust, what = "se") summary(bootClust, what = "ci") data(acidity) modDens = densityMclust(acidity) modDens = MclustBootstrap(modDens) summary(modDens, what = "se") summary(modDens, what = "ci") } } \keyword{htest} \keyword{cluster} mclust/man/plot.Mclust.Rd0000644000176200001440000000555413175053646015072 0ustar liggesusers\name{plot.Mclust} \alias{plot.Mclust} \title{ Plot Model-Based Clustering Results } \description{ Plot model-based clustering results: BIC, classification, uncertainty and (for univariate and bivariate data) density. 
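As a sketch of typical use (assuming the mclust package is attached), a single plot type can be requested through the \code{what} argument instead of cycling through all four:

```r
# Illustrative sketch, not part of the original examples:
library(mclust)
mod <- Mclust(faithful)
plot(mod, what = "BIC")             # model selection curve only
plot(mod, what = "classification")  # clustering in the original coordinates
```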
} \usage{ \method{plot}{Mclust}(x, what = c("BIC", "classification", "uncertainty", "density"), dimens = NULL, xlab = NULL, ylab = NULL, ylim = NULL, addEllipses = TRUE, main = TRUE, \dots) } \arguments{ \item{x}{ Output from \code{Mclust}. } \item{what}{ The type of graph requested: \code{"BIC"}, \code{"classification"}, \code{"uncertainty"}, or \code{"density"}. By default, all the above graphs are produced. See the value section below. } \item{dimens}{ A vector of length one or two giving the integer dimensions of the desired coordinate projections for multivariate data in case of \code{"classification"} or \code{"uncertainty"} plots. } \item{xlab, ylab}{ Optional labels for the x-axis and the y-axis. } \item{ylim}{ Optional limits for the vertical axis of the BIC plot. } \item{addEllipses}{ A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of \code{"classification"} or \code{"uncertainty"} plots. } \item{main}{ A logical or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{\dots}{ Other graphics parameters. } } \value{ Model-based clustering plots: \describe{ \item{\code{"BIC"} =}{BIC values used for choosing the number of clusters.} \item{\code{"classification"} =}{a plot showing the clustering. For data in more than two dimensions a pairs plot is produced, followed by a coordinate projection plot using specified \code{dimens}.} \item{\code{"uncertainty"} =}{a plot of classification uncertainty. For data in more than two dimensions a coordinate projection plot is drawn using specified \code{dimens}.} \item{\code{"density"} =}{a plot of estimated density.
For two-dimensional data a contour plot is drawn, while for data in more than two dimensions a matrix of contours for pairs of variables is produced.}
}
}
\details{
For more flexibility in plotting, use \code{mclust1Dplot}, \code{mclust2Dplot}, \code{surfacePlot}, \code{coordProj}, or \code{randProj}.
}
\seealso{
\code{\link{Mclust}}, \code{\link{plot.mclustBIC}}, \code{\link{plot.mclustICL}}, \code{\link{mclust1Dplot}}, \code{\link{mclust2Dplot}}, \code{\link{surfacePlot}}, \code{\link{coordProj}}, \code{\link{randProj}}.
}
\examples{
precipMclust <- Mclust(precip)
plot(precipMclust)
faithfulMclust <- Mclust(faithful)
plot(faithfulMclust)
irisMclust <- Mclust(iris[,-5])
plot(irisMclust)
}
\keyword{cluster}
% docclass is function
mclust/man/uncerPlot.Rd
\name{uncerPlot}
\alias{uncerPlot}
\title{Uncertainty Plot for Model-Based Clustering}
\description{
Displays the uncertainty in converting a conditional probability from EM to a classification in model-based clustering.
}
\usage{
uncerPlot(z, truth, \dots)
}
\arguments{
\item{z}{
A matrix whose \emph{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture.
}
\item{truth}{
A numeric or character vector giving the true classification of the data.
}
\item{\dots}{
Provided so that lists with elements other than the arguments can be passed in indirect or list calls via \code{do.call}.
}
}
\value{
A plot of the uncertainty profile of the data, with uncertainties in increasing order of magnitude. If \code{truth} is supplied and the number of classes is the same as the number of columns of \code{z}, the uncertainty of the misclassified data is marked by vertical lines on the plot.
}
\details{
When \code{truth} is provided and the number of classes is compatible with \code{z}, the function \code{compareClass} is used to find the best correspondence between classes in \code{truth} and \code{z}.
}
\seealso{
\code{\link{mclustBIC}}, \code{\link{em}}, \code{\link{me}}, \code{\link{mapClass}}
}
\examples{
irisModel3 <- Mclust(iris[,-5], G = 3)
uncerPlot(z = irisModel3$z)
uncerPlot(z = irisModel3$z, truth = iris[,5])
}
\keyword{cluster}
% docclass is function
mclust/man/sigma2decomp.Rd
\name{sigma2decomp}
\alias{sigma2decomp}
\title{Convert mixture component covariances to decomposition form}
\description{
Converts a set of covariance matrices from representation as a 3-D array to a parameterization by eigenvalue decomposition.
}
\usage{
sigma2decomp(sigma, G = NULL, tol = sqrt(.Machine$double.eps), \dots)
}
\arguments{
\item{sigma}{
Either a 3-D array whose \code{[,,k]}th component is the covariance matrix for the \emph{k}th component in an MVN mixture model, or a single covariance matrix in the case that all components have the same covariance.
}
\item{G}{
The number of components in the mixture. When \code{sigma} is a 3-D array, the number of components can be inferred from its dimensions.
}
\item{tol}{
Tolerance for determining whether or not the covariances have equal volume, shape, and/or orientation. The default is the square root of the relative machine precision, \code{sqrt(.Machine$double.eps)}, which is about \code{1.e-8}.
}
\item{\dots}{
Catches unused arguments from an indirect or list call via \code{do.call}.
}
}
\value{
The covariance matrices for the mixture components in decomposition form, including the following components:
\item{modelName}{
A character string indicating the inferred model. The help file for \code{\link{mclustModelNames}} describes the available models.
}
\item{d}{
The dimension of the data.
}
\item{G}{
The number of components in the mixture model.
}
\item{scale}{
Either a \emph{G}-vector giving the scale of the covariance (the \emph{d}th root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component.
}
\item{shape}{
Either a \emph{d} by \emph{G} matrix in which the \emph{k}th column is the shape of the covariance matrix (normalized to have determinant 1) for the \emph{k}th component, or a \emph{d}-vector giving a common shape for all components.
}
\item{orientation}{
Either a \emph{d} by \emph{d} by \emph{G} array whose \code{[,,k]}th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the \emph{k}th component, or a \emph{d} by \emph{d} orthonormal matrix if the mixture components have a common orientation. The \code{orientation} component of \code{decomp} can be omitted in spherical and diagonal models, for which the principal components are parallel to the coordinate axes so that the orientation matrix is the identity.
}
}
\seealso{
\code{\link{decomp2sigma}}
}
\examples{
meEst <- meEEE(iris[,-5], unmap(iris[,5]))
names(meEst$parameters$variance)
meEst$parameters$variance$Sigma
sigma2decomp(meEst$parameters$variance$Sigma, G = length(unique(iris[,5])))
}
\keyword{cluster}
% docclass is function
% Converted by Sd2Rd version 1.21.
mclust/man/nMclustParams.Rd
\name{nMclustParams}
\alias{nMclustParams}
\title{Number of Estimated Parameters in Gaussian Mixture Models}
\description{
Gives the number of estimated parameters for parameterizations of the Gaussian mixture model that are used in MCLUST.
}
\usage{
nMclustParams(modelName, d, G, noise = FALSE, equalPro = FALSE, \dots)
}
\arguments{
\item{modelName}{
A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models.
}
\item{d}{
The dimension of the data.
Not used for models in which neither the shape nor the orientation varies.
}
\item{G}{
The number of components in the Gaussian mixture model.
}
\item{noise}{
A logical variable indicating whether or not the model includes an optional Poisson noise component.
}
\item{equalPro}{
A logical variable indicating whether or not the components in the model are assumed to be present in equal proportion.
}
\item{\dots}{
Catches unused arguments in indirect or list calls via \code{do.call}.
}
}
\value{
The total number of estimated parameters in the corresponding Gaussian mixture model.
}
\details{
The total is obtained by adding to the number of variance parameters (see \code{\link{nVarParams}}) \code{G*d} parameters for the means and \code{G-1} for the mixing proportions if they are unequal.
}
\seealso{
\code{\link{bic}}, \code{\link{nVarParams}}.
}
\examples{
mapply(nMclustParams, mclust.options("emModelNames"), d = 2, G = 3)
}
\keyword{cluster}
mclust/man/mclust.options.Rd
\name{mclust.options}
\alias{mclust.options}
\title{Default values for use with MCLUST package}
\description{Set or retrieve default values for use with MCLUST package.}
\usage{
mclust.options(\dots)
}
\arguments{
\item{\dots}{
One or more arguments provided in the \code{name = value} form, or no argument at all may be given.
\cr
Available arguments are described in the Details section below.}
}
\details{
\code{mclust.options} is provided for assigning values to the \code{.mclust} variable list, which is used to supply default values to various functions in \code{MCLUST}.
\cr
Available options are:
\describe{
\item{\code{emModelNames}}{
A vector of 3-character strings associated with the multivariate models for which EM estimation is available in MCLUST.
\cr
The current default is all of the multivariate mixture models supported in MCLUST. The help file for \code{\link{mclustModelNames}} describes the available models.
}
\item{\code{hcModelNames}}{
A vector of character strings associated with the multivariate models for which model-based hierarchical clustering is available in MCLUST.
\cr
The available models are the following:\cr
\code{"EII"} = spherical, equal volume \cr
\code{"EEE"} = ellipsoidal, equal volume, shape, and orientation \cr
\code{"VII"} = spherical, unequal volume \cr
\code{"VVV"} = ellipsoidal, varying volume, shape, and orientation. \cr
The last model in this list is used as the default for initializing the EM algorithm.
}
\item{\code{hcUse}}{
A string specifying the type of input variables to be used in the model-based hierarchical clustering that starts the EM algorithm. Possible values are:\cr
\code{"VARS"} = original variables;\cr
\code{"STD"} = standardized variables;\cr
\code{"SPH"} = sphered variables (centered, scaled, uncorrelated) computed using SVD;\cr
\code{"PCS"} = principal components computed using SVD on centered variables (i.e. using the covariance matrix);\cr
\code{"PCR"} = principal components computed using SVD on standardized (centered and scaled) variables (i.e. using the correlation matrix);\cr
\code{"SVD"} = scaled SVD transformation (default).\cr
For further details see Scrucca and Raftery (2015), Scrucca et al. (2016).
}
\item{\code{subset}}{
A value specifying the maximal sample size to be used in the model-based hierarchical clustering that starts the EM algorithm. If the data sample size exceeds this value, a random sample of the size specified by \code{subset} is drawn.
}
\item{\code{bicPlotSymbols}}{
A vector whose entries correspond to graphics symbols for plotting the BIC values output from \code{\link{Mclust}} and \code{\link{mclustBIC}}. These are displayed in the legend which appears at the lower right of the BIC plots.
}
\item{\code{bicPlotColors}}{
A vector whose entries correspond to colors for plotting the BIC curves from the output of \code{\link{Mclust}} and \code{\link{mclustBIC}}.
These are displayed in the legend which appears at the lower right of the BIC plots. } \item{\code{classPlotSymbols}}{ A vector whose entries are either integers corresponding to graphics symbols or single characters for indicating classifications when plotting data. Classes are assigned symbols in the given order. } \item{\code{classPlotColors}}{ A vector whose entries correspond to colors for indicating classifications when plotting data. Classes are assigned colors in the given order. } \item{\code{warn}}{ A logical value indicating whether or not to issue certain warnings. Most of these warnings have to do with situations in which singularities are encountered. The default is \code{warn = FALSE}. } } The parameter values set via a call to this function will remain in effect for the rest of the session, affecting the subsequent behaviour of the functions for which the given parameters are relevant. } \value{ If the argument list is empty the function returns the current list of values. If the argument list is not empty, the returned list is invisible. } \seealso{ \code{\link{Mclust}}, \code{\link{MclustDA}}, \code{\link{densityMclust}}, \code{\link{emControl}} } \references{ Scrucca L. and Raftery A. E. (2015) Improved initialisation of model-based clustering using Gaussian hierarchical partitions. \emph{Advances in Data Analysis and Classification}, 9/4, pp. 447-460. Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. 
}
\examples{
opt <- mclust.options() # save default values
irisBIC <- mclustBIC(iris[,-5])
summary(irisBIC, iris[,-5])
mclust.options(emModelNames = c("EII", "EEI", "EEE"))
irisBIC <- mclustBIC(iris[,-5])
summary(irisBIC, iris[,-5])
mclust.options(opt) # restore default values
mclust.options()

oldpar <- par(mfrow = c(2,1), no.readonly = TRUE)
n <- with(mclust.options(), max(sapply(list(bicPlotSymbols, bicPlotColors), length)))
plot(seq(n), rep(1,n), ylab = "", xlab = "", yaxt = "n",
     pch = mclust.options("bicPlotSymbols"),
     col = mclust.options("bicPlotColors"))
title("mclust.options(\"bicPlotSymbols\") \n mclust.options(\"bicPlotColors\")")
n <- with(mclust.options(), max(sapply(list(classPlotSymbols, classPlotColors), length)))
plot(seq(n), rep(1,n), ylab = "", xlab = "", yaxt = "n",
     pch = mclust.options("classPlotSymbols"),
     col = mclust.options("classPlotColors"))
title("mclust.options(\"classPlotSymbols\") \n mclust.options(\"classPlotColors\")")
par(oldpar)
}
\keyword{cluster}
mclust/man/icl.Rd
\name{icl}
\alias{icl}
\title{ICL for an estimated Gaussian Mixture Model}
\description{
Computes the ICL (Integrated Complete-data Likelihood) criterion for a Gaussian mixture model fitted by \code{\link{Mclust}}.
}
\usage{
icl(object, \dots)
}
\arguments{
\item{object}{
An object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}}.
}
\item{\dots}{Further arguments passed to or from other methods.}
}
\value{
The ICL for the given input MCLUST model.
}
\references{
Biernacki, C., Celeux, G., Govaert, G. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. \emph{IEEE Trans. Pattern Analysis and Machine Intelligence}, 22 (7), 719-725.
}
\seealso{
\code{\link{Mclust}}, \code{\link{mclustBIC}}, \code{\link{mclustICL}}, \code{\link{bic}}.
} \examples{ mod <- Mclust(iris[,1:4]) icl(mod) } \keyword{cluster} mclust/man/me.weighted.Rd0000644000176200001440000001105513175053427015034 0ustar liggesusers\name{me.weighted} \alias{me.weighted} \title{EM algorithm with weights starting with M-step for parameterized MVN mixture models} \description{ Implements the EM algorithm for fitting MVN mixture models parameterized by eigenvalue decomposition, when observations have weights, starting with the maximization step. } \usage{ me.weighted(modelName, data, z, weights = NULL, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is an initial estimate of the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. } \item{weights}{ A vector of positive weights, where the \code{[i]}th entry is the weight for the ith observation. If any of the weights are greater than one, then they are scaled so that the maximum weight is one. } \item{prior}{ Specification of a conjugate prior on the means and variances. See the help file for \code{priorControl} for further information. The default assumes no prior. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{\link{emControl}}. } \item{Vinv}{ If the model is to include a noise term, \code{Vinv} is an estimate of the reciprocal hypervolume of the data region. If set to a negative value or 0, the model will include a noise term with the reciprocal hypervolume estimated by the function \code{hypvol}. 
The default is not to assume a noise term in the model through the setting \code{Vinv=NULL}. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is set by \code{warn} using \code{\link{mclust.options}}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. 
} } \author{Thomas Brendan Murphy} \seealso{ \code{\link{me}}, \code{\link{meE}},..., \code{\link{meVVV}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{estep}}, \code{\link{priorControl}}, \code{\link{mclustModelNames}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}} } \examples{ \dontrun{ w <- rep(1,150) w[1] <- 0 me.weighted(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]),weights=w)} } \keyword{cluster} mclust/man/mclust-deprecated.Rd0000644000176200001440000000132612506737557016253 0ustar liggesusers\name{mclust-deprecated} \alias{cv.MclustDA} \alias{cv1EMtrain} \alias{bicEMtrain} \title{Deprecated Functions in mclust package} \description{ These functions are provided for compatibility with older versions of the \pkg{mclust} package only, and may be removed eventually. } \usage{ cv.MclustDA(\dots) cv1EMtrain(data, labels, modelNames=NULL) bicEMtrain(data, labels, modelNames=NULL) } \arguments{ \item{\dots}{pass arguments down.} \item{data}{A numeric vector or matrix of observations.} \item{labels}{Labels for each element or row in the dataset.} \item{modelNames}{Vector of model names that should be tested. The default is to select all available model names.} } %\details{} mclust/man/plot.MclustBoostrap.Rd0000644000176200001440000000330512544446446016577 0ustar liggesusers\name{plot.MclustBootstrap} \alias{plot.MclustBootstrap} \title{Plot of bootstrap distributions for mixture model parameters} \description{ Plots the bootstrap distribution of parameters as returned by the \code{\link{MclustBootstrap}} function. 
}
\usage{
\method{plot}{MclustBootstrap}(x, what = c("pro", "mean", "var"),
     hist.col = "grey", hist.border = "lightgrey", breaks = "Sturges",
     col = "forestgreen", lwd = 2, lty = 3,
     xlab = NULL, xlim = NULL, ylim = NULL, \dots)
}
\arguments{
\item{x}{Object returned by \code{MclustBootstrap}.}
\item{what}{Character string specifying whether the mixing proportions (\code{"pro"}), the component means (\code{"mean"}), or the component variances (\code{"var"}) should be drawn.}
\item{hist.col}{The color to be used to fill the bars of the histograms.}
\item{hist.border}{The color of the border around the bars of the histograms.}
\item{breaks}{See the argument in function \code{\link[graphics]{hist}}.}
\item{col, lwd, lty}{The color, line width and line type to be used to represent the estimated parameters.}
\item{xlab}{Optional label for the horizontal axis.}
\item{xlim, ylim}{A two-value vector giving the range of, respectively, the horizontal and the vertical axis.}
\item{\dots}{Other graphics parameters.}
}
\value{
A plot for each variable/component of the selected parameters.
}
\seealso{
\code{\link{MclustBootstrap}}
}
\examples{
data(diabetes)
X = diabetes[,-1]
modClust = Mclust(X, G = 3, modelNames = "VVV")
bootClust = MclustBootstrap(modClust, nboot = 99)
par(mfrow = c(1,3), mar = c(4,2,2,0.5))
plot(bootClust, what = "pro")
par(mfrow = c(3,3), mar = c(4,2,2,0.5))
plot(bootClust, what = "mean")
}
\keyword{cluster}
mclust/man/sim.Rd
\name{sim}
\alias{sim}
\title{Simulate from Parameterized MVN Mixture Models}
\description{
Simulate data from parameterized MVN mixture models.
}
\usage{
sim(modelName, parameters, n, seed = NULL, ...)
}
\arguments{
\item{modelName}{
A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models.
}
\item{parameters}{
A list with the following components:
\describe{
\item{\code{pro}}{
A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed.
}
\item{\code{mean}}{
The mean for each component. If there is more than one component, this is a matrix whose \emph{k}th column is the mean of the \emph{k}th component of the mixture model.
}
\item{\code{variance}}{
A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details.
}
}
}
\item{n}{
An integer specifying the number of data points to be simulated.
}
\item{seed}{
An optional integer argument to \code{set.seed} for reproducible random class assignment. By default the current seed will be used. Reproducibility can also be achieved by calling \code{set.seed} before calling \code{sim}.
}
\item{\dots}{
Catches unused arguments in indirect or list calls via \code{do.call}.
}
}
\value{
A matrix in which the first column is the classification and the remaining columns are the \code{n} observations simulated from the specified MVN mixture model.
\item{Attributes:}{
\code{"modelName"} A character string indicating the variance model used for the simulation.
}
}
\details{
This function can be used with an indirect or list call using \code{do.call}, allowing the output of e.g. \code{mstep}, \code{em}, \code{me}, \code{Mclust} to be passed directly without the need to specify individual parameters as arguments.
} \seealso{ \code{\link{simE}}, \ldots, \code{\link{simVVV}}, \code{\link{Mclust}}, \code{\link{mstep}}, \code{\link{do.call}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) irisModel <- mclustModel(iris[,-5], irisBIC) names(irisModel) irisSim <- sim(modelName = irisModel$modelName, parameters = irisModel$parameters, n = nrow(iris)) \dontrun{ do.call("sim", irisModel) # alternative call } par(pty = "s", mfrow = c(1,2)) dimnames(irisSim) <- list(NULL, c("dummy", (dimnames(iris)[[2]])[-5])) dimens <- c(1,2) lim1 <- apply(iris[,dimens],2,range) lim2 <- apply(irisSim[,dimens+1],2,range) lims <- apply(rbind(lim1,lim2),2,range) xlim <- lims[,1] ylim <- lims[,2] coordProj(iris[,-5], parameters=irisModel$parameters, classification=map(irisModel$z), dimens=dimens, xlim=xlim, ylim=ylim) coordProj(iris[,-5], parameters=irisModel$parameters, classification=map(irisModel$z), truth = irisSim[,-1], dimens=dimens, xlim=xlim, ylim=ylim) irisModel3 <- mclustModel(iris[,-5], irisBIC, G=3) irisSim3 <- sim(modelName = irisModel3$modelName, parameters = irisModel3$parameters, n = 500, seed = 1) \dontrun{ irisModel3$n <- NULL irisSim3 <- do.call("sim",c(list(n=500,seed=1),irisModel3)) # alternative call } clPairs(irisSim3[,-1], cl = irisSim3[,1]) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/gmmhd.Rd0000644000176200001440000001202013107075206013713 0ustar liggesusers\name{gmmhd} \alias{gmmhd} \alias{print.gmmhd} \alias{summary.gmmhd} \alias{print.summary.gmmhd} \alias{plot.gmmhd} \alias{gmmhdClusterCores} \alias{gmmhdClassify} \title{Identifying Connected Components in Gaussian Finite Mixture Models for Clustering} \description{ Starting with the density estimate obtained from a fitted Gaussian finite mixture model, cluster cores are identified from the connected components at a given density level. 
Once cluster cores are identified, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest. } \usage{ gmmhd(object, ngrid = min(round((log(nrow(data)))*10), nrow(data)), dr = list(d = 3, lambda = 1, cumEvalues = NULL, mindir = 2), classify = list(G = 1:5, modelNames = mclust.options("emModelNames")[-c(8, 10)]), \dots) \method{plot}{gmmhd}(x, what = c("mode", "cores", "clusters"), \dots) } \arguments{ \item{object}{An object returned by \code{\link{Mclust}}.} \item{ngrid}{An integer specifying the number of grid points used to compute the density levels.} \item{dr}{A list of parameters used in the dimension reduction step.} \item{classify}{A list of parameters used in the classification step.} \item{x}{An object of class \code{'gmmhd'} as returned by the function \code{gmmhd}.} \item{what}{A string specifying the type of plot to be produced. See Examples section.} \item{\dots}{further arguments passed to or from other methods.} } \details{ Model-based clustering associates each component of a finite mixture distribution to a group or cluster. An underlying implicit assumption is that a one-to-one correspondence exists between mixture components and clusters. However, a single Gaussian density may not be sufficient, and two or more mixture components could be needed to reasonably approximate the distribution within a homogeneous group of observations. This function implements the methodology proposed by Scrucca (2016) based on the identification of high density regions of the underlying density function. Starting with an estimated Gaussian finite mixture model, the corresponding density estimate is used to identify the cluster cores, i.e. those data points which form the core of the clusters. These cluster cores are obtained from the connected components at a given density level \eqn{c}. A mode function gives the number of connected components as the level \eqn{c} is varied. 
Once cluster cores are identified, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest.
The method usually improves the identification of non-Gaussian clusters compared to a fully parametric approach. Furthermore, it enables the identification of clusters which cannot be obtained by merging mixture components, and it can be straightforwardly extended to cases of higher dimensionality.
}
\value{
A list of class \code{gmmhd} with the following components:
\item{Mclust}{The input object of class \code{"Mclust"} representing an estimated Gaussian finite mixture model.}
\item{MclustDA}{An object of class \code{"MclustDA"} containing the model used for the classification step.}
\item{MclustDR}{An object of class \code{"MclustDR"} containing the dimension reduction step if performed, otherwise \code{NULL}.}
\item{x}{The data used in the algorithm. This can be the input data or a projection if a preliminary dimension reduction step is performed.}
\item{density}{The density estimated from the input Gaussian finite mixture model evaluated at the input data.}
\item{con}{A list of connected components at each step.}
\item{nc}{A vector giving the number of connected components (i.e. modes) at each step.}
\item{pn}{Vector of values over a uniform grid of proportions of length \code{ngrid}.}
\item{qn}{Vector of density quantiles corresponding to proportions \code{pn}.}
\item{pc}{Vector of empirical proportions corresponding to quantiles \code{qn}.}
\item{clusterCores}{Vector of numerical labels for the cluster cores; \code{NA}s indicate that an observation does not belong to any cluster core.}
\item{cluster}{Vector of numerical labels giving the final clustering.}
\item{numClusters}{An integer giving the number of clusters.}
}
\references{
Scrucca, L. (2016) Identifying connected components in Gaussian finite mixture models for clustering. \emph{Computational Statistics & Data Analysis}, 93, 5-17.
} \author{ Luca Scrucca \email{luca.scrucca@unipg.it} } %\note{} \seealso{\code{\link{Mclust}}} \examples{ \dontrun{ data(faithful) mod <- Mclust(faithful) summary(mod) plot(as.densityMclust(mod), faithful, what = "density", points.pch = mclust.options("classPlotSymbols")[mod$classification], points.col = mclust.options("classPlotColors")[mod$classification]) GMMHD <- gmmhd(mod) summary(GMMHD) plot(GMMHD, what = "mode") plot(GMMHD, what = "cores") plot(GMMHD, what = "clusters") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. Use one of RShowDoc("KEYWORDS") \keyword{cluster} mclust/man/predict.densityMclust.Rd0000644000176200001440000000303213205037072017120 0ustar liggesusers\name{predict.densityMclust} \alias{predict.densityMclust} \title{Density estimate of multivariate observations by Gaussian finite mixture modeling} \description{Compute density estimation for multivariate observations based on Gaussian finite mixture models estimated by \code{\link{densityMclust}}.} \usage{ \method{predict}{densityMclust}(object, newdata, what = c("dens", "cdens"), logarithm = FALSE, \dots) } \arguments{ \item{object}{an object of class \code{'densityMclust'} resulting from a call to \code{\link{densityMclust}}.} \item{newdata}{a vector, a data frame or matrix giving the data. 
If missing, the density is computed for the input data used in the call to \code{\link{densityMclust}}.}
\item{what}{a character string specifying what to retrieve: \code{"dens"} returns a vector of values for the mixture density, \code{"cdens"} returns a matrix of component densities for each mixture component (along the columns).}
\item{logarithm}{A logical value indicating whether or not the logarithm of the density or component densities should be returned.}
\item{\dots}{further arguments passed to or from other methods.}
}
% \details{}
\value{
Returns a vector or a matrix of densities evaluated at \code{newdata} depending on the argument \code{what} (see above).
}
\author{Luca Scrucca}
% \note{}
\seealso{\code{\link{Mclust}}.}
\examples{
\dontrun{
x <- faithful$waiting
dens <- densityMclust(x)
x0 <- seq(50, 100, by = 10)
d0 <- predict(dens, x0)
plot(dens)
points(x0, d0, pch = 20)
}
}
\keyword{multivariate}
mclust/man/defaultPrior.Rd
\name{defaultPrior}
\alias{defaultPrior}
\title{Default conjugate prior for Gaussian mixtures}
\description{
Default conjugate prior specification for Gaussian mixtures.
}
\usage{
defaultPrior(data, G, modelName, \dots)
}
\arguments{
\item{data}{
A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.
}
\item{G}{
The number of mixture components.
} \item{modelName}{ A character string indicating the model: \cr\cr \code{"E"}: equal variance (univariate) \cr \code{"V"}: variable variance (univariate)\cr \code{"EII"}: spherical, equal volume \cr \code{"VII"}: spherical, unequal volume \cr \code{"EEI"}: diagonal, equal volume and shape\cr \code{"VEI"}: diagonal, varying volume, equal shape\cr \code{"EVI"}: diagonal, equal volume, varying shape \cr \code{"VVI"}: diagonal, varying volume and shape \cr \code{"EEE"}: ellipsoidal, equal volume, shape, and orientation \cr \code{"EEV"}: ellipsoidal, equal volume and equal shape\cr \code{"VEV"}: ellipsoidal, equal shape \cr \code{"VVV"}: ellipsoidal, varying volume, shape, and orientation. \cr\cr A description of the models above is provided in the help of \code{\link{mclustModelNames}}. Note that in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in \emph{MCLUST} up to version 4.4. } \item{\dots}{ One or more of the following: \describe{ \item{\code{dof}}{ The degrees of freedom for the prior on the variance. The default is \code{d + 2}, where \code{d} is the dimension of the data. } \item{\code{scale}}{ The scale parameter for the prior on the variance. The default is \code{var(data)/G^(2/d)}, where \code{d} is the dimension of the data. } \item{\code{shrinkage}}{ The shrinkage parameter for the prior on the mean. The default value is 0.01. If 0 or NA, no prior is assumed for the mean. } \item{\code{mean}}{ The mean parameter for the prior. The default value is \code{colMeans(data)}. } } } } \value{ A list giving the prior degrees of freedom, scale, shrinkage, and mean. } \details{ \code{defaultPrior} is a function whose default is to output the default prior specification for EM within \emph{MCLUST}.\cr Furthermore, \code{defaultPrior} can be used as a template to specify alternative parameters for a conjugate prior. } \references{ C. Fraley and A. E. Raftery (2002). 
Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association} 97:611-631. C. Fraley and A. E. Raftery (2005, revised 2009). Bayesian regularization for normal mixture estimation and model-based clustering. Technical Report, Department of Statistics, University of Washington. C. Fraley and A. E. Raftery (2007). Bayesian regularization for normal mixture estimation and model-based clustering. \emph{Journal of Classification} 24:155-181. } \seealso{ \code{\link{mclustBIC}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{priorControl}} } \examples{ # default prior irisBIC <- mclustBIC(iris[,-5], prior = priorControl()) summary(irisBIC, iris[,-5]) # equivalent to previous example irisBIC <- mclustBIC(iris[,-5], prior = priorControl(functionName = "defaultPrior")) summary(irisBIC, iris[,-5]) # no prior on the mean; default prior on variance irisBIC <- mclustBIC(iris[,-5], prior = priorControl(shrinkage = 0)) summary(irisBIC, iris[,-5]) # equivalent to previous example irisBIC <- mclustBIC(iris[,-5], prior = priorControl(functionName="defaultPrior", shrinkage=0)) summary(irisBIC, iris[,-5]) defaultPrior( iris[-5], G = 3, modelName = "VVV") } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21.
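The parameter-count bookkeeping described in the nMclustParams/nVarParams pages above can be checked by hand. The following is a minimal sketch in plain base R (independent of the package) for the unrestricted "VVV" model, where each component has a full covariance matrix:

```r
## Hand-computed parameter count for an unrestricted ("VVV") Gaussian
## mixture: each of the G components has a full d x d covariance matrix
## (d*(d+1)/2 free parameters), plus G*d mean parameters and G-1 free
## mixing proportions.
d <- 2                                # data dimension
G <- 3                                # number of mixture components
nVar   <- G * d * (d + 1) / 2         # variance parameters for "VVV"
nTotal <- nVar + G * d + (G - 1)      # add means and mixing proportions
nTotal                                # 9 + 6 + 2 = 17
```

For d = 2 and G = 3 this gives 17, the count that enters the BIC penalty for a 3-component "VVV" model fitted to bivariate data.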