BradleyTerry2/0000755000176200001440000000000013160432062012741 5ustar liggesusersBradleyTerry2/inst/0000755000176200001440000000000013160423701013716 5ustar liggesusersBradleyTerry2/inst/CITATION0000744000176200001440000000130413152515662015063 0ustar liggesuserscitHeader("To cite BradleyTerry2 in publications use:") citEntry(entry = "Article", title = "Bradley-Terry Models in {R}: The {BradleyTerry2} Package", author = personList(as.person("Heather Turner"), as.person("David Firth")), journal = "Journal of Statistical Software", year = "2012", volume = "48", number = "9", pages = "1--21", url = "http://www.jstatsoft.org/v48/i09/", textVersion = paste("Heather Turner, David Firth (2012).", "Bradley-Terry Models in R: The BradleyTerry2 Package.", "Journal of Statistical Software, 48(9), 1-21.", "URL http://www.jstatsoft.org/v48/i09/.") ) BradleyTerry2/inst/doc/0000755000176200001440000000000013160423701014463 5ustar liggesusersBradleyTerry2/inst/doc/BradleyTerry.Rnw0000744000176200001440000013105413160423164017573 0ustar liggesusers% \VignetteIndexEntry{Bradley-Terry models in R} % \VignetteKeyword{generalized linear model} % \VignetteKeyword{logistic regression} % \VignetteKeyword{penalized quasi-likelihood} % \VignetteKeyword{ranking} % \VignetteKeyword{tournament analysis} % \VignetteKeyword{working residuals} % \VignettePackage{BradleyTerry2} %%% For jss: %% \documentclass{jss} %% \newcommand{\pkginfo}{} %%% uncomment for vignette version \documentclass[nojss]{jss} \newcommand{\pkginfo}{\small \\[12pt]For \pkg{BradleyTerry2} version \Sexpr{packageDescription("BradleyTerry2")[["Version"]]}, \Sexpr{Sys.Date()}\\\url{https://github.com/hturner/BradleyTerry2}\\[-12pt]} %% need no \usepackage{Sweave.sty} \usepackage[english]{babel} % to avoid et~al with texi2pdf \usepackage{amsmath} \usepackage{booktabs} \usepackage{thumbpdf} \setkeys{Gin}{width=0.6\textwidth} \SweaveOpts{keep.source=TRUE} %http://www.stat.auckland.ac.nz/~ihaka/downloads/Sweave-customisation.pdf \newcommand{\R}{\proglang{R}} \newcommand{\BT}{\pkg{BradleyTerry2}} \newcommand{\logit}{\mathop{\rm logit}} \newcommand{\pr}{\mathop{\rm pr}} \author{Heather Turner\\University of Warwick \And David Firth\\University of Warwick} \Plainauthor{Heather Turner, David Firth} \title{Bradley-Terry Models in \proglang{R}: The \BT\ Package \pkginfo} \Plaintitle{Bradley-Terry Models in R: The BradleyTerry2 Package} \Shorttitle{\pkg{BradleyTerry2}: Bradley-Terry Models in \proglang{R}} \Abstract{ This is a short overview of the \R\ add-on package \BT, which facilitates the specification and fitting of Bradley-Terry logit, probit or cauchit models to pair-comparison data. Included are the standard `unstructured' Bradley-Terry model, structured versions in which the parameters are related through a linear predictor to explanatory variables, and the possibility of an order or `home advantage' effect or other `contest-specific' effects. Model fitting is either by maximum likelihood, by penalized quasi-likelihood (for models which involve a random effect), or by bias-reduced maximum likelihood in which the first-order asymptotic bias of parameter estimates is eliminated. Also provided are a simple and efficient approach to handling missing covariate data, and suitably-defined residuals for diagnostic checking of the linear predictor. } \Keywords{generalized linear model, logistic regression, penalized quasi-likelihood, ranking, tournament analysis, working residuals} \Address{ David Firth\\ Department of Statistics\\ University of Warwick\\ Coventry\\ CV4 7AL, United Kingdom\\ E-mail: \email{d.firth@warwick.ac.uk}\\ URL: \url{http://go.warwick.ac.uk/dfirth} } \begin{document} @ <>= options(prompt = "R> ", continue = "+ ", width = 70, useFancyQuotes = FALSE, digits = 7) @ %def \section{Introduction} The Bradley-Terry model \citep{brad:terr:52} assumes that in a `contest' between any two `players', say player $i$ and player $j$ $(i, j \in \{1,\ldots,K\})$, the odds that $i$ beats $j$ are $\alpha_i/\alpha_j$, where $\alpha_i$ and $\alpha_j$ are positive-valued parameters which might be thought of as representing `ability'. A general introduction can be found in \citet{brad:84} or \citet{agre:02}. Applications are many, ranging from experimental psychology to the analysis of sports tournaments to genetics (for example, the allelic transmission/disequilibrium test of \citealp{sham:curt:95} is based on a Bradley-Terry model in which the `players' are alleles). In typical psychometric applications the `contests' are comparisons, made by different human subjects, between pairs of items. The model can alternatively be expressed in the logit-linear form \begin{equation} \logit[\pr(i\ \mathrm{beats}\ j)]=\lambda_i-\lambda_j, \label{eq:unstructured} \end{equation} where $\lambda_i=\log\alpha_i$ for all $i$. Thus, assuming independence of all contests, the parameters $\{\lambda_i\}$ can be estimated by maximum likelihood using standard software for generalized linear models, with a suitably specified model matrix. The primary purpose of the \BT\ package \citep{turn:12}, implemented in the \R\ statistical computing environment \citep{ihak:gent:96, R}, is to facilitate the specification and fitting of such models and some extensions. The \BT\ package supersedes the earlier \pkg{BradleyTerry} package \citep{firt:05}, providing a more flexible user interface to allow a wider range of models to be fitted. In particular, \BT\ allows the inclusion of simple random effects so that the ability parameters can be related to available explanatory variables through a linear predictor of the form \begin{equation} \lambda_i=\sum_{r=1}^p\beta_rx_{ir} + U_i. \end{equation} The inclusion of the prediction error $U_i$ allows for variability between players with equal covariate values and induces correlation between comparisons with a common player. \BT\ also allows for general contest-specific effects to be included in the model and allows the logit link to be replaced, if required, by a different symmetric link function (probit or cauchit). The remainder of the paper is organised as follows. Section~\ref{sec:BTmodel} demonstrates how to use the \pkg{BradleyTerry2} package to fit a standard (i.e., unstructured) Bradley-Terry model, with a separate ability parameter estimated for each player, including the use of bias-reduced estimation for such models. Section~\ref{sec:covariates} considers variations of the standard model, including the use of player-specific variables to model ability and allowing for contest-specific effects such as an order effect or judge effects. Sections~\ref{sec:ability} and \ref{sec:residuals} explain how to obtain important information about a fitted model, in particular the estimates of ability and their standard errors, and player-level residuals, whilst Section~\ref{sec:model} notes the functions available to aid model search. Section~\ref{sec:data} explains in more detail how set up data for use with the \BT\ package, Section~\ref{sec:functions} lists the functions provided by the package and finally Section~\ref{sec:finalremarks} comments on two directions for further development of the software. \section{Standard Bradley-Terry model} \label{sec:BTmodel} \subsection{Example: Analysis of journal citations} \label{citations} The following data come from page 448 of \citet{agre:02}, extracted from the larger table of \citet{stig:94}. The data are counts of citations among four prominent journals of statistics and are included the \BT\ package as the data set \code{citations}: @ <>= library("BradleyTerry2") @ @ <>= data("citations", package = "BradleyTerry2") @ @ <>= citations @ %def Thus, for example, \emph{Biometrika} was cited 498 times by papers in \emph{Journal of the American Statistical Association} (JASA) during the period under study. In order to fit a Bradley-Terry model to these data using \code{BTm} from the \BT\ package, the data must first be converted to binomial frequencies. That is, the data need to be organised into pairs (\code{player1}, \code{player2}) and corresponding frequencies of wins and losses for \code{player1} against \code{player2}. The \BT\ package provides the utility function \code{countsToBinomial} to convert a contingency table of wins to the format just described: @ <>= citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") citations.sf @ %def Note that the self-citation counts are ignored -- these provide no information on the ability parameters, since the abilities are relative rather than absolute quantities. The binomial response can then be modelled by the difference in player abilities as follows: @ <>= citeModel <- BTm(cbind(win1, win2), journal1, journal2, ~ journal, id = "journal", data = citations.sf) citeModel @ %def The coefficients here are maximum likelihood estimates of $\lambda_2, \lambda_3, \lambda_4$, with $\lambda_1$ (the log-ability for \emph{Biometrika}) set to zero as an identifying convention. The one-sided model formula \begin{verbatim} ~ journal \end{verbatim} specifies the model for player ability, in this case the `citeability' of the journal. The \code{id} argument specifies that \code{"journal"} is the name to be used for the factor that identifies the player -- the values of which are given here by \code{journal1} and \code{journal2} for the first and second players respectively. Therefore in this case a separate citeability parameter is estimated for each journal. If a different `reference' journal is required, this can be achieved using the optional \code{refcat} argument: for example, making use of \code{update} to avoid re-specifying the whole model, @ <>= update(citeModel, refcat = "JASA") @ %def -- the same model in a different parameterization. The use of the standard Bradley-Terry model for this application might perhaps seem rather questionable -- for example, citations within a published paper can hardly be considered independent, and the model discards potentially important information on self-citation. \citet{stig:94} provides arguments to defend the model's use despite such concerns. \subsection{Bias-reduced estimates} %\label{sec:bias} Estimation of the standard Bradley-Terry model in \code{BTm} is by default computed by maximum likelihood, using an internal call to the \code{glm} function. An alternative is to fit by bias-reduced maximum likelihood \citep{firt:93}: this requires additionally the \pkg{brglm} package \citep{kosm:07}, and is specified by the optional argument \code{br = TRUE}. The resultant effect, namely removal of first-order asymptotic bias in the estimated coefficients, is often quite small. One notable feature of bias-reduced fits is that all estimated coefficients and standard errors are necessarily finite, even in situations of `complete separation' where maximum likelihood estimates take infinite values \citep{hein:sche:02}. For the citation data, the parameter estimates are only very slightly changed in the bias-reduced fit: @ <>= update(citeModel, br = TRUE) @ %def Here the bias of maximum likelihood is small because the binomial counts are fairly large. In more sparse arrangements of contests -- that is, where there is less or no replication of the contests -- the effect of bias reduction would typically be more substantial than the insignificant one seen here. \section{Abilities predicted by explanatory variables} \label{sec:covariates} \subsection{`Player-specific' predictor variables} In some application contexts there may be `player-specific' explanatory variables available, and it is then natural to consider model simplification of the form \begin{equation} \lambda_i=\sum_{r=1}^p\beta_rx_{ir} + U_i, \end{equation} in which ability of each player $i$ is related to explanatory variables $x_{i1},\ldots,x_{ip}$ through a linear predictor with coefficients $\beta_1,\ldots,\beta_p$; the $\{U_i\}$ are independent errors. Dependence of the player abilities on explanatory variables can be specified via the \code{formula} argument, using the standard \emph{S}-language model formulae. The difference in the abilities of player $i$ and player $j$ is modelled by \begin{equation} \sum_{r=1}^p\beta_rx_{ir} - \sum_{r=1}^p\beta_rx_{jr} + U_i - U_j, \label{eq:structured} \end{equation} where $U_i \sim N(0, \sigma^2)$ for all $i$. The Bradley-Terry model is then a generalized linear mixed model, which the \code{BTm} function currently fits by using the penalized quasi-likelihood algorithm of \citet{bres:93}. As an illustration, consider the following simple model for the \code{flatlizards} data, which predicts the fighting ability of Augrabies flat lizards by body size (snout to vent length): @ <>= options(show.signif.stars = FALSE) data("flatlizards", package = "BradleyTerry2") lizModel <- BTm(1, winner, loser, ~ SVL[..] + (1|..), data = flatlizards) @ %def Here the winner of each fight is compared to the loser, so the outcome is always 1. The special name `\code{..}' appears in the formula as the default identifier for players, in the absence of a user-specified \code{id} argument. The values of this factor are given by \code{winner} for the winning lizard and \code{loser} for the losing lizard in each contest. %Since \code{winner} %and \code{loser} are specific instances of the factor \code{..}, they must %share the same set of levels (one for each lizard). %The factors \code{winner}and \code{loser} These factors are provided in the data frame \code{contests} that is the first element of the list object \code{flatlizards}. The second element of \code{flatlizards} is another data frame, \code{predictors}, containing measurements on the observed lizards, including \code{SVL}, which is the snout to vent length. Thus \code{SVL[..]} represents the snout to vent length indexed by lizard (\code{winner} or \code{loser} as appropriate). Finally a random intercept for each lizard is included using the bar notation familiar to users of the \pkg{lme4} package \citep{bate:11}. (Note that a random intercept is the only random effect structure currently implemented in \pkg{BradleyTerry2}.) The fitted model is summarized below: @ <>= summary(lizModel) @ %def The coefficient of snout to vent length is weakly significant; however, the standard deviation of the random effect is quite large, suggesting that this simple model has fairly poor explanatory power. A more appropriate model is considered in the next section. \subsection{Missing values} The contest data may include all possible pairs of players and hence rows of missing data corresponding to players paired with themselves. Such rows contribute no information to the Bradley-Terry model and are simply discarded by \code{BTm}. Where there are missing values in player-specific \emph{predictor} (or \emph{explanatory}) variables which appear in the formula, it will typically be very wasteful to discard all contests involving players for which some values are missing. Instead, such cases are accommodated by the inclusion of one or more parameters in the model. If, for example, player $1$ has one or more of its predictor values $x_{11},\ldots,x_{1p}$ missing, then the combination of Equations~\ref{eq:unstructured} and \ref{eq:structured} above yields \begin{equation} \logit[\pr(1\ \mathrm{beats}\ j)]=\lambda_1 - \left(\sum_{r=1}^p\beta_rx_{jr} + U_j\right), \end{equation} for all other players $j$. This results in the inclusion of a `direct' ability parameter for each player having missing predictor values, in addition to the common coefficients $\beta_1,\ldots,\beta_p$ -- an approach which will be appropriate when the missingness mechanism is unrelated to contest success. The same device can be used also to accommodate any user-specified departures from a structured Bradley-Terry model, whereby some players have their abilities determined by the linear predictor but others do not. In the original analysis of the \code{flatlizards} data \citep{whit:06}, the final model included the first and third principal components of the spectral reflectance from the throat (representing brightness and UV intensity respectively) as well as head length and the snout to vent length seen in our earlier model. The spectroscopy data was missing for two lizards, therefore the ability of these lizards was estimated directly. The following fits this model, with the addition of a random intercept as before: @ <>= lizModel2 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards) summary(lizModel2) @ %def Note that \code{BTm} detects that lizards 96 and 99 have missing values in the specified predictors and automatically includes separate ability parameters for these lizards. This model was found to be the single best model based on the principal components of reflectance and the other predictors available and indeed the standard deviation of the random intercept is much reduced, but still highly significant. Allowing for this significant variation between lizards with the same predictor values produces more realistic (i.e., larger) standard errors for the parameters when compared to the original analysis of \citet{whit:06}. Although this affects the significance of the morphological variables, it does not affect the significance of the principal components, so in this case does not affect the main conclusions of the study. \subsection{Order effect} \label{sec:order} In certain types of application some or all contests have an associated `bias', related to the order in which items are presented to a judge or with the location in which a contest takes place, for example. A natural extension of the Bradley-Terry model (Equation~\ref{eq:unstructured}) is then \begin{equation} \logit[\pr(i\ \mathrm{beats}\ j)]=\lambda_i-\lambda_j + \delta z, \end{equation} where $z=1$ if $i$ has the supposed advantage and $z=-1$ if $j$ has it. (If the `advantage' is in fact a disadvantage, $\delta$ will be negative.) The scores $\lambda_i$ then relate to ability in the absence of any such advantage. As an example, consider the baseball data given in \citet{agre:02}, page 438: @ <>= data("baseball", package = "BradleyTerry2") head(baseball) @ %def The data set records the home wins and losses for each baseball team against each of the 6 other teams in the data set. The \code{head} function is used to show the first 6 records, which are the Milwaukee home games. We see for example that Milwaukee played 7 home games against Detroit and won 4 of them. The `standard' Bradley-Terry model without a home-advantage parameter will be fitted if no formula is specified in the call to \code{BTm}: @ <>= baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") summary(baseballModel1) @ %def The reference team is Baltimore, estimated to be the weakest of these seven, with Milwaukee and Detroit the strongest. In the above, the ability of each team is modelled simply as \code{~ team} where the values of the factor \code{team} are given by \code{home.team} for the first team and \code{away.team} for the second team in each game. To estimate the home-advantage effect, an additional variable is required to indicate whether the team is at home or not. Therefore data frames containing both the team factor and this new indicator variable are required in place of the factors \code{home.team} and \code{away.team} in the call to \code{BTm}. This is achieved here by over-writing the \code{home.team} and \code{away.team} factors in the \code{baseball} data frame: @ <>= baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) @ %def The \code{at.home} variable is needed for both the home team and the away team, so that it can be differenced as appropriate in the linear predictor. With the data organised in this way, the ability formula can now be updated to include the \code{at.home} variable as follows: @ <>= baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) summary(baseballModel2) @ %def \vspace*{-0.3cm} This reproduces the results given on page 438 of \citet{agre:02}: the home team has an estimated odds-multiplier of $\exp(0.3023) = 1.35$ in its favour. \vspace*{-0.2cm} \subsection{More general (contest-specific) predictors} \label{sec:CEMS} The `home advantage' effect is a simple example of a contest-specific predictor. Such predictors are necessarily interactions, between aspects of the contest and (aspects of) the two `players' involved. For more elaborate examples of such effects, see \code{?chameleons} and \code{?CEMS}. The former includes an `experience' effect, which changes through time, on the fighting ability of male chameleons. The latter illustrates a common situation in psychometric applications of the Bradley-Terry model, where \emph{subjects} express preference for one of two \emph{objects} (the `players'), and it is the influence on the results of subject attributes that is of primary interest. As an illustration of the way in which such effects are specified, consider the following model specification taken from the examples in \code{?CEMS}, where data on students' preferences in relation to six European management schools is analysed. \vspace*{-0.3cm} @ <>= data("CEMS", package = "BradleyTerry2") table8.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * LAT[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) @ %def This model reproduces results from Table~8 of \cite{ditt:01} apart from minor differences due to the different treatment of ties. Here the outcome is the binomial frequency of preference for \code{school1} over \code{school2}, with ties counted as half a `win' and half a `loss'. The formula specifies the model for school `ability' or worth. In this formula, the default label `\code{..}' represents the school (with values given by \code{school1} or \code{school2} as appropriate) and \code{student} is a factor specifying the student that made the comparison. The remaining variables in the formula use \proglang{R}'s standard indexing mechanism to include student-specific variables, e.g., \code{WOR}: whether or not the student was in full-time employment, and school-specific variables, e.g., \code{LAT}: whether the school was in a `Latin' city. Thus there are three types of variables: contest-specific (\code{school1}, \code{school2}, \code{student}), subject-specific (\code{WOR}, \code{DEG}, \ldots) and object-specific (\code{LAT}, \code{St.Gallen}, \ldots). These three types of variables are provided in three data frames, contained in the list object \code{CEMS}. \section{Ability scores} \label{sec:ability} The function \code{BTabilities} extracts estimates and standard errors for the log-ability scores $\lambda_1, \ldots,\lambda_K$. These will either be `direct' estimates, in the case of the standard Bradley-Terry model or for players with one or more missing predictor values, or `model-based' estimates of the form $\hat\lambda_i=\sum_{r=1}^p\hat\beta_rx_{ir}$ for players whose ability is predicted by explanatory variables. As a simple illustration, team ability estimates in the home-advantage model for the \code{baseball} data are obtained by: @ <>= BTabilities(baseballModel2) @ %def This gives, for each team, the estimated ability when the team enjoys no home advantage. Similarly, estimates of the fighting ability of each lizard in the \code{flatlizards} data under the model based on the principal components of the spectral reflectance from the throat are obtained as follows: @ <>= head(BTabilities(lizModel2), 4) @ %def % The ability estimates in an unstructured Bradley-Terry model are particularly well suited to presentation using the device of \emph{quasi-variances} \citep{firt:04}. The \pkg{qvcalc} package \citep[][version 0.8-5 or later]{firt:10} contains a function of the same name which does the necessary work: \begin{Sinput} > library("qvcalc") > baseball.qv <- qvcalc(BTabilities(baseballModel2)) > plot(baseball.qv, + levelNames = c("Bal", "Bos", "Cle", "Det", "Mil", "NY", "Tor")) \end{Sinput} % \begin{figure}[t!] \centering \includegraphics[width=0.67\textwidth]{baseball-qvplot.pdf} \caption{Estimated relative abilities of baseball teams.\label{fig:qvplot}} \end{figure} % The `comparison intervals' as shown in Figure~\ref{fig:qvplot} are based on `quasi standard errors', and can be interpreted as if they refer to \emph{independent} estimates of ability for the journals. This has the advantage that comparison between any pair of journals is readily made (i.e., not only comparisons with the `reference' journal). For details of the theory and method of calculation see \citet{firt:04}. \section{Residuals} \label{sec:residuals} There are two main types of residuals available for a Bradley-Terry model object. First, there are residuals obtained by the standard methods for models of class \code{"glm"}. These all deliver one residual for each contest or type of contest. For example, Pearson residuals for the model \code{lizModel2} can be obtained simply by \vspace*{0.2cm} @ <>= res.pearson <- round(residuals(lizModel2), 3) head(cbind(flatlizards$contests, res.pearson), 4) @ %def \vspace*{-0.2cm} More useful for diagnostics on the linear predictor $\sum\beta_rx_{ir}$ are `player'-level residuals, obtained by using the function \code{residuals} with argument \code{type = "grouped"}. These residuals can then be plotted against other player-specific variables. \vspace*{-0.2cm} @ <>= res <- residuals(lizModel2, type = "grouped") # with(flatlizards$predictors, plot(throat.PC2, res)) # with(flatlizards$predictors, plot(head.width, res)) @ %def \vspace*{-0.2cm} These residuals estimate the error in the linear predictor; they are obtained by suitable aggregation of the so-called `working' residuals from the model fit. The \code{weights} attribute indicates the relative information in these residuals -- weight is roughly inversely proportional to variance -- which may be useful for plotting and/or interpretation; for example, a large residual may be of no real concern if based on very little information. Weighted least-squares regression of these residuals on any variable already in the model is null. For example: \vspace*{-0.2cm} @ <>= lm(res ~ throat.PC1, weights = attr(res, "weights"), data = flatlizards$predictors) lm(res ~ head.length, weights = attr(res, "weights"), data = flatlizards$predictors) @ %def %$ \vspace*{-0.2cm} As an illustration of evident \emph{non-null} residual structure, consider the unrealistically simple model \code{lizModel} that was fitted in Section~\ref{sec:covariates} above. That model lacks the clearly significant predictor variable \code{throat.PC3}, and the plot shown in Figure~\ref{fig:residuals} demonstrates this fact graphically: \begin{Sinput} > lizModel.residuals <- residuals(lizModel, type = "grouped") > plot(flatlizards$predictors$throat.PC3, lizModel.residuals) \end{Sinput} % \begin{figure}[t!] \centering \includegraphics[width=0.69\textwidth]{residuals.pdf} \caption{Lizard residuals for the simple model \code{lizModel}, plotted against \code{throat.PC3}.\label{fig:residuals}} \end{figure} % The residuals in the plot exhibit a strong, positive regression slope in relation to the omitted predictor variable \code{throat.PC3}. \section{Model search} \label{sec:model} In addition to \code{update()} as illustrated in preceding sections, methods for the generic functions \code{add1()}, \code{drop1()} and \code{anova()} are provided. These can be used to investigate the effect of adding or removing a variable, whether that variable is contest-specific, such as an order effect, or player-specific; and to compare the fit of nested models. %These can be used in the standard way for model elaboration or specialization, %and their availability also allows the use of \texttt{\color{black} step()} for %automated exploration of a set of candidate player-specific predictors. \section{Setting up the data} \label{sec:data} \subsection{Contest-specific data} \label{sec:contest} The \code{outcome} argument of \code{BTm} represents a binomial response and can be supplied in any of the formats allowed by the \code{glm} function. That is, either a two-column matrix with the columns giving the number of wins and losses (for \code{player1} vs.\ \code{player2}), a factor where the first level denotes a loss and all other levels denote a win, or a binary variable where 0 denotes a loss and 1 denotes a win. Each row represents either a single contest or a set of contests between the same two players. The \code{player1} and \code{player2} arguments are either factors specifying the two players in each contest, or data frames containing such factors, along with any contest-specific variables that are also player-specific, such as the \code{at.home} variable seen in Section~\ref{sec:order}. If given in data frames, the factors identifying the players should be named as specified by the \code{id} argument and should have identical levels, since they represent a particular sample of the full set of players. Thus for the model \code{baseballModel2}, which was specified by the following call: @ <>= baseballModel2$call @ %def the data are provided in the \code{baseball} data frame, which has the following structure: @ <>= str(baseball, vec.len = 2) @ %def In this case \code{home.team} and \code{away.team} are both data frames, with the factor \code{team} specifying the team and the variable \code{at.home} specifying whether or not the team was at home. So the first comparison @ <>= baseball$home.team[1,] baseball$away.team[1,] @ %def is Milwaukee playing at home against Detroit. The outcome is given by @ <>= baseball[1, c("home.wins", "away.wins")] @ %def Contest-specific variables that are \emph{not} player-specific -- for example, whether it rained or not during a contest -- should only be used in interactions with variables that \emph{are} player-specific, otherwise the effect on ability would be the same for both players and would cancel out. Such variables can conveniently be provided in a single data frame along with the \code{outcome}, \code{player1} and \code{player2} data. An offset in the model can be specified by using the \code{offset} argument to \code{BTm}\null. This facility is provided for completeness: the authors have not yet encountered an application where it is needed. To use only certain rows of the contest data in the analysis, the \code{subset} argument may be used in the call to \code{BTm}. This should either be a logical vector of the same length as the binomial response, or a numeric vector containing the indices of rows to be used. \subsection{Non contest-specific data} \label{sec:non-contest} Some variables do not vary by contest directly, but rather vary by a factor that is contest-specific, such as the player ID or the judge making the paired comparison. For such variables, it is more economical to store the data by the levels of the contest-specific factor and use indexing to obtain the values for each contest. The \code{CEMS} example in Section~\ref{sec:CEMS} provides an illustration of such variables. In this example student-specific variables are indexed by \code{student} and school-specific variables are indexed by \code{..}, i.e., the first or second school in the comparison as appropriate. There are then two extra sets of variables in addition to the usual contest-specific data as described in the last section. A good way to provide these data to \code{BTm} is as a list of data frames, one for each set of variables, e.g., @ <>= str(CEMS, vec.len = 2) @ %def The names of the data frames are only used by \code{BTm} if they match the names specified in the \code{player1} and \code{player2} arguments, in which case it is assumed that these are data frames providing the data for the first and second player respectively. The rows of data frames in the list should either correspond to the contests or the levels of the factor used for indexing. Player-specific offsets should be included in the formula by using the \code{offset} function. \subsection{Converting data from a `wide' format} The \code{BTm} function requires data in a `long' format, with one row per contest, provided either directly as in Section~\ref{sec:contest} or via indexing as in Section~\ref{sec:non-contest}. In studies where the same set of paired comparisons are made by several judges, as in a questionnaire for example, the data may be stored in a `wide' format, with one row per judge. As an example, consider the \code{cemspc} data from the \pkg{prefmod} package \citep{hatz:12}, which provides data from the CEMS study in a wide format. Each row corresponds to one student; the first 15 columns give the outcome of all pairwise comparisons between the 6~schools in the study and the last two columns correspond to two of the student-specific variables: \code{ENG} (indicating the student's knowledge of English) and \code{SEX} (indicating the student's gender). The following steps convert these data into a form suitable for analysis with \code{BTm}. First a new data frame is created from the student-specific variables and these variables are converted to factors: @ <>= library("prefmod") student <- cemspc[c("ENG", "SEX")] student$ENG <- factor(student$ENG, levels = 1:2, labels = c("good", "poor")) student$SEX <- factor(student$SEX, levels = 1:2, labels = c("female", "male")) @ %def This data frame is put into a list, which will eventually hold all the necessary data. Then a \code{student} factor is created for indexing the student data to produce contest-level data. This is put in a new data frame that will hold the contest-specific data. @ <>= cems <- list(student = student) student <- gl(303, 1, 303 * 15) #303 students, 15 comparisons contest <- data.frame(student = student) @ %def Next the outcome data is converted to a binomial response, adjusted for ties. The result is added to the \code{contest} data frame. @ <>= win <- cemspc[, 1:15] == 0 lose <- cemspc[, 1:15] == 2 draw <- cemspc[, 1:15] == 1 contest$win.adj <- c(win + draw/2) contest$lose.adj <- c(lose + draw/2) @ %def Then two factors are created identifying the first and second school in each comparison. The comparisons are in the order 1 vs.\ 2, 1 vs.\ 3, 2 vs.\ 3, 1 vs.\ 4, \ldots, so the factors can be created as follows: @ <>= lab <- c("London", "Paris", "Milano", "St. Gallen", "Barcelona", "Stockholm") contest$school1 <- factor(sequence(1:5), levels = 1:6, labels = lab) contest$school2 <- factor(rep(2:6, 1:5), levels = 1:6, labels = lab) @ %def Note that both factors have exactly the same levels, even though only five of the six players are represented in each case. In other words, the numeric factor levels refer to the same players in each case, so that the player is unambiguously identified. This ensures that player-specific parameters and player-specific covariates are correctly specified. Finally the \code{contest} data frame is added to the main list: @ <>= cems$contest <- contest @ %def This creates a single data object that can be passed to the \code{data} argument of \code{BTm}. Of course, such a list could be created on-the-fly as in \code{data = list(contest, student)}, which may be more convenient in practice. \subsection[Converting data from the format required by the earlier BradleyTerry package]{Converting data from the format required by the earlier \pkg{BradleyTerry} package} The \pkg{BradleyTerry} package described in \citet{firt:05} required contest/comparison results to be in a data frame with columns named \code{winner}, \code{loser} and \code{Freq}. The following example shows how \code{xtabs} and \code{countsToBinomial} can be used to convert such data for use with the \code{BTm} function in \pkg{BradleyTerry2}: \begin{Sinput} > library("BradleyTerry") ## the /old/ BradleyTerry package > ## load data frame with columns "winner", "loser", "Freq" > data("citations", package = "BradleyTerry") > ## convert to 2-way table of counts > citations <- xtabs(Freq ~ winner + loser, citations) > ## convert to a data frame of binomial observations > citations.sf <- countsToBinomial(citations) \end{Sinput} The \code{citations.sf} data frame can then be used with \code{BTm} as shown in Section~\ref{citations}. \section[A list of the functions provided in BradleyTerry2]{A list of the functions provided in \pkg{BradleyTerry2}} \label{sec:functions} The standard \R\ help files provide the definitive reference. Here we simply list the main user-level functions and their arguments, as a convenient overview: @ <>= ## cf. prompt options(width = 55) for (fn in getNamespaceExports("BradleyTerry2")) { name <- as.name(fn) args <- formals(fn) n <- length(args) arg.names <- arg.n <- names(args) arg.n[arg.n == "..."] <- "\\dots" is.missing.arg <- function(arg) typeof(arg) == "symbol" && deparse(arg) == "" Call <- paste(name, "(", sep = "") for (i in seq_len(n)) { Call <- paste(Call, arg.names[i], if (!is.missing.arg(args[[i]])) paste(" = ", paste(deparse(args[[i]]), collapse = "\n"), sep = ""), sep = "") if (i != n) Call <- paste(Call, ", ", sep = "") } Call <- paste(Call, ")", sep = "") cat(deparse(parse(text = Call)[[1]], width.cutoff = 50), fill = TRUE) } options(width = 60) @ %def \section{Some final remarks} \label{sec:finalremarks} \subsection[A note on the treatment of ties]{A note on the treatment of ties} The present version of \BT\ provides no sophisticated facilities for handling tied contests/comparisons; the well-known models of \cite{rao:kupp:67} and \cite{davi:70} are not implemented here. At present the \code{BTm} function requires a binary or binomial response variable, the third (`tied') category of response is not allowed. In several of the data examples (e.g., \code{?CEMS}, \code{?springall}, \code{?sound.fields}), ties are handled by the crude but simple device of adding half of a `win' to the tally for each player involved; in each of the examples where this has been done it is found that the result is very similar, after a simple re-scaling, to the more sophisticated analyses that have appeared in the literature. Note that this device when used with \code{BTm} typically gives rise to warnings produced by the back-end \code{glm} function, about non-integer `binomial' counts; such warnings are of no consequence and can be safely ignored. It is likely that a future version of \BT\ will have a more general method for handling ties. \subsection{A note on `contest-specific' random effects} The current version of \BT\ provides facilities for fitting models with random effects in `player-specific' predictor functions, as illustrated in Section~\ref{sec:covariates}. For more general, `contest-specific' random-effect structures, such as random `judge' effects in psychological studies \citep[e.g.,][]{bock:01}, \BT\ provides (through \code{BTm}) the necessary user interface but as yet no back-end calculation. It is hoped that this important generalization can be made successfully in a future version of \BT. \section*{Acknowledgments} This work was supported by the UK Engineering and Physical Sciences Research Council. \begin{thebibliography}{22} \newcommand{\enquote}[1]{``#1''} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \providecommand{\urlprefix}{URL } \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi:\discretionary{}{}{}#1}\else \providecommand{\doi}{doi:\discretionary{}{}{}\begingroup \urlstyle{rm}\Url}\fi \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{Agresti(2002)}]{agre:02} Agresti A (2002). \newblock \emph{Categorical Data Analysis}. \newblock 2nd edition. John Wiley \& Sons. \bibitem[{Bates \emph{et~al.}(2011)Bates, M\"achler, and Bolker}]{bate:11} Bates D, M\"achler M, Bolker B (2011). \newblock \emph{\pkg{lme4}: Linear Mixed-Effects Models Using \proglang{S}4 Classes}. \newblock \proglang{R}~package version~0.999375-42, \urlprefix\url{http://CRAN.R-project.org/package=lme4}. \bibitem[{B\"ockenholt(2001)}]{bock:01} B\"ockenholt U (2001). \newblock \enquote{Hierarchical Modeling of Paired Comparison Data.} \newblock \emph{Psychological Methods}, \textbf{6}(1), 49--66. \bibitem[{Bradley(1984)}]{brad:84} Bradley RA (1984). \newblock \enquote{Paired Comparisons: Some Basic Procedures and Examples.} \newblock In PR~Krishnaiah, PK~Sen (eds.), \emph{Nonparametric Methods}, volume~4 of \emph{Handbook of Statistics}, pp. 299 -- 326. Elsevier. \bibitem[{Bradley and Terry(1952)}]{brad:terr:52} Bradley RA, Terry ME (1952). \newblock \enquote{Rank Analysis of Incomplete Block Designs {I}: {T}he Method of Paired Comparisons.} \newblock \emph{Biometrika}, \textbf{39}, 324--45. \bibitem[{Breslow and Clayton(1993)}]{bres:93} Breslow NE, Clayton DG (1993). \newblock \enquote{Approximate Inference in Generalized Linear Mixed Models.} \newblock \emph{Journal of the American Statistical Association}, \textbf{88}(421), 9--25. \bibitem[{Davidson(1970)}]{davi:70} Davidson RR (1970). \newblock \enquote{On Extending the {B}radley-{T}erry Model to Accommodate Ties in Paired Comparison Experiments.} \newblock \emph{Journal of the American Statistical Association}, \textbf{65}, 317--328. \bibitem[{Dittrich \emph{et~al.}(2001)Dittrich, Hatzinger, and Katzenbeisser}]{ditt:01} Dittrich R, Hatzinger R, Katzenbeisser W (2001). \newblock \enquote{Corrigendum: {M}odelling the Effect of Subject-Specific Covariates in Paired Comparison Studies with an Application to University Rankings.} \newblock \emph{Applied Statistics}, \textbf{50}, 247--249. \bibitem[{Firth(1993)}]{firt:93} Firth D (1993). \newblock \enquote{Bias Reduction of Maximum Likelihood Estimates.} \newblock \emph{Biometrika}, \textbf{80}, 27--38. \bibitem[{Firth(2005)}]{firt:05} Firth D (2005). \newblock \enquote{Bradley-Terry Models in \proglang{R}.} \newblock \emph{Journal of Statistical Software}, \textbf{12}(1), 1--12. \newblock \urlprefix\url{http://www.jstatsoft.org/v12/i01/}. \bibitem[{Firth(2010)}]{firt:10} Firth D (2010). \newblock \emph{\pkg{qvcalc}: Quasi-Variances for Factor Effects in Statistical Models}. \newblock \proglang{R}~package version~0.8-7, \urlprefix\url{http://CRAN.R-project.org/package=qvcalc}. \bibitem[{Firth and {de Menezes}(2004)}]{firt:04} Firth D, {de Menezes} RX (2004). \newblock \enquote{Quasi-Variances.} \newblock \emph{Biometrika}, \textbf{91}, 65--80. \bibitem[{Hatzinger and Dittrich(2012)}]{hatz:12} Hatzinger R, Dittrich R (2012). \newblock \enquote{\pkg{prefmod}: An \proglang{R} Package for Modeling Preferences Based on Paired Comparisons, Rankings, or Ratings.} \newblock \emph{Journal of Statistical Software}, \textbf{48}(10), 1--31. \newblock \urlprefix\url{http://www.jstatsoft.org/v48/i10/}. \bibitem[{Heinze and Schemper(2002)}]{hein:sche:02} Heinze G, Schemper M (2002). \newblock \enquote{A Solution to the Problem of Separation in Logistic Regression.} \newblock \emph{Statistics in Medicine}, \textbf{21}, 2409--2419. \bibitem[{Ihaka and Gentleman(1996)}]{ihak:gent:96} Ihaka R, Gentleman R (1996). \newblock \enquote{\proglang{R}: A Language for Data Analysis and Graphics.} \newblock \emph{Journal of Computational and Graphical Statistics}, \textbf{5}(3), 299--314. \bibitem[{Kosmidis(2007)}]{kosm:07} Kosmidis I (2007). \newblock \emph{\pkg{brglm}: Bias Reduction in Binary-Response GLMs}. \newblock \proglang{R}~package version~0.5-6, \urlprefix\url{http://www.ucl.ac.uk/~ucakiko/software.html}. \bibitem[{Rao and Kupper(1967)}]{rao:kupp:67} Rao PV, Kupper LL (1967). \newblock \enquote{Ties in Paired-Comparison Experiments: {A} Generalization of the {B}radley-{T}erry Model.} \newblock \emph{Journal of the American Statistical Association}, \textbf{62}, 194--204. \bibitem[{{\proglang{R} Development Core Team}(2012)}]{R} {\proglang{R} Development Core Team} (2012). \newblock \emph{\proglang{R}: A Language and Environment for Statistical Computing}. \newblock \proglang{R} Foundation for Statistical Computing, Vienna, Austria. \newblock {ISBN} 3-900051-07-0, \urlprefix\url{http://www.R-project.org/}. \bibitem[{Sham and Curtis(1995)}]{sham:curt:95} Sham PC, Curtis D (1995). \newblock \enquote{An Extended Transmission/Disequilibrium Test ({TDT}) for Multi-Allele Marker Loci.} \newblock \emph{Annals of Human Genetics}, \textbf{59}(3), 323--336. \bibitem[{Stigler(1994)}]{stig:94} Stigler S (1994). \newblock \enquote{Citation Patterns in the Journals of Statistics and Probability.} \newblock \emph{Statistical Science}, \textbf{9}, 94--108. \bibitem[{Turner and Firth(2012)}]{turn:12} Turner H, Firth D (2012). \newblock \enquote{Bradley-Terry Models in \proglang{R}: The \pkg{BradleyTerry2} Package.} \newblock \emph{Journal of Statistical Software}, \textbf{48}(9), 1--21. \newblock \urlprefix\url{http://www.jstatsoft.org/v48/i09/}. \bibitem[{Whiting \emph{et~al.}({2006})Whiting, Stuart-Fox, O'Connor, Firth, Bennett, and Blomberg}]{whit:06} Whiting MJ, Stuart-Fox DM, O'Connor D, Firth D, Bennett NC, Blomberg SP ({2006}). \newblock \enquote{{Ultraviolet Signals Ultra-Aggression in a Lizard}.} \newblock \emph{Animal Behaviour}, \textbf{{72}}, 353--363. \end{thebibliography} \end{document} BradleyTerry2/inst/doc/BradleyTerry.pdf0000644000176200001440000107441013160432062017575 0ustar liggesusers%PDF-1.5 % 1 0 obj << /Type /ObjStm /Length 4815 /Filter /FlateDecode /N 82 /First 662 >> stream x\[s8~?o&$q%ڪ؎''Ifv灖YPr&__7H8ԦNT {D&FhV˝ȅWAxPBJET9 )7B7x+F!} Q'ў { B9 VP$T M⪅0hv6B\ L]}c5j)L0H ` c3&x Ph/l<48VL)Pk;<2v+lr@Ug(i…-,`R-r܈ܢ"!@z?<Pf ̼:$v`!C vH9T@PB%]@*YA Y9f6SlH%[^3:cf06̎$cfc,)Ej!-ʼClW͠x8&EQꕧ?a1+޸r0cNOl@*DOt4Ϧ_Lt\\z.!_dx'qq'h@T=}]`662F-j_E r:{DBfRҨrNLN_ϟfxdlp}W`v]:=9/F=Œ͸ߌnٛUd#lMfVfŌQ:FT?{jV&h-"~,[ ".̈́$~?i쌘74zhOUTeJ7L"z8Je]gH6٢dJBp%ouC NO2xlmbpu=vIJ$ˠP&kt:q̇Dٰ fI.bzPJNL2ZIBRJp0 sНŧKvQBRy^D#jS3~Nwf`"k`'l4M+i"^86+l_Hb{c!"BD"#$ 8RG.~Ws z8MbZ#bEɏ?|}㪞EMpJ]i&qMO}R}-"|dV3;SJVh׿#:lPz3~8ePUym\mջ~ {s; Nwij߽~'}7pր۰_?]Va# >_tU՟mċm^LIe"l5iə_/ۛegњ5!MLbI~EH.B pOVeI;xF'=xb&p>S}[0 #<¡[L-m% -:HDcN! #MZ.ABA1M"逘H=Fe`dX*ҩa\fVP-!K@AJ0LtV]]L[gP+[Ũ>$a{amRr^<41B/:3h:hK+T#DQ^qh?i_m7uEqq"\{g%=:"o>Ь|%.:b#B: Ht8y0{VMHiXۛEPwsqvrwIoP0)7xPn3/?t𒀟^C_` ɫ~هfvoIN_}+/K4cjm[Ҷ?8`yo&nv)q#HzѹpX:v5yV.tkVؿ5{ KW.1 zxC"G?J+GhAycٵLU3?ÿ<}`.]uXEujѣa`lU9"cEC^Ӟc>z,W)y5m J=n&p[^zEblN}$5iVfA&2GLtB]g4A>!QAaS^&$;#+F]uNxDh5qmf•EmDCFl4~QҾRWɑ8kOYZY]>aQЎxOv$Y)s,zA2ִюxȒ&gإ<#SHy NYǺ`=y9W]'݌>[.bحZE*IZ}K4{>O_e9{:<~tWVt&o3*y|,e-${@&ُ:=' LgTecsܑ\5eӃ9}m\'l_tSFX>%gg#͖? /OfyL1. ޙ%@= DvJB2c~ۺc5J.Jќ_x>%kG]JM8'Z6 M,Awj}UOfgY5ļ5[+-tqW~v ;>؁rjsFE4)9wSp oṮ1e]m>ǀl%dqm &d%jVJx;qb>r28rGh-9{{{kIQ 5[9<0NRݴsSel ޱľ浯cM)ӚQʹx]cz8_:o9Jt9oPUئ5B$s5tN]({ <)ɉ1^ȗZ4g yX/zȡXoȴG;-HϽ =O"L?uzӫtN:1N?7諁8Is˩hvpNJ6jX"i7l*W&êttθ=W?әqg9Sچꧠ0=JE:}MOӳ9?zx\)7WlY5H7A9J?'(U gVt}0\OtNϘuZ}IglP%K#<'JJ6ìc- Z Q F㶤"*}kѺ> sUvM֥07W|;*pAN͢aV~u@f>( @ *W+pkX~Q& Z%g+ڻN0rV@j/ ~ZǴ鰘nz&W9rXnI!8 'iY,a"KlY~cxQuk'U];חdbj}V_\Abԇ۽ާrG\ս뫏KIUYj:O S~] 7ܓuƒ&vg" |OL,vj'hU5Y># = =yKw-KhvH\z}k `[mYaˮ sc֊Kq}XʃW!#6-ހ|J_L{U5U2' EU !]A:e]|t}\Ru9z'&~oVtz'ִ;QUQ,oa|;6|nwV(ZKV-/f? fbXڅw5_KWʬk;7nCj$fK_N)J_=M`-YfѸlVN)°>>M#BX4%/Fzܿr`(B/jG*yzQ G>~Ճxō( F|.ꪸ FL{:ADi`9aW5JCsi{T`3A f6sl;t~_aˢ(HD?NQC'[Lx7-珛jV9,˹k {ac Q㪞 `/d@ U&@jgoYMwRl `P#t{U1˽`+Bi뢎X &u O2cn4pՓaVXS5|,f?#p  _y,U#EgSw>GPXL i =Q}"A{pHAi*F~$>MiYxۓ$ ET,[6RR!}q>(Obsd&`/Vendstream endobj 84 0 obj << /Filter /FlateDecode /Length 3566 >> stream xڭZs6_P3C Ӈmri3wo!Lh뫔do2` b%rM_e;SNM*v2-U1ZScv5kzZI-{r[$?7K $_+TVeAW+4Y4Pxyk|<&/T'b nH@S,OzlٙR.UF\'|~Z cQ̤JR4/dVeEuK&h%nttfx}ŝ؇rW &p&-؏4y6Tf]pnǴ*D}w({2'\8ҞńHZ$1pJ,(H#iޛ8i;yڪkp dcfbIN#QYZeh[(A+yZE_=LcA ڒO n?Sg}D9θV*h 8qvK zWFM\,OM曫gWnHT1HzpB=9UZTvR8LUonގdc︻cDF28͜dmf?ݎ4Ja<2=* Ɏؙ1=ɞxp1 Xv[f-ӢLe3?W" R b,A[ w,- $#d N "䝓ʼnL bGezK`mMDVK&A@YPBD8ZJlh;?E7Z("hgY['n:%_}S .[`}TlLDlXxg4Z {i$H],^1mdLQClDH,1Dw||-;##4b-̑VZ/20I#Rm~ېReHtm'ӎ͟DtTe}[[8.;Ȭ!jkAr@sXT,@NVq\AD,A8W3 D2iQANĔ`jh|^R~B@B\fŀ|>+:ݚΟtڨ<Ȋ -aa,xi.I)i'jhb v]|8>WC=ً%AF?ShkcĮ+Ѿk p xȆБS?{xBG 5?ϴǶB^Ò]##fI{3wlG뾟\+ I"w!*:ZLze4Z"+BfHά}w4$[HK\vw8h]/y㱽jijY`nҕ^GTSK4Di6"(z|K2L.J_``_X |P;Ȩeo 1Mz`kԇ!f:`E 9T#GShWt˷Ȍ)ن$?"{$vLN>6{JP$V&{f)\rIsSr:j@b3eygT~+K>x9{I+գ(2%i:HRL҂e6g7&bFU,vHWR`Z 5fbts)MTUѯEc”M#!L W%ˇ9 + NL6,2FpN 8ޯ$NQ>qpMakpY( Jٯe#YM_r}2c@3Ao#C>&FiZ/!Rc'ՠ#vteA85*>GMVJiLwsY[ qC-HAO}iUOCxc ,5]x0g&or HTM+Q -We^63/Y#o4#1 1Cm ػyvOFK W]X4\@KTy.J)UTy 0$1Q}=E+6VRc ]FlP õQiQھ)>B$v~ (5G{U6yf^AW.w݈h@<u;OOy};EG\Ūr+ڿ$op6/Hg%.Q%od>ZǵJ͢3epo_"](>NGg?әMϘ yUũn LCKݻ8 xd+Fڬ0׿ JLe㓦  vQRK uwW '>M5q9^/|P> c/xy~+w&/2Ɋud__DUU" %7>/9TK.8_Ǿb1r0~d]:Ϋ#0WG擼*ǒVnF<ɥX `Ky$v ](|Q)u5Wޗ>F#;Xi_ A~N !#z9Ya]ETrKA(i`\<]-ľ༷$1_<ĕ'&>,NRuPɃ+%\P8iprK5DU4D*#q><.uz1{8v QvR\J8UoG)uԺ pf{%uoY6l {k~O|l}4lAX\ JiVo"\֙5r]E/{`[VW mwQSD.[Sy=V*VAʿ0+MyL"*-& z͍WDrXTc}M4Ozz\f1݇I< IxMxt\"|{yB##GޕW|msvxn&\XegAɑIut1&y>NPq8UKGGO礗TOR-U7Qۭ/}+t Q"*Uٗ~{#@4/ɧhِ 4-O^Zk e ^Vny:Sej S @6aqY/yGFvѫ `EO,Kd8]~j=GK95.leG!C)G r~c+)~Rֻ E& zδ>eȴ9LD>: qaAjMXlgoNћ# 6yOv )/?۹˪wy?\_zqendstream endobj 85 0 obj << /Type /ObjStm /Length 3209 /Filter /FlateDecode /N 82 /First 716 >> stream xZ[s6~_dvͬ/$Ӥ6یYKJRN҇%[eqc \p%8&XL B`J-gZZ f`dVM1x3,Q ?ԋ2~gyΤx(_ 1A/;e8v`@6P >{zP0=yq$KJBO)@, yU:/TL犚GQZXJ2m,hbJ3 }2L1"8f$ъ £rf )4LkƒS!kjVcyj 9ޤΙBF\H(1Кhn4sHc $6x㘗\<āB`Xzgѡ̓IAfX cVsMXOXD`K( 4'AB0$;؛sG$ S(84'BJoBTd*DF5 ˵?le[t,l,+/Ayjeը++_Y0{[NN3Lj2dwHZd ~`O2>@0'#<@SqWX4b>7n{2"Fr.&mRjJDi&%2e\U}R\DKG1.)en8<*@)]9BJ(kXI,@۷؇GS~^ΙJ=#ɐd֗c0o*=//*n# !=H/bKoaف>ˎ/❢-#짝w?}D5tvgE(׳WvucOm@t8}OJom i=k~/7G7Fe%)+67@IM*Sew ?A?1h9o6eUd=RA xvwVB`%Q2ťGU7*USLϪA~uYb~+'{Sѵ0bL2ߔ>^5wDw1+2K]~~%fw:Rf_t/T=ٞU^urR6dPdRҞ1rTt1/̆5ҶN f0/,q1h >5%e1PȆFk+HOnX@hfQ1CtE칍qx.KxYO- 5!^Gu4Bm5ʲ2Ə2˜5F;.([9w z[M'`Sit5]Pq☤NZDK<頀|;ľ-V0#t!\>(z֐)VƟhaؿEL!mL#ѕStgXDh%(&`pyWtM!s!{E]u&d`]ߛ10D^hzs7 c3R[ZHRUFp?j9vu|uͭS%oUzC.d0g'=JX#ފjXjXh 0X+Y%%"U"C65e9Žv9T7& n0*i-dKqZ#̻g.*j-a7PЖKzOA:?ȼ7᪵rW4@i@P @m0ݹi+Or53$;%<-c-n{Hƽ*͍;B-3gR.U\~gCr# 7՚D76,.-/6 ,pޞv @)(H*W,$ocnirg龹L25"LlyM<:1SMzX ۅm)xΧڛwo!Va#">$υ|5~[b(4ŭ"[0[$뫞ut+v/MMMnNi181!s5yIm1y{ͫ{L Fߔwme6s#<ۿ~}[@=zjWm&A γ{ϟmWtm}TY%dv[s:guAm $͚ #̤?.hXB+Om * L>27!@Ha_ "0; Fy٠8Ϻ,n('Ny-xMxQq_YAm 솦"V9քȣrKcɠBH7SF]5ٳ.>Wdb9"ޜN-h;G>y7hbMt6jzkV|mR̞G[O>0 s6zǬĚvM\H`jB.5ŧtܳSTsEq<ҞvQo=' !]m_VˀtħG7g%sJDJ'Za\1B,.t[3ԘnI}Kڷt(؈ǦoЃ;4Ѥ6TK)j?-]$K|5Ejh z&xTD&$j5#)'yo`Re @ʧE|zAf#w \YN$ 2t+Sv.X˚H7)hC!Ƨ!]Jn㱲CviVD7#ygv٧l ,e'ECeg_g$I6&eVg5Ӳadm֖V_.Κ̺uvY6sx! 8\:|m8HVO4Q Fw]P[^+F`{r jv?2da }H0*Zrq$p*OL*OLkLLX-K 9у8B99Mj9qUhⴻ,%?oqôezXX[.h_4%nqѕ-"wGwK!> stream xZmS8~BB U$THn+̌c=-0C`^–XjXcŠ*b6V a ,xP+"Gk-u]FOE-ϢAQ.5,z(5hiBC 0 2E TOMT[:R<{wóSK?{y=`8>m$b@[5|=0mLp<5N@AzMKN1!0I[FY ?F犂 jШgO\@!0{)YUwjW1+)ZeyC )zXTw\CkrS ϑd6DmbfFVoQEȋS}rZ$3|{~(f ]5W,Y Q _!)%w P-e;)J>-ϏSW0ߗE>n8 2(mnQu%z.$f7 7VMicmMM]hAT86Գgu+ܘ(ECu&/ql:<( 9O-^Hhha^!!HSm9@%( 9消^t@=|7oiRܴnn?~j b+ocZO٩* mgi7bc9hc=,"T8Gw/)O۱r\hͬ4Y Y @ AruE%R6}Ј>c2|W)p )AuJ7(adW&`BBq{7{ \ Ӷ3HIw?(O4c@A2jZwcA-~$B.?ԆGLHB^" @}~e2kT"Mvc:`1ƴ-|J"FW*.Z?Fƃ2H9W,c%ŘCIJt/UzL $8m'lh!GشRy ܵʸ׭U>ukۭ>]ո,!Rv[3yF$$\; *OD0ޞ 9j񚯇L8k`74RVxPv{qh0AӲPyLomLZ2c*z-/ɂ.˛]^B(Yc7xS;T]=)Y|L]:&9ۻy̦'VՃ¢R͊z\0d_˦fIGǏX1m Ţfi[~0d)&u;Y̪%9/'m3gq[QCYN&9ŴhiuŠ6@frr~ZYċi3`2HZ>3uXNv~UOC(iZ%LwKIn$B?7RrGB 8L$e6~v̵`:JRwLyL_f+zNvsrRɅ{iڗ7JwEzeil~:wdUenl(N-"| !It,+_i5\W@e:r({ԒRb:.M ~b> stream xk۸{~O2։Hz@d@K?(m}I}ENעXxMp3r.ϯ>*KVEe^ů^UUTUy/Ul߾ u8㣠(u"3Ï˕!>_W2 +ctneG=GلZ]YU&,ety <<<I<7!V2u taRXKe^$Ͷ".ppw@g> 0N7 ;괨 vLοuY /=J~U%`+Re&zneȀK''C~6+9gm+e|8nAN:t=1>d*G|Ad]eq URW$#1FD;>ONs2yt 9AB0;$Yo<gQM hoG^'E>(MuQ;=jr2sne3jG|鎼Y 8y8Ef&x)v̡ݜ397w~GA_y19'7o٣R3%w 8@#P3adZhHh~7:^wqw4HwFM xJOE$z'ڟZ?%V =7IROe}t."V#JMa g1ZizJ @T<; yZZ ifApR ib(qɺl>VQ~4+kli{nEǸo=B !/%Kgkkr؟w?"*&)"`'$-ԏ1TӪ< D``3LT,$ny\EA$Ң9wjYs,E ea)HΣX(qφph3"Co̍뜴ϻsي{X:GJ$h@kJ^vk+䓨\~iĺs"__%T:tH;^'p Icה*UYWMbqÞɴ('^dam{t5ހ 9,kz֮yvOrA. ZƅŌ}2&J|7҄[ qlYUt>GICy,cCK9F{VOA%Dĩ:O<($7ʙGhz\H3*I=t(ǵUŴg#)5SFWU49A$Mujd|L*K2'U2=+D$]c=,!)p/LuDF&Ċ:cQrpԔWuE&LYޝ}M.r`IvpT,ʓ˄Y'KR%:ZpQ(/9@7+ {yrpq&dU`3C%aR\(&0f DCJܵsX"ʉH #c,( +h|xt0pxpA }!237\+SԳ" |`<)x`ETfQu*@h{ۄWValY0KB_uBke9u=Aj>]g#iqxH_ J)PZ3i贎E"_{sTǮCP͂㿫ƃW\uF|#IPLx//=kpWpbFuRkw=H[Wo+3M]HOy,o9Sy'ҬTWl+7ScL'[S[,\~_|$Pei)ݩ'p vwQD_OGnE-7o?^ ɆkͬmՐ |%p[&XA(:'iI^vW[2r9 j%>P5XF<ތYgdO&x+#CÌ!_Y ~k~8W1̏*ąόwUІzxyendstream endobj 252 0 obj << /Filter /FlateDecode /Length 2287 >> stream xYKs6ϯP퉪1ăGLTmcݒhH,)nU~ (Ö(@@k0[\/|Xc*KVvN\y&_7qxa2F'ee[Z|y#{e+`I>J'o?g:Uڀh ͛Ld]UӪ(`}Ur;‚: E>%bYe]E9~jr4WR}J7# p̆l]KI8~.Q{&Dtb1Y]LU&NP9c~*UX[J45~qDn er)M~$iAe+g~}nEtv/CKy FT 뮡 lFH  ֖FK;rUU혠d&=)K&VpBًUWlH̐tHد TfwI|6F}F1枢Rx \~=*-r:() /M`-R]{CxO`/mj-E /˼HO3'S(G]6n.v̦s ;QЪ 0zrq\J2~*\n2!ttB(Opjv&"pA/ UM'hr5 f)Vd4XhAn֞ublM}NM79o6fLMil:t8mP;/lGgz„VuZiӲ 7״RY^EPFN9$0j﷔DoNޒapǕK}MڑkI$[pNi7$FDi+V{Se $juW=lW*Dh 2>zG (\IZC5ī!͹i{K۟$%%k U?XzL!i8a>_!0jn5I;1y*[gx]M_IA'n]:`2M%?P@UOQ ?s`Av*y3ϑk!/.r 䟺'idU$]a ! ɉ`kuV: ;pE.rTfˤwO& #/ǖ*g 3CO5OrQ9 2=~mFVָF\/'-*|9DF&fja>f4Z>vC2}vO>Y$nK$8a/CtK.KkVE>.[8en=pϸv6VNSŐ>{;fO24O'ʪZPJ+Ii; Zx&V`MxpUVyakʱ# o"x; S?UO(6U1I1>0D9;>tz(Q} *cnKS/Kk8u jywײ"!Džy./$TW0GV7 -yxȧX?)֍Lt@Lb /o]xd 9Y:n9[}aT[aK},-\-U.eZ),-LJZ6CǠ8zdURlYز SiKkwp#ϰISRP9m~;Gj)T|erի͚'v -Sq^࿯$;au5 U.feRߥlߢ4\+o1郦`՝Ľ3n\UԂW*> stream xڍP6 J Rkq-R(X;Rtw37I噼4ZRV0K<э% Qrx8!nP_btz]+(/ۓLNPv< 0P@ %C0@bP(20'oS<@) dPp;koVN:gw_&O"d6`7?v@k{;P!~bsX?AO?>`;ߊF@ rXm DOwxfi`P/+?y||\ SzwC+5 '_c0;+,L1 (ɻC_Op_S=, ZNi9m*[C@Nz q\!;tON\: MRv|  o?!>i?^ 6x"'??S$  8-AN7}rNA / F0wy6OOO# |/_?oL@1c={As0H]]HU'ƈ$^3ϜglT9A+.R]krL>- (្5Zo}57ZgLj{FwQQkKw GhoWwvV/$UPƯ/1MMXNJNʂ3u~17HȊSc;*mndTC >[i$3>1+b3}!_%}NR.N#00Rףҗ+X'_Qi%[~|ޱwʉs-/5u7uP;SdQ47fV~QdqM8@MΜGEcWЇ]*n߰L 3M'+F* Ps(&oDgÚ񄮗L+tҀ5%P;ß:}g_=H -D^Zh+ٸ.Xw1j ob`KVE+5rcә3`Ue$W<!wq\PE |4]Zt&aGT>;tW+(˕oW7OW+P;J٢m3.aSxNq|9E{HձKFq+4ry9)Qϟ腰m7p;(o2":33CE&l(!>_FIB*8UkiB?+,BK?Pm%ʎ [K=:ى<*m֐"K\ujSnkPKoVI1$OM6/ A$G r/8ZҼ 9}x%TG ]&CٹN;^&lŲS5V1)^ yʯ} ~vZf"38Ǧ?ڧOޖ*!vg~@ #pO_Rܪt>d~qqMU῍u@:'-AnDD'Ž.EƗ=\vf0L+8gLDՙ Ou$VA]LOx!DL>PdG":N.ΘE޷-h1# s`[58 cF)xg]X3A ,ֶd|4kŖn|Y;+b T)5hr<3cĦ؇i!۝͖t|뀄M2 hPU+l#mI~uu9;HGYы:w Tؔ4ƜI&!%Cj&d(rZhD\ 0YxRsXu,zfK`TA0xͫFOz^m9r(|r/S&Uytlm~g$A]|\D@V٨#7_acP3}iqۉc .kE/:ZmHy,HSl7xNӧ*_hnxsjP:ok41n e9;v.~gS9}Y}f2ӎ\Nvu +<~ʉ<%FH?[I-=Q.,I/+2Eٺq7މෛ[)5 o8<ُ#-sef .tr`CpϣaI2lg,.|7eKT rEĆTu>fSS|^g6]a_/hSK0^K($<DC-Bzj E%tF%ܗ@ѓg/?}F a>֑$8tH |E--@/(bx@ :6Z޳akw'Yg#xWzy =&q~ѧގت+驭mePS]v~֭f\rԐĖI#OB_i9(^|4-µYk*+;~#CNK$0nXcK> mbW.D +T]f2[/$ޏ m(5hįCO%xWmfl7IwCWߢ']rM撐>EB. g`S_-'s#G#szW1u{ ,q14Tdk冘9x)ܢYC<$,lW ԙ{=1]]RȞ|/˟`@"(UkF Yr{uFΊ={j4"b'=$i)v v?u{y 1jN#6T<5ps86Ͱ ⺺Qq:< ;0UjOT]%c,-Q&*2:r]x:șQy4E.jzʾ*a{)!ynə 1B ֔MQ14~{S ;o1=&,#J+%%qly$"Ē/E3a|qӺĵh[ѳ=}?2yb< W(uv)=ޒLG(<hHBdAT&T@ ~BB:H?*H)rh7 ^Ns;/u.h)^֓qWMb= x/D'N8#XZΊNO|m\̓vȏ=8m]˗;snP $K/0lDJ?U򰗛p*g߃ḘsJ&'X0 T $jB,KmQ2,ug֑h/-%^_#$2Dل؅pC/4 TnÅwCp͝oHgeΪqo,]9~wvH!'lFrʿZ"W [DqXܛ r-nʡ ݀"?-* KR%еO7S]UXx&Tܧa77m1htF4Q.V)h_4YiImOL ։(`$\k_Y~ґ2i&f()IpK׷Û@ȉz'Z pqr5 t[q\/.dg{\*7VF7ۍ~r7!g׿rQDjĞ&,wV'qFmbv? .,$oTiLxV;LcStO 0z9|72N |N=j!'FAn 4փ$}*R+\(ʑxkV` I,tIZvɸWsW/0%EkRa;ɢ>;9ǗŃZ"ob?D޾AV7mz{ }b@˓|j >)Hϳ&1vR/(bc.^h9ƥs.{f^6 ߓ2.:${CItsww:t_~m=AC*t@ IPfDcY'z( a}$;d +ћ׋Zm^_]Cn}32%֍ǐ[1fhl .qh:47hrgiD- be~Wl9_KXx_uc٧C,_MJ1jxSQѩv>ԗإMh7×FΝ{'`$y{j51h97W$9 X'YirM spl J{dhwVf!8 ]9:k'xU}SI.MLdx}"o}gs,]"9xRFS;:wD Lk9 L'B/H.rC%j0e[v>r7ŋ ~ɘbT~54,(i(^5)q[$rg#/<+Scf2a6?fτ*{o2 [% ?L+֣(k6@0?&;[O&^z|ADej&䴒L5ou8? ؝nS'A\f3f ѓ5o jkZ%$2Dk֐#Gzخl@ҦC f>%""P]dvg=ko#"Sp$7N#4Ɋb!Xҕ[8iiX"ʻ6!0o$9,MRVT :i*̧>Ar-yDD,0[U0Ȍ>bKxd"}J^%3Mo ֌TvDZܩ3*r^ \N M}5C0m6v9Mx+WM^[cteEC_mbQ0x!|?L9B.&0jӷ%ї5=I$<c#/@ &95àQn17c)wcweپb+eDMwKjf_:Jo=9'f 16Vwoy 82^0T-\GZ h?8EŵTxk,>ƷTHi2]KOeg4}7:cF3H f>ǒĦ_x~+zp|&Q3agM}~J30ZBB"qdWkё@M@$Wq_DBҨanIX(saG,z/Cj*WyMjxB䐗ʼn:"\ vurO V "js/CAV (,u ndjG3,h%NG)[\H9;m]30~5syjNݡE}25Ë<ƣB.tB7i7<6Wrj=*x:1? İQ=GۃXq{T]3gSqWT~"|䀭^ԭpo{[E^^A RMWȩUJܕtCԴ^ 9ų#.V6qt#-5?Zغw[!< 94hp 0:J5t)7?=+uJoS#p[\%Pԟ9;{ZulOJ~9q(vg=#!*uj=z7Ŭԏ|=@T ]g?V]; ==Λ粎cxդ60xѿŀ[He~f+UC;oH/9LK W{9cjM࡮Ф ??'6Z&Z_ xCJF{nw(T$Xt̷7!滨hrC&ϖl{z${8^p!-hE z,8MXj rzl'I"2a'8|PG$5Icv;5"F~-(/9e~XjM|Gjh 3` {4P$;63?V[LK<ۤjOpLk^jzcht؀"b˙LFC/VJC>]-x?iATfcfC?CmmIelb#VVg+ ]؏j3gE"w08PP#~;TE CswDGeJJZj[k`T`2^c&km4kToH6 ?Aԡ'b[%M!&/UwP@c1k2uLo6MeFpDR4T$#tims#>S5^dzwD2,y4W& eSM8o*;fI8(GA9́XO&,sʋ*;%P˰Я쪨ʏc޻umM1p+o MԢ hϿ6I>h_%,Ɇasr'm{k] f&K/J;?B Wb@Y|bImI1DWQ%gm`&n&[7]\Ϳ.󊪉~龷p =diL.sQ{)?(,1N[9 3yw$޽嵤|I\gƸ[Gqx^I0EBEYPm}wcFN!v7ˎ0,yVWdaoUcL[QV=̜A8"'A$],o\ZkQhoroR2 !_;&0.`y4&Ҋ/Nt4k J"4i,v#i=COz ']i)f9r* xisҙ8<.Y˧'ȷtjUx= &Ul%{z$'jEX/k~!R¶c>]h3E;E +vsE!A_,~Z=$e6_|F;fdwqBfQycW.ȑpHJSlGt$\_t.1R)NJ43u:^iN-[O,g[+ /蒗V #iE\j脫XoUX˶ݞE IiL5Xj0|\|&Bnu,7%IY=bAbˤ9[8h3poaN&#"iz[Gy.\?- `^:WWnjh.SXsl3 hD׻FENHT3m_ye&Z˶c%/CH4TD\ǹf,&cXyb3p.Xɱ%J=7axA5CMaGӹAu.Na dI[_endstream endobj 254 0 obj << /Filter /FlateDecode /Length1 2304 /Length2 16293 /Length3 0 /Length 17641 >> stream xڌT%z ߆:+O+.G@[c^?{sQ{+0sڿo?ł$ {1d@ot=kI39㻅+ޟI9,=rS_Mx7w~8GC1X9cnŹ,>xG0wzϿܹ{\~࿿p=M@|!ֵ!߅v|La^rp}DM pNE]Z&~>n ol2858XnОϋF d x E+7R>{G@x®^5,s Cz^`EI<) !-څ,ZLI [/^kjݸ:8hӔ"2?Kd¢, :PkH&7p EPL(DFmUs:qr븊g^X ;_*2m\NXR t$\񛡪/jmt p;9)i{}*6F0Bfu%Èj?U W~䄴A B!h3;q+,CIǺ*.Uսr<~ojKoSSQZcF\ao #2!sO|R}BggBɯ}F<wJ[\h>Z XkQ2(ij&a(N;BVw-U쵫 1?F98iZURGƯs1Cve.|dG38Mvϼ?1FLDJ5_vs5Z2T2yut2|LA!5#Azva `a~iGwz/rNP PI큊o~pk(|!ԣT{ā)=5HL)bC/%*z(%!?fҟ.пL^E>q\f{hꦍ~k`^A!PS6,Pxܘz_xx\ǂPA;8[>tL,V2 l=cC6V'Ч0mp AMw CHs]yźi9fS@S̅4'yq*koÏ*;'~7P+ĂP@N[]^*zRc*.vBn/su8:S߸6q ʇ>@`'K0`M΅ä>ۥ~bsI*=)?_R;!3SN/lOHSJPz3>L!xیc?] $QK48]lyg✾VNk2Q}U-m8>6A]R\SТN_ݐۊhmfi,&x$C1y?.`y4I1 ǒU [H\`$|t{H32I03uPiؿ[,):hF8uTtrES;/ؕݭL~}CtQ NT0 g!usٽCT4"3|h ,16kjg⊪9`_11kAٕ@=𤃲lw^x;){zL˷OdUB.P# 2C&? u.J“k<NK!Z$ sfA17ǹ%Yx`2|2tzv? dp8qF{mpcyHaFb3=+NwoaSOUn(J`p3}XTVUm`ۏ,#m>av0sq,N [ȍU { t7n$ sGQ7gV8RqL#YqSlpMËh ?e!4sqQb@Y')Wl,~N" *O Q/X[^,y.Fv=mz̚s_vۯYȯjH5' ϞbzUy/c糜,n$ 88“ȶ0uC'i0dwT [*) _S֒ \4='q+؛ei>Mj ̏u2! (BJ_L|*I9dc刕6LrtLg}Oi(e-YQ֪~:s\U[ ğ6T/)`G!}D-_}iED'< GSОyjLs5CXg4.~˂$LZOps-% +lP{Må534-% b<[;ӡ8h=Zu@HoR wzm`}D-)\̗]\)f2qr5ظvEbƒ$ Lmn2~bӰ'}!';E=z\Uը'v~I`{OkY?zjW&0J勁h'J*+1}D"k]Mk/4+*N*ފ[>}xݦmM,\8m1򰲯o&9PU-E/}F-n;TCL߶E[(Ng S8 V4'FG"ӏƭT}g^Oe½>7|\P^!(2cEN률*#I`mݟ8<~fFmoJVRQ:.FF ˏZgpݭg.#\cLbm)E ւ ˪~;gW]1uKu";aIRp<+ϫKskwe]:2_Z6η7-JT$ ա*뵑{BK ޲dQ?!)V |OSΦۛ_i7Ep$T>0lӾ6x*=3ZQ:K#"1-7bWi!to ̥ fت彜U? .?IL뒆z0c}~AHY'f A<jP3ahLaio} 9W~v~5G/ 2?~؇h|ޯ4.놦cɝÀ gETD4msB? Ďՠ7 #+p!PS;oq'cGx6.ճk"/D5͡GLrh,0ln@xݘ`J\=}ZT{N#4%0Dgb958^էB̏OC7%'FK;<N}c$a3-q_=sM6Ã[[Rt $$]<\ڳږֺ2> Jh|OCt&i$j RG! ̫Ǯ0TgwjRPߴ&Us으S[QڣO҈iE3ɰЕ,] c[ D`k%M]wqbROZ{$uQgБ-=tsQa48dK$oeD cI|setxrq;{@UyQX_Ta~-q9 /+Ҧl\m52BH{| [jP&S'Gqϰ6z7.7'2=­ Y?1-WOcKeSl)ɺEAa=fʝiS kkdcȔ|M [3Of-rŜ$Dӷz4|G4%9j=Y`M1`ءe% kϸPQI~'\/$mc1lj΃t M<ۏICz5zGScabóE_p/N[etԫKSk.zd bC-f7rs䈼jo=TlhCm:fqS$dE(IyCs錔ő B#gb.ok0%xߗioE v6Hd/4ub^Հ]ف.Eչ~u B0yc<&e\.KE)XԥσEۈ#iS{p`.6ᦙDhɧpUP6̬C5WyLG{2tk.Gk _yk_}HdH4rV8ja'(˛ NMWD<>$)zcedSPի<RWI틩;K_!ێ8aE]3ƪ_~kTг`ѱLlDŽ@W7JR#0o% oU׼h)Fgf݈ tRrelV;/Aq /[mFܝc)CPTDP 1Qya߸|& }PIR׷/0&!ZqnY6<+ʲhh[kd0#I&_LYW- FTtս8Jnvˆ DpFOð_pN8~k4ByTٙ}E|̘^ <@19XG5(*[27&˔ݳӘ}KgSTȁ>Z |_nWD\2|,)u{L:x(G &ɐQ7BmepM{ Џ(/Jq0_=9}C8̈}   Ǟ '_{担ϸ>HmLOC1f+zV︟Biи1kG/C^m~ڦ/O y.YW}cqB4 m!w[W4r*^"~0r@g;l|Ъ(Ҍ͇kC-dNCƇf+yڽ$bl&0dƚ\tJZ[ڶu8QX0@Xf [G ͺO:e 9UR28ΚMʎFىI<4h%ʳ&>jɄr$ ) F[9d_!Zْiq!{D.Z oX9I.\W}a'T]~D~># _&EIpg磷_!Z75i:oU?::h3OX´[ɧ^\UpWmҫŶ;Wu3l)Vk5%p<+؞fN,# mbػh9(1 ıP_Gs*hv:1E5Pt:)`lkfUY*`O}u uه|e~iX4WQ/ePݿ>9&$uZ{`2Ge h -%l)J$=_0ec{!unmpPl\ЙJ}v54|n!nvyW5<><\xÿ4#t$e`mǪICG٣,1* [)X}Xgq_@S3bYCAH'zf<>5{Gg@&2w EQ0}?n1(zhiJѮdoJwWPĒ;f"jG*ExoS&XAsO> i5S^~G.uJ~,$@[RY|n.qk\]7@ YS/O8F#N%\w+=a,ۓNfR D a! 6 7+] E̐Hhuq7zT?^m~[nvK6a4bgfXo.@1uPw+-Ǽf| !,\ߧ,M9=cP?~U|LUr[%B aJmD)xCvPحT qWqɆJ-ZPaGcptѧuɋpi{R5 dQ K1a0heO <uhZ=GtaxxTO*1Lk5jgf cޑVyt3e1'Hzة2-CmޫBծoxu1#YEBIXG,tߛKLZE$ojzQ*;"ۜ}4Jϻ卩U .{潍m:c6)*̏ cgIeǰ W|>l Z.t4;ڱّ'dG;mԠm$߱†X1(vKhF 4FX\wr!8cof:,>.φJu~~яh'vh>ߨ nfysqBU1ë6'c鷕ZҀݗUDݻPEs8h:jSLJ2& okA7~y;[ lndz-/ KuCS<|U9<7UfKG~dhz`n| w$[es& $Ȥ"V]u8ld[Űȱm)،fyf2S$4uPAr 9=+އ \X&{G6xvWnd)P,^#hyy%}Y;6K l.De})qQGv*X%.>! XY wִ8w1r)D m;8gmYxh|! Vmr2p9sV]zO"*}XjCF2IA==]|@37A%gjMMfu+@hW2/BA1]W!dlS@̺y!ec^[IukFgVZu| վu-l l*Lo7o8"Ap聫* )|"U}C?}G'4b'2\JE9Vo/θPONe,5"lGE$//<^9lTU_^sy{f5<=InJ+vvBݿ"bw~ f+̗~{/CY0`,}(0yEkxGT*R >$+`ذ4A:\~Xe :)F=תTŰQ,xMtG(pBH!Y[tգ=5U^DW|!bCQTۢ/"i6_lEhjrHz~ӀAպZv䪀rn"P BE,rKOwSXse īKG 1s"#*O;c ު嗿Tb|Li-; XNaKMO2~3[jafVXkHPQjMj\8 ĖGde iyZbAu/%j_qWtjbN\WHLUx|KD"w3MƄX}2"&"ZFYSn ٿ|$jǍ ,M9Qm҆H(@DF[Ԑ}5dʈbz?g=ܺgi)ra)y5b8<8V^TuEgjI ^!}o+X=~0 &r$ v(EV5>4seV>&'JGbM>S(faav,Hh\|P qrjmx&MhAؽ_V=4U Bzrv=M8nEQL]?<d> /DFH4et)AoZ(A;FE9YSN,F.[-y8 z%-]@$;?HNJ֗]Vwb?#A/Zi/OS.!e|"_"R4Ƽ}ж[{eBClhb85c󐲬 /7OB(RrkԦhI 8/VA3D4#e[6 [6,z퇮j% M{$fN됑RZPnq&cWr ?-ܴ je<,O Ji #( 걟V;JQXi=KM΀z)B( c@ 6lW(u +'MpoBqlC܎aQ6p 6h֝!z<{ >N}6[kQsGG;sM atKS*3x9-WH_npQ?TZtS:kRg&M8Z}kTX-FϘuؗ26X撯y,!KO,AV[/ rV$=ij"M:6Vsv+C3.O?n #zVqq"b6gA*[&N4Lq0?4Qח|Ll0)9>==K ~ry \ag n\/k1!)&EP'"kL9(lR 3("sAN_t>6/|&6@{F:TEyh=esZNӂ{V":y҉8^[2Qʃ]U e(wT}CFcS@MC0 lȃ\H5mm&DhRI;¥+Yof`hw:>÷w7wzb Tu |N˂Y/(qyDFj/i-/MG^ ,gjSuMMڲv2sk>x,ӹ=xm*x,ڙer!ɘh@sobiSf2 i%gO#/)0Ul3|Gә3iJ'" Ժ f8wϚlQBD1v8ц U>]Om0oےhoFtąJA->v xk$x2Tq#7QCf8b[Ӷ?9݄ C,OE7B}Wh#@ԁ^?GTpU&[Z+](cI*YTYa9SANGYDɧYF{h؝_õ\f_HʵrlEȝƙPN[<{ lf} IaβpN+MDc9hrGg;*DTZsdRjW:޺$s# V(:;%Hcd[(@[zefCx s_s܊OTf^PyhϳIa|Pr k!`;z߲jv@I&h8mhd,l{(} d צ:v{PHcR09$ʟ(e߷E/ |H6f#5nGJAq"9iϸ|T6qJ]:Y:k6`GqW6G\ $˵ 9?(Wע3ҥٺuG)_U8 Q(8h̚Mw)0`Oiyc_W(yu9æY0: 9Z}"~ʑro8z?RԖ{iFS6\&^UƒԌ'&p[N8y۸: yUr]KϰmO4gАmohZ=K.k&3&N_rLtU.>0 > 뻅ZEx)e[h[ؘۘvk69;0G >OXZN .Fl6\ye6Sz񎸤y!z4O1nEEmzvMM"¶hi1>ξ@=PhDIht[zZK1bQԐsj >\Ķwl\'0OmوK,W]az~kO12G/Ը\tw~p,&)~f22)KZc2D&$$Y|.RBNIF|e]ىTpf^+7 8J٤X|Q+%,X3B*(b)_g^ ]fvMQ~M rW06^N&qrYc$B bڂA6+jhi{[͊g/db,ehc_Nƞp}UޙNپ/._7l)u`)w2C#[ ;HFـlƙcӨCtf.tBZmsB' * #s%pTI> T#2ڌvQlF. Ǚ=D#۟O'G':iLq넽;Hs7O.C9dۆ2 8歜Tg˽ONyb !ӁԚy?+!oa&,:dh~r=iRځ KӾzo0NCQpjbJ?& ^E ]L?CHkmJ:LPl|aDr&&amjs~R4n_Mc6q2hীM+^6^Ktms!A=Cmw) 2Z._|}yRM77S2k]/zlBs9yXʑTQ:u*( WaOTJ'B< cOrȰFW$ؒ獧hC:c}N@v'ԔByaMxFQ՘Qw^+/U;<kEX츍eԶ&Ah)_sm<(R4kʞQBLrg,ܹǣu"IW))½DD/5&_cKC !70v7J_nhn/Z%Zf aVŹ_͘t2avEiDs3vy~c\"i&+3tG*f،$ǔ% 9t^3B%B 7qi3J$p_L&g7!a*=.~~L=D־&s '" p i'QEv';t`8f,iwu#3VLBخ̍=Q{7bIћ*[yz.W&>ZFzR+8P60|OSs%cEQJ /ZC5|-F$r#Ufߏo8l["o<{ۦ/h$?Eal <[@wnDGU$4o~P̥)EjASPal*plx#JuWXoB]Ŝr| 1˶jSl{ˈZP:8Z(*~AQjl #rv2ҡ5vD.2,n=! @Z]c=rэ=h#aMb Ю.wb[ꕾ ZA*Lnrl8=dΙש8%$xj*70@6[Cf譫# wvШ˃b*^ې$DU,<ۻڵALW8­϶rT]),#FU >ϘVzXP%e )($bGAYd\{]0#a3±:|qt6cU hRP, _|Jf'Y)M?"[ g_\qjrJ6|q c"|=بv6+otljCa{a@sVIendstream endobj 255 0 obj << /Filter /FlateDecode /Length1 1530 /Length2 7285 /Length3 0 /Length 8292 >> stream xڍWeT\%%f)abn%E[RD:;$/~k֚9{g}ssfu-.I3)LpsD*R/  03?;0Pp$B?0('q gN60/, < _@$J qTϐ4 te9$ma(8@-a 9 uF,9.pGK&r~Blaa<;B;@P0!nf0?@KQfCV x`nU Bva0jr܎b〼͇8C6[Cr-?:@Qp;Gn/_enZa&!p~OAoO [#.ls83';6nSugs@ AAfB-?p9+Lc6pL n}dnwv+E@f/P7[ɚ\O9ȍ@:ަn z̑(_7+ Jr[‚ ofa@B?&ov@G?ad@P[[ٿřGB[};-vZ,󪆯ovȄd菐yM7__y&OVa.qzo:(;Pjl7EYTkvJSpבֿˋT[`VƽPHdj}Th"þDV YR{שNtBaxZ `yEE$ak6S^AHznՖǫIQgNb@?]- oya;=GQbR̆h ܰi7XlA~Vꧫ2񪟠@j56r;{SXw9T^=+M'|Q'/Ҿu1Bea-(rClkyx&h<7yh? =OAy,/R3}"+ڀ흩ȇ iѸG (h:v3Iū%}fs;R}o@Sb8qZg}j,˴xOԱ4('Tc3 RE;VпTݕlF_({D`( r3aU(eigAn>}$!mHTHS?T>qs%F2aҴBGْx$_B:Pvvd+OKqG`SE>:|}KT6N GGG)>2REl99 7K^ iD?4`*$ oGIyV(ĊIЏS?ҳ3LSb/pw,Bᚄ߉TPJ yb^-2(lh2ӔS8@ݧ}Ŕkݑ}ŚgA{4@3>8LkQ¨yu&T-|A_v[߈lb;3=u[+|Z;Ad^fx,(Rzhě *Qe\;STt*3pZQGDĞ99(-eEQtM$CIfݻc1NR]iaZ {嚦TT;Brm g!Xi U_`[g<۟`-] }dž 8TIؔ. BKMdeLkv(;6A撦 -vCGa?ˋ{p-!M#M]#CBMY^ao{x;4㱧\V^wu"Y׽PpoKER.7R&u1Ⳛ=vʷ 0 l4ھ&dz`tt=[M rPNz h_*z&bb3-.l\ąRl8tQݴU}⮠áU.,% Bp$}EÙn#ڎc;^{jǭ]6 X: ;"Bn^*JR0AQ6 P L ' ?L zĻm'^'󄇓BIcU c'b8*)qT|qsemN,!b@m\Þ]a]yԷ1g*Ґybĥbgx,潙+s=HZC+) XisLΖ޽ݷv,gñ]mQ'c-G{/.VA,hoK>R/jqPU`CMY4+c.xiDŽi96{aj [h/׿g]F[cOϗIg_,xҜl>4Gu25Wf7[PӘ'8RggU5Ċq5/Le BU,kWrwԤD .\(W<vbz>c !zsRSGv6VE&NbޕO )4 @x ,;GfRRL5-ņtqsEŋu/OZ13f[*\!vP3{O U'h3k^:^B.1vd_t̬lg hëR&V*p.W/;]{qJHj=,㛬 6~w WoEÜs=c #prETax^vۧޥyo+KGJJ}se4`[yǔ j 2lPbT^=dDZl!ü'JDcaQ$;sMƘ-G0bpYr\r*W; qQHMk^RS}0vANh_% gA{&PnNJ?QNFixNAXFGxT;RḐ}nR<XLT39" n#ς F=N-sE6{\5]o`}Z{1beҷByO5d$9"g%') V/;pu["OEBX}(/pX2A1crFdo+=YO.fċ1N 'Q8z{OVѱϿzޟal>} JF GjDTb9U4)#wx#+>N]^P"VNڛ:RRm*>$T' &μhB~[G >n%]UIˡش̍e5 3wi&^ Zop" Y]$ j$xQʏ&^| z q|)Ϫ+MΦS0J %k"86o>T]G#aEUG`4c萅!·WtmPEmXs >͇9~$j9Gnx(HٳFdʦPs˃H 2Y$7Eo]ƴwCge6}GWumҺP ZaM r/yu |No)iR* !ç\I{0EUUDaHdq.?`ٙjMћ0=F:N-X3?]b.TF,c0+jZmOcg~~I=d '\+ٱ5EXN~Qˇ Y(<9nb1@{@>!~N>̊mdWu%ZB/kToK,ID=NLzm3@u}{ڗ0sW{/;,|6cDl(@-z]uJx?1Rx gKY`CXHeX^ZY%+MǤLǢuߏ *\8j1JcusNC}p4od21|^va>涺z0 "fM:׷;Yԗ6eժ!Wk!)V~!nJ#UkL,y䵝yƶ#t'2sb#ei W-\W;6ZL b}u|Bg=Q㡊n^Zc뻵k_j( $vyL xJlx%{/cosƧU<+ /Bμ>63,OQSo6ISyNjIFJZ8gĒbWw9Law{]}eO37 MLOKVܔN6#STIuA~O~L^lsNt|J'T-XpA|r}m-cB<Qv^G]:ynfkvrюOAB?HAFN8^0ٜr*Ŋ,3js|lܡfK*]6qqV!ةtC'd]]puNܧw>U#`ׂ]lh=[%yb:5钺\?] NKkD{g-3KwLGR{T?.giQ[-(k65,mVQ l];=̬(E.炶] F$E˖pN1/Te{IV+rVB%Û61lv3[%_0[;^`Qw](&!%Z1S*:nm.]?ʬXw `R2eͯLĸ/?c 4ӻQ_@_wc4ݥil^)D6~7Fk{~N "6mϲX/w6Af?@|6n~".N6킢ʪIYgnI慌]18c$ѥ3Æ/b;Ɓ,=Vaޘ' 5B8X,;pkC&FUP~&)l^дOi>u/ dcn5y,l J]h1 3~nFmV\˱^+p>Hr׮gte\W{ɗo=y34`&]`%a6w dյ~r'*t VIstSг OҭJ`#s[]f#9#M&<]|0lmԭv<77v$pإQ|/j k[4^~Pp,R\ŇtIqߡ"_6"acE*C%"ۗ,WZ\ MNMрwb_^~?ڵ|6rî+Wq JYr3}x ؾB8PZW&e:0sO1?&j)~3;rGCoPZ:s9%Z<"eZ[V.<'GK=8AZh\sjyE?!WʉD܅}}.PtrSա;y"1n[3ZlBZzgg *gdugF,mwhL{ q(+s.4 y}1g˜MjDKoNvP]'{)t/T%\- ?Z<K' JK+$4}Lެ֠PX]|Juds{rS)P;禹f5VAфWc>?\a.d `lhd}R "ٛ!o)8qh0h[|L} bk>LîޚAƖo~~sr9cfG8ʢ/ :\uZ|tFsTδPz*-q}Sn%CA.ehAPq S> stream xڍuT]6]҂H ݝJw 030 9tK%H% JJHwԇ>>>}^k{ϰ1)!l8(i P$bc3\ۉ،H. %$)Q7D-D%Ab@ @@J^0;? bSB"a<8!\ow+ -0zv 0(!8Q(7Iooo~? P( j% vO0ty Q`$pcpApO 0A5"W?޿࿝ 0(@GUv`č? s~o PUoAP0_~9f Gyڟ2 ܜ:p+{ ;O7#8sc"%DEPw(+7evCod@`Л\@;:D~c?޴pX@[󡚚6>40OPB  #[o.g{RnHV58 7 p-"@<]+#UO8_\|0nu3Z7DkA`j7CwpAP;] W].08Tu@@a7cqWm*e 3_t7qnx)N[@ϫ޲"'WVE޺(7 )g˷͞eC1pS?>Dxb`لlG'}:sZchŵw>K Ǚݮ,N CPΞY"4:ĺ{-E0e1`OM*;wqC|zHvK3C4W{@+XZ|>1OEoqs8lIvO|+eP[EW^ݜߢ`h:s)apyW[[쓱F;T<ʕ& Ž 't} ٬E jU +?oORT ~%3;nsoQWԠͫĮL:$PMsh,Mn.K8wǼCM.uLgKi܂͞x^ IɡA=򳃢uHm/לF!B(b vA˥[]%Hƶz*n;>bZnqVxNT \q95b}_kH.ZWWk?U&'YRKUɣA S5 #qM 8 eY 󓽻v[T{4ˋˁWׅa27Cf@n;}ͷү>eOOIֲ=k{ih_jѾqoov7s I4)Xav[ԏ[[Xufryk^i`0Ⱦ9\ȤVYY`{kSONδ">xt^ 2,Il{$xZb+jt8=qf5ӹLmv+t%`י5Mm(D<"0//Gkhrt;)M^j@i=M >9~5P*I&WU-˞?<~X!91`tN!ԻJ $Cn w@)W7&rX-쟅RKG-4WOq.t?+fV JTP)ObsբDt3*!y ?CyZتII8nEGP?}rݣwG2+y{q]XNa;:7"5*@u9MYCx4z &X )[_I`KT_;+1SYVrDv $ -:/|SlQ@'[r*zw6KEj d1sճ}ۂ>YS^O;ȂحӔz,ekX)Oeq$?=QV2{w, ^Dx5{,9 8aUȢYv[(],#5'cۻ$kulrR}ޮAJyl!OXxFW\h؜׆HXC{maG!Yt2R9*VFjUVcjaߊC#sVlm'a?p'ƅS!`7[yK%t}b3o1WNŠ}Mz Eq`PJ[lvaU` /jk1* xEgz]/QMˎ}2 R6RDj Q'ϭo7aR4zYp0Mr[L$+#Z+\U2*=깕L%T'|ѸG ;* =2EÒZ:1cK/lyg\M1C15VE-a6hbQ^kSHAnj|K:^"ؽ]A/cC%Vͬ#טGߝbF1v  Kc#Qws j*-a7Hc9Hie$V{`{o}!beJxtq1 im6^Jל缒S׾пx/O#}ViHyz]f ܓ O0WUZ2lj$/>SŭX1|V=(%kiEbî=̻3u;̛r=H.CK`\XhaFPVOwh^e+{v[gctK&(0-7<ؗNSK;$nV;z *!.*w.: a;}R,ԻNQOjJHA% j ~U a;_77-;#DCT ©Qzu`M^ iYz7lki^hu1-ׇ3B$$fڅәD,h7+}Fsv[j@JQgU*][Le+H 佽lD/_4]`{GJO/zTa( hLig^VqhmzΑG^ u9;?&4F@ /|e*O\Vÿ́J6~bcoh %cPĦl&9ub bE][ϝq<ᱠBKw,m}_qF'^O5 Zy%Ȩ5x{ t˯gP7\|+[+hXM0]}|Ax<4\*qbocɑQUN]VB~7XR.#kS-HOy7O(%]3|YHɁZni9Y2I>M9)$k2 4xgaӏۻQNbVG9,Ep_jBIq9v!?)U$x $ I$oaD9QL;P9|"Arg 6TIHfX}dOs!aeiy@Cu,mPuP8d 틏UR8U]o3}#sIqy~( ُyPyakŎ0Wd-|-CdMAWě zǯ=5@͊c(bcK"e:QV~C0|6VlO{$%('7xȶ#J߿GNQWSU4~W:)xFmS?m0KV CG0.%S+ߗ绡zcz(dJ "z[yw\ xEz@E8/ b=PWvebU[m͊>mFjSLQ>t7qZ}2ѢmyX&'^B!c^ڜ`섫`U 9$c܇rPĩri,o-]bˎ/'im] ɋ\aA+GumLCaso~+h˳РmY^*tI0ygg+mb34es$x1YneT?1%X@>9lb#@R#2p6l>S=E^&d {Y;oLZ"U][tZcPHI $jSpH$ڼJz<#Rrr58V^SW0Ǻ* ݻsUB?D/L~cO4S?g,gTUԍyx.ȂiFGD>,vG':̜ITO7Z={W0ubc [R{5Go3`| }UX>pГע)7NiLP+W%bG'F1sB^=ʏv2l_K:ˎjMjAEYҀq6B%UuKM%k Α;a>[1+\IWM\+m 93wEZq168|U6YGgsi @ yMϢcsQZS[k.3颮 qP= womFZ{ (1JWSjT!ϥ"7L|4EHr<8:Eؔ\-J[)k# 655'&ܪy@QVk V'3cxe{񬺊G-AbKa( QZװ0kvҊGJQߊC*+ݍLbg+ sx UW#* 0μkOM%)a>On펼~b`5*Tͷ{&[s{c+{˟ݖЂWw5*9ݹ'|)Ih|*YI,{ge` WJ!])j+J3/o Ơ%.wDH3$l7.-ˤH-BugM+t:2qJe1dGbeh24gsx2c+n=8#D'MW_ͨ;'1ІCA&}I3VEIe'LF4CXG5V_Sv]Ÿ4S_] i6#+0k2l aƟ8#VWh`&& Z"2f@'Q˞6LMq]_U~e`'L:SK$ 09DYij-c"_A>6.rׇH(>S=*PO;z:q(sEA~Q먾c-Ro{*[h wjQ!M}TōG]hD ,~ DRa )a-[HrXAkjRäkd0Cީ>FUl;k1Zn|66zzYvR 'd`IGg3$ف KޘY4DcpVåy$`o\dg) (6?kDlhژtoFҊݷA=Lq%4oWB2׉FqTt Y`2LZ9۹H/޺k;X*~Ay?fֈ>fsKSY1%>|cڨHln6H.[70ق5v!=Fgj+N(M&Q:ތ[_r쐫r&Yj2Ҿ䑙^]-MwzX+ff֜mj ߅iY5 ,͸|dӲ@,z6c]LY¢G~n$KRd`yi?S~endstream endobj 257 0 obj << /Filter /FlateDecode /Length1 1648 /Length2 8840 /Length3 0 /Length 9925 >> stream xڍT}7tJH3$7[Z$ & ](ݥt(J7/gl>W~?w33].k%D44Tx@efփ"!q (Y_9Wy#0gKGGPGH@"ay P9Cr0+yGGDDwjvhvV`G. AzWVq;$. s0W[I6N;iЁ nk`'_q2절? 0;W8B Έ{W}v:@ q'p<^?w&whU*60ߝE ֿ )4aXy p#(WoE/ NPGϿ~_ 8/Fk@/ת/=ѹxAEie'>Pg3v{V 4xːU:`ֿwW@vu{S^x/50 g_ A jUJ?=eH$vhx@K? p8  (#'q;YZ+}M{PC8v*zK/g~~\p ,J,ؾ6ZڝkcT 9/h6Oa|5QuZ^E3.MY󜥹#:zOS.z-[zٻIg4Of Q SeDߺ<"Ƒe&yZ9ɝDQH5]$[F7dކ1oNqAk&iO;/pݫV{iGnd E+ cU,<}붛ImHz׺l-Vnpה5["&-1|T65p3kN=ZӀd 'ajXtK4`VND 1n6OS8ȳv|m _/L ۵IY nNGMl>Ult[p)2RCRYyXoQ-zSj>+jw[#stEPpF<~BWoRWvy@֎‡V}u ą{a>#3(CyK5M_I4r8N:Q@޺N"B}1%Mh+k,A `vb\tqKnT ~{D6Ȼٻ|TG;,`d?ei / Q7>܆PDh%'1XGkUL)K:(a^} ے]<-٪`N޻X>G<}nGRʗ~5c3k_͚Q;{lm9tњ@Q|RR M"#lY8u3VEq -C-ZW{} Z Wl.}I6'٪Cuux>&Fڧh1wn Y񖜎Ws-oF}_[#痌$2sS}lm 60[j†ChjdZdh!WWt'hO*Ԭ7qq%?^&$i5,ɔ?J~_Z@(&Aq# o;+j4AVsW'kRP^S+(xNkF*;3j_j|L?E?j-dot.-y: !h;7MZhܠ|-`pA61q_=q7e~t7G\LYO,IXu,bu틚{_;kRWLa_d/-˰gYNiⰾ.6PMO,$-v#"fQ4f-x#5Jj~\EɭJG_Hvr=ݺSSzDbI'|4RNC$e1־ Qz(1ʋ8[7엂&e7Wćv6E3+{!В%hejtN4Co'b˨}o]W~u6 gWQw|Ns<%%αˋ[8-a=Bir ">r4ˑ p#!/|O}B^a%^2;lUY-Pa˩sis;Pl] QUkz L *՗¢&NV<Wa#v,8*]zXBo?U)NGw0!ݷ+-q1\teU͌(fͻCSxfAL=Su9bB"ӅxűqO 3/J/a%аn$#VdmǏL1 G$6ZxV\EsYi%![FŪ>:fosjXBX ,}s I͓@Դ5`BDeiQKv&JNtEMLKf|\2O֎0ݹuE5H^. N5m p D;MŔy+C:jkHIoQ3@ p!yb? )I 7W edXhޣOhk?#w4~N\:|`ߥ`(MiD~aHm[ǙbXw@5۩YP(u;%cnwV9*9{'dے~ZBVB.S{`堝+ C=\lkawߙ-H!}uhIg8ZiPꂟ0\U\A4 n[ty${+XTD?(;C[͂iNݕ^*v AoËMM]zݾ*W{Ya;}8i3V/o<[uV4 EP} il:Ϡukibۭ#Gɞ;+ӊE>f*\ 0ĕzF Œ JŽZh2Z65̬ WQOxY0ҹxj~k| /ӰYw{.l\"Hi dmC>iDZN_ιp. ŚFmUJ,OXf;T.vehB_m%hH3I?|HJFlO^BmNJbS:+(Մx.1 Df_j xw&W/?g#i.OZJ;ks$D;$FzVikRfX&LWtz3A`ES#rVul:Kh8v%حʔ;_@V ~[UBE]<očrCz|˚q]nWZ_;3;z2~I4L78OcS;Ztl.D)HT\B >yo7$n8Ҫ&D6Be|G(MV=9%<̣ߗcu/ jaί KHߐhAI1=O҅dƜ12 |FbBX}.nz'|2,{ߦ!ݩ0ir /c]@y\[ Ì@serlM!\hSgR|nO+-ZiԷڨe*cj `v\=3u6݃[Q ` a v巺4,͞1FrE8kob8pd q:4 8 봎aCTz~Oݪh=qWBsV_K᪶R'B>C'$&F;V .Px=+]ٵ#ь2'T&*,1}r10=Y#}m_GUӛy41w>hjN$Iȱ頺Bܠ|g2,>N20k+/@`BYu9«!Ϙo_pKfu-Q\ zEo-pdmd5RP"f'?Gw_ϛ)ʇmՔHAu c7A?Giju]׳z!\;]wՓXD "{3kE ֒ajXYW|3&Omѷ,cK-OX2O\âӷ$# zqq Mg,&wWY,O |c)oY-n R|5byC}7箄Ss]K!d=,G͵Hѥ+ fْ$bE\3Ms[/} Pc٨Y$7L(pTLthrSVʂ<TFf~~Jnz 畛PivXS^,>w O'XږIN}} JkpkыRXU1\ǗSp~ƃǣk^`~1xGf{TF<{#0Tn9*ʸj~.A>G[w}gu)' C~;ȏxn95"_TUdUEJ)ùA&aêc[w dkmK\eL~$ns1} Y <9٩EYI-6mΗfjyq a9*YÖl,ǏD,MU/)( ]ZVeq>1|Frn0bmϊA+3u @W q4[d{y)^7+ # >yވE֗^ %vOG+(0J=T3az܄rLDJyW,~FGAeջå_:\ads hkt5dj?;Q2Yز˷BUw ɽapb\~S]n6zmnj3~$dO{b{jʽ*myMRC.WLCSƚ2x@OUgm0Y\-28>2b/=eOElųbySIvpҏ#f· jR_wB5)@+Ķn+y+J^}ٛYvwde-i6{{C]qL´p.WӋN{^V?%YR=&KϞ|*k4]F@]\yLWwqg6s+7`9x|W;0gZ]XEK38E.ڐaf$dX6sE^VV3)kcn ytv%vMoFJȃ9wGw:MΩ,>s1hؗkDA`3vlwy3GV<)fַW%F{,"N~w0G[&<2esߘևp)*fͦ\LUFࡽL yz1c!YTxf!u>!6 è tUsВrnؓH=")h'vLaWg/l>Q>=fO( C1M&gIgfPtMDn5VsR!`|x2jL߼r}6[ ܞ'_&$M:yDcM#g< g}`c9Q#ymU"Bdendstream endobj 258 0 obj << /Filter /FlateDecode /Length1 1467 /Length2 7106 /Length3 0 /Length 8099 >> stream xڍT6LJ(]C7Хtw H 043 ݍR %])- g_֬5s_׻VCSfA .@ZUUQr<8:_nF= iɀ. -\baäw< рmrU@ĐGVȍ H "~/ AsB-`/=q@ZoR߄psAad `sfA~/o De 1A<n2.dₜ7m~]@  )h]UHE$;Aʅ>P Svt9^;#}NDn$gbע&|8}VMSwI#eMcz u9%^=J$)j"N (VlqKNmF</3oܙ *65VS }_B4 eZq ^6zJ,|^3OA~ ìЛ}tU䶛Jm.XX[5VG:7S.]RϹ6K^` pHoK91Y5֑Ftz~Qv)\ϳ!Ss:- d1P]SkcO3jwxnYm11˃a9ŔXz6QD?J_N0N3OI {WEu><-~"LT!Ӓ|Ef eF+ :K '>thKx~ I# 鏛hUgrR[Z>H@0dD A%܈g"$c#$\{}䵚r&o<,grK9,FlZtv;8[R&&~Q%b{; 1[H@h$|9wK:^K.IؒR4yzWS(Ljکjk7=:cMNQ-*!__l` EpHvŃ'+bk ksL4Gz Փyᚨ2:)?--$r%C^wEGčG>?2G?,a{OW]Ͻ g\l2EqH6}&¹"Kdo~WC rȻ3^\v1s|Ƈ^ 'C9/tyʽpnjOFs >[/ByAj7̡8zs4IgEKE2Hyg*+i"^Ǣe;R)}[5Z~>*pGC5uESwl/=K4QrK㗱*6.y˾ۘʮhsDR oش`B4ܰRP K~V9XȳU-x# VKs0tVRXTiNURv]H4W]s+6cwϱP #P$4!Ə[bďf.?&o{9 ѲrC]: &Q0'Q- '­ʩd|wT^\Iٝ[]VUW0n0(tr2}6q\;8n}A&RԨXltԼswdٝlI-[Zo@wr]_ծR4ohs{L~J),; {WuS_,z+]*܊%2]ű`f@TڭoAx+DOV2ݭq~?~t֖U'|mExc.ZtXUaD2kNA; x>ݵ|( nd`p *~cLtagsm_ln= \n\Q@gۛdRiPt:/\Udqq{翞r -&-"#ftLe%ѿ׎K(|9~K<&xje134e)sRYIJb ^3.D_6K)J[ qZMo>ג %[| Z 780/O'D'EYCx=n5&ʼn- n܂Dq+fTwxӠtW6 UVu,\(=/SH"kg,ʙLEŝi-M{?t4ƴt?MWhsG# dGK?}Ϳ3E9ɝp$>I7b 9n8ݘUjp@胦y#`.@t y@i'i_K~Hs\8X2=z U[d}]q.D?Ɛ/ƗCu/8z"#Esfk.:!8?>,02<6sR*xZP>‘~hJ&}u#8KGJ܇ay\QB#oMo%|9Yz %)X7"66e%ŃkY}A'{᜷d ٹW kE"?L`( #R93pNa}JhT՜{~Ej**L"=1Wg7cdN51-Ĵ oàjkA+I w.E4s~ޙѐwOv4Y`M|CTA*}B< @8EϹO.u2bb*mu*?A3+Տ$;m;o=/ˋvelS SˢEE{e|R@iUl&.H<&(7̅OJX7\zg/6 {m߳k]co*pSnTcIe2@"A|B*zyƃGE$KE̟KQ׭ j(]{Tݨ X 0]W_xIlϋ\e1*H}o5G$ܚh'H`Wc5F-i΄5SнbVN*mC9O*䓅xJ[8|s!?/,z%xӌ#2cS+tT EֈHgF&хO 8cL/7@悾^La|VOzY*=?*i5{\Czgq;R0Ѽ@p+FB.0*wc c'S5o`^`߰gs!)3~2)-Gghƒ͎coctSV(WGwE5G9`*=h(@aL^XȴG=q [ 8 P*SY~Zg]lAt|QyF/@+J?V?p1-erƂ(@9GrжZijrZD^9{=~# /&s)O]q/= (K^Y,-H[̈GXE-q'H|`~o&2\$mg +A0wFL6('u(7P^hDG9nW!o)`䪓-O蠖8  )0] $jL8υb'=wPʂmuCS1/HIۄȾfmD@F(Mzj-1]mqO|7*\^bzĤR[i:6Q#Qeu,2Ռ\L®{"}<2x!~Оch hJQFMxjbG YRw;쵬ʐ jҫUM7YcoefϮ KJtVrNlRN>765@c8n|yz }mDCⒾ"-[Rm?y`%g4gf.TXPvУVIsMUv>Gf+.1Q_^rU}Ǒ}fFy~P0޹Pغ3[y_vJ!M7zFT8~fSI14,WXIa~ S'b̮PO=ʊ(H/VĨϺsPn8VRbw {>KL!3s+47a9z/HC ׈L?{qgGd{X0"vwї+YRֶ^Znk*.OM\XFU'CE .48˽XŸaAOZ67[Y5[5]E؈ȉO%Y`˰RvWϯĞ~Bާ4^z~.+Qd Y iz%!E8)N2jkC#%>ir Zɩ4ZZpC}fk6񋈧 | U$3Et}2Pt?+eoƄN AiM)p!P&`HMHf u=W,w2RqdsPR-& 9;.|/sUUKÄ?` *lZ&ѺlV.HJWSe:AܶR?R > _S&ts߂!nijjAZs }}6'(lKrܒʭif⢭y?5+*y-%g 7>7s7Y<Б5CxuvK؛G?6C o #ۑv612 Nт5}06ƀIIp @ٮӉ؃h[9GKzwFzpTH l1jk?Bj8A^!J&K$q~18F?Cpopih!a-Իk?DYr־!ED#;;ʬZRZ)"+g Zs/c6<@m(y1'2>yFWfxfsET jS3IlPجZqt5;9F鋤 nW˜sLv%+{}&N nSn[xyxmk=U쎪z'ty`YL5dAf?J|S@2٤Ds Y9ς*)N9^7(E{Ñ:Nx/ʒ=&Ĭ goaJ]/Ťendstream endobj 259 0 obj << /Filter /FlateDecode /Length1 2751 /Length2 23510 /Length3 0 /Length 25058 >> stream xڌP #܂3K@ݝ` wM}UU0~Nw~߁XQN(noBD Sfb02322ÑZ1Ñ-qlF.89{; `fd_7@ G:Ñ;x:Y[G 5_B@'K#;t @T.. FN.e3 h U0@wepU K*f.FN@`cisp3:@Td @ex7&zsW"K:yZڙ,mqYz#;_F6FnF6FƠąF)řW Ҁ,fg*bok sqO hj'ߓwYڙ*ՁA(%Oft122rpïL̠ |f"f@8og#7  `ji0[2Ơ;YztAc0H^v6/xӱ2LD46eU4\jpGT 57ۃ P.# ?&KH/7_? )Bv7T*M-]mWBv6Yhhbb7PzK;寇 4-5 _. h{cfc99yFBlo&І=6t*`fkl_;A70F߈ #A7b0HF߈ #E7q@\#'o(F .*oF .濈 3|Fu 7d5@ Lo;˿w_\D4@wۀd? @c7_zcMt=u:L ]9[Vǯ3mz`d- ¿!//l] `͎4[ O  3ʴ:c5@4PWgf~Pɀ *޶6XW Y*+6Wt ?Wvowt;>ώ>֐MَHw}#gveJEEMgL@Рy O]Lex%=b)^c|?\܉KCy62A-(=]\?[@>i~9R9®mdi+ }+D ݥHn&zefng+` Q0qyhcRPfF@;sw&1 E4$RQN /OPF!΅69zGޝe?o_BqG z b5H:;ǵ`3LH&)g%RX?TpVE/ZI$6w>@V*%ѳ=[9+DB%-Ox(nt>pP8(to㦬LXs^=Lx*GpΧv倬Ng XccUa7bۆhd U'Rr}2>zs=_>]70״)Q!,&OY$5hjP?WQ%Z9I}bۦBiKA<zGb޸.T:mz48Fm #ؑ$ 2}Eu&>j+C(Yf7OS lUT Lv|KO%2p-*L 8Yf0jTԫjH0&{0`Ӕ5щ,b9tըk/BHUP5"c`.`=1p=㝇3 %H$l *ad>UJccI~x3K|`R^y[mOW,VW+`~M5H<Ă7Ɇ~4MpNl=Iƕi1UM 2Nҥfbpτ\w4@yUՍX'V9imfI*Aم\cH7*NŒ])q(2!C u{lao0ma68BE*pʏ= IRdd6tk:8Sѡ<\Y fr*[ܔ~r̽8^+q)١S4-4}݈e\cūy2 X&8ܮ랙gTPl~{hzJm*+Ҙ?p3C@كhӑXg&:C0`11c\V=@yoE:}i*Y@;^846w=\)Lɠ'΅8bq0ʅ;E תL bAʸ-V' -4R2ɴ-&e}a' сm` %crd(÷9 f=6mb9܄B*² ;X'l"By6<;H4ġH`"9PzU 4 ^e| g&4PW_Ŷ!zneWd*00d%|c$$M4A~^ց-m) fSz_F-_ |3.!wW:͒%Se4F6 67wo q t4P.U_݂^-]c# im`nJ31f,xFn^j_F1˘iJЪh>x/V(`崎M"!U*+dYo,! :bŧ˷&H" #~cۮنƼ+cgE2q^tw{#V%&#~2 GƢ@q4;$.y3d3p>:ɋGIppIZP ٰ͙{JF+_n*+|v&bNk{r~ Lca|;31P` k98[sKH`:ΓB'ۣicمIȵ4Ak{ $DcNtf$oA7MkCNMrti/]RSD~M ++<%?{ r8lAd3hL?NV/Q $z^~8IEn]cSn靄>a5jT0= s[ˇNE|Ϟ<  /Y[/0})~<@yZi /ه^ Y[/("fk6K?&^`ֻ1OVF2z5u NiYAx䴠q-Pp4o5x%g0PIX/v%uI&tRMJ9[/ O¿S]x`*Grשpk &R&cǠ{ wڷ(U'P685Ȟa+zsqsKr=#U~Z|aɡ(ƻҭ%&2hբ]VXiywsveN(WGvK0 O.BPtvIP+X!q~X)=h[d3҈GYR-~*N/Fl9E-qB7QޔL`kaE-NPݥg5 JZl:c:a~XJۡ_aT`CXEʻCm6%f0 )e&"I wfƉ!zu)KߥQg&$-F qTr/3|YVlH$VR9O"P{}ohꭏ҇% l?P|Ldp-g/f?vC-&Polm)"@ [ o 3A]8Fut)B&m5.H g Y|l * )lתr'rŪO,! 3N6*8]=Ȝ./™8вuy?/Mgd^j?c[h۞˵Nw{3u\; ]!|ޔa,ե/0dzN<"Tlv:2"Gڱ5 F;$s mʶ]dF;$o*ˈ/hDty2úbnܓ!Rf-Oh0g׋Bh[W3I``[&.G>Ǹ։gOk>x.7C<дB ߼ h2uǰK^w;kғՎ/n 9/S$b2!+8HE?*20< \;ogݭFOU "Nl+Xɮ@ʛ۞{qlSu ٙXΘ/k %u=on-" ˽EK\XnhiY{rsVԼ[raC~cuQifMd߉"A~A=x3g伸E_pVIұX$][xH?ޕٵX[Nb1j$.`H?W A5AGb>L[Q s\^ >J X=usSۜˋ:a4AGWTZOf\@h .QJ #t!/d.&)ڂWx|~ȵj0XL''w3sSkVMUKao [d %87I*|C!vJ^>!|zeT Jm[XNgT{pcgtjM⟤uj+]`-<#@wibOuz Ǭ율|v~/C{QJLL:\SRp Z&7Lhoתa[Uظ>uՂGfך.oh@/RА"ݡ1#7C1䉝*z x >R`;%J2Yٽt\ <˰ь.kl+FV JPb.=`׎;eT®"<BKEwLΫԴiN?r?ota۴}")~T/%1VzU@RNQ}{)iz lҞ>^% |HLH Ԏ mFN#zzLRxO^s)`⭘W[w}s" ,Jvp#[G,tD(aeYf\\V>EVVmGK$F:=;A\NďPj̧3QT plHЮik/dNz^ٿ+z3qrE6QOJ!P02u Iڳ]P&׎}GF6Œ@f9 >dtkYX9I#3F̽*j< E(z4^̬%&7d̋eskHe=R㗧sA-6fMUlk{.'~|íKy/5,i3Znb;\&oq(u ȶފ]%z`w}Hd6۹6bZ[NPOWͬc 6Bk5_:Rp#ksyo.V-A`DF+dr\G gbHC#>#}'k$dUT­Be>FEr?i92lt6-ߤ>.y4x5BDէaV\w~ܱq+}?Z|kzQ&8\Ӌ6ᐙB \'&UL]+"8-#'?u_rGr]Bx)* > O.];Jdk(D:FquK9ga7=$ c"RYkv[K~`31<e< A%#Qd?dWHHRэ{Rp'<[*DªHyxa2: (JxZy:PghPi{ц>m.ez_ 9 c. |2[_V^% xrݫX4?6zY.[z. mal *5|c{:Z}͚k4΀66ƾzQX}޴N~JV˜vS5jl5:l:$Omrzטg3}>[*ɛϧoJ׊g()1pjO}D<1- ̞Q."Ͷ'zuv|:ka7䆬@h|v4Yq:}e;w^񪧻̳u ~^2Ha2qlGxY8U-=ϥKS Ut 4y _S'5]T|yȲfj#5_!k2|k{#ؐ,ď$ʀEpPZ~X,]'3#(sjs"ֱ&`h_JL5wja ݮoc4\QRkd>]RCxIр=A͖j+YB-o"//K=KȷiK&_M^O%pUo7Tg'/{* C'?H LN5b!?%p`W؎;:gxo"sD0gY^ڒms (J! x3O?$&YiMCUc3(,1W0^ad~20B=BHj)E{X~+*;?ozB&6) j9]`o*6`q,.ҎPj}Ѳ3"KR6j-55u8֎v4ỎդcT  *]y*Gޥq@8R𓷫2QY߲#&HwMΟr bG~1 `\wۧ9ayoŒn1뀖3!-Zy?[ۨ"oQ:N܈)k18چyټ? A׈ڲ׆NhD_M,TOfԄ̞IxTᵿ>{QB[F,㑾QL,a5ǞMFj\i rJo[jhMfTfljBx׀24@PJ!,ȹOj+:!n% /ҏ&YU-h:v#|G'g ǽ}~q:h0>2X:0)zgrj߉d|XGB_/ocSe?9WX z5Y<%]IŧU\7Ѫ[k_ݯmS9%.#:5#ɪ2a }wZ DC_}k>Vs4mXUvK4{ܝ=BS/Q0(񽨯;e.l:65rh$SVI.+"N$no~y+>+jMUnPe$*N2^M`^2~NhoKu$2K״LE)sqSNLW6 &?jnJ`R>YhTwf s]XU1:bζO+D3MˍO0=faYraЋS .}PvT.fy)'*D?m$o%xg>Jߋ4 /(N2B4/J/S $?}_Kk}"wuU@(Ff@Ķmxff&mkIe=U"NUz-nt*O`br`{QONbW=Z/o)/2o6ܲҧ}5%ӠjQdc+d@h\&WQ-)"lkcsum \ifE,7=R)XŖWΰY˳TKGXL n}A* -}=K\e8Ğ@{3.>GW'H A=x5J;Ek֫zh9c f6ŴE:l3n.P@Z2E 5 `g|西qubѼf{2V;Fu^PB"8ךhc-!z3gn|Or$Bw1ƺoT/Mtߕ9^!^vd `饅Wee&m/qK !ޚwgv&y"RS|(Fק9E ڱE D @UgRI\.:TLpC`jK\d+I N< KOlj8'qn2>qlzh(ߔr! ^u^Sխ[lҔ:kaFNkہ@JBsv9oȢhҫF9? 0'^p7t icr |ڲnW d|6&hnK:0:&<>mķ#HZ閤-OtfT}zӶ0)÷jAiN$JY.clɣR ¤nI=M&`8oype1V%:×?q/YMr:ݳLDmK9c GGOBmjHަE/dwDwYֆ|.W'D>ZeQ[i/[Ό .yP>/UE>Z叠Vqcm.y22pNaw axzi!.5g`*K-6FҫX:jdfXYbq# :jF6 >jVXh-:hut1^?O 8CEm ۆlF5)/*DLyߝ$ĿHD)G ~BkUxڗypۉ E1.sJ@nuY| E<" |2_d?~*^"z.w1&uoQ]L0%m5vǽ9yoN fw?|2ihub8ذ{.z 2! M=C~ (εOx/$3-,CAZ:lz,ٺֳ"“)O W2= ȃ/|}΃ׁ@x6A ~0[^ctx`.2zrnٻ-T˘?,E&o-%x `{&ǩji[{|dfu cc|zzW˅[AXZ_UɏUk|i (ʅ m\aEW{s@־1?k'2jB)fĒ|8솗QwQ  ^btVq j5a0.G-gL$*hh եrMHc՚ Т{Td'/?`bѶ*J31DK](D"0֝AO^n8|}CNPlR/ Ի[ϟ0rP`*Z2uɼ5@\( G7G1|hZzd1pp6=\5\޼SqNb硞h21#~kwh[6O\SIӷmtA)JYTdI'00va4waV,d$<>޿g 9˯d n+VUY>Ƅc}2=d~]mBEA\ȦFs cF**9^4͂ vhLI:wcAN]fz7Ήw]="k8 4BE54˟S}T2tW3>{ W3OT.P cNh:iOj*;ժgZdX{7X)SK=l3%+>ĕ@[M?_a~Z9W}2.Nn~t{@ +g+Cx4z SL<b=PZǩ! X[?uD;dx47ROϱ)"Zo#HD2Bzsd1<2Gv{œ?qs47Ea@3_؏-G" :auR@br&;z >[& Ny+Lo$(7!Ճz7jHgoO00ÜLV~}$׹ChrճG)个IK?cFĎN4tovt“yQQU)zfB]HvK6Udoն@9ϞX תh%yeLᆽqxMh"|区K)usB` '0}ZbURBvԞ'^xJ-~ߧź+eKC'0鮞#[kHڋםOM7i/ߺ??`B^KMJI._)Q2J^e/]s+Xǡ;dѢZtGgÊJeO5bp0g*?Řp\CCgP-\*Lb^߁ЧM'ħU4}ZC9# P)*5Һ/Ϗu >R4LW!%h4Gr,}[G4E _{ESAh8ߖF *p%bŶdR02! 0(!$ ٛJ=N%˙/{#{ nf؄RTeqy/@:_ 5 yy-3).T_V+ޖYUqžo B7m\N갗RǸ}SfWa5=~ 7痉:L4Vp&Q[,6J'pwX+rB!? /}8ߓc:~0Se(ɿquL|d[H Eo{ ~Y` :VױBň{_Ya @83YZU|n^JǺy2?6/x q-\>`´B 6\ ySec=Mז/NcN\ 0ˉɺ`hnI8b)+f?Za?QYF$U)'̠}qR}פ(.!)?U2tO0=bFxMzF魴9!1쀪8Ac ^}vɛcN&AA?,w\X 6 \,pxzwǮRM!q,/X+IլeEג0$ܧxB<;t@{(l)'z?X*ƮG ?IgeZ̅3gG ~.߿ s)|*J7@'yAOő@3I|Jl#|MFjݙX9Pt+2"vu;z<[X=`}9.MF/ž=ЪJ:F+n/Bj`s_^R\,ΰtfV!i~?gDē> 67q>~۲IM֮Q 9n(aysLajD/?hrQX (_]CIL&:e>*k9ؒ >"%%b1|ǽ-eW4D܅@C~@xx5VL/2`o|Z2EQKCC*./޸I OcἘ؀ 0-C=0)tiU x޶S7h0EQk4fĆZ9@gXBTB#(k/"Y7NB|HdLڲoۘIՔXV M-X1䧦K蚛'' G ai(##t=}{pK?ma:O H\V'{LhWva+ B$CL{.`&5K}'OsI|ݜ*"Fb`M= @(7*b{:I2Ex VMfmD2J'$N0aDqbOK#0lj(eܕj ,]3}[RD}Cƽ1{edXϮJX <ǦlUpf=iM`vؐneNAF^`dwlk7)0Y=u4X`?e%T i h{7?K/$M~ŽfmGi@73`&5~Hm0O 2 !:벢Ɗ9Sg4 n/AKDFIcx8tٻxQ0 ^ Rʆ1K $Q#cUe;jNw I囑Me u?`¦RX{1w_y5PpyK*~18չM߆AK"Q&Emq]I$n!4-f -j$tD&uaҜ#? 5u:`g4}cG˿,^>*/Hk0#kvp_)40ۊx^B̮n" ԟ L"֤[#}8.^M:'wCL6Lk$L"<3])&E_T5u {X!TMk.)kvw1:qWg=%Թr9W,fG_! 6+@[Nˁk{;=',FSt{rTZ|`۴*o+ ӑ6{O D+tdn'!|e( :1u*7i4b:6SLLHv~&$0^lHG 5COA`'dd!aBt |П3o<4 d0 bfM*JL“e'ݪSV>ȓmqs`S\|ѤWld+aADP~WeU6NRټ䡘X"_w(c$) oF`?Bw:Nr2s' [Wԉq.Z{kv̤P-TB'@S8ƈXSy墇&H3+e#?10T[<;鱍~fKXvWuݱGWoir8h>=AZ/LB%C#\ZOjb~V׎7dYqwJE6Lj;P@~K&J};FNBF^~/պP",e&m$'Mv9׏]9O >QqiMkݨ%kJ_ ,Gl3)ZڠZ Vsʊv9oRlWzn<14=?Jx klL3s[Y^K׾WQޒ `u\P]x.rM$SA#o956'1hmYI-(]pbl 0b3=CW5~ّV:xfCΓpPWs #dE܉X:╻O]m^F$Ǜ$.N#TzbG~LJURh9pZNe.Y$kZo s:O @Z$ .zHFk NV4&GZxa!_Ca52:;R:\ A#RǏ3 /ZZKrpJVd82 ޮ,4f|02+ bFН~FtOt_FBKFկ'-t81f.fpvF  %a]Y8-ǽlZXd1{XN-8BX벯咐.. -ǁչXGuǦUXBS 6T)$_/o[hb}eQX"VAw'gjɪKrPinOkC{UJ%vLvxgtroIF7z oBW\N1lWfs?4Q Q{IY!OXkn^[V)sWd$3{orE&_Qqokɰ'l<.t|#E5JZot̵)E1\3p&_VQh`ܺ]-G!ӿhxM8̙'kV5YY-xrW5w֝[3c5Ѱ}Pr4Jov[ܠQe^BG0 'r3kPv"ޣ&<}ʧ5ưvLy.?0 :}[f2mSkʂN%<,KZ 9!&D!\W։PTE'n:G򺣌! @g@|\sKo} k` =p$h9g7fW.*2mi5VJ^);] \㌬ݤVe Rp?8j).v/2*T, Yj9|L@S6pɃ{aV4'{~I;7 Q2E|"f&C |BRR֏ukʖ`]4l,LxrdGNM>?Y'փaX  I #c Cm; P-V.>_ke'O͋A1WE^*C+^Oxp6E:'| B x~ʇwvݪ䷭Lޱ(G;\fq%+\d]}atV2xq/BDS bFv=,i;'rV`=k xc܄ҳ7jZ<:rcakI^onL_ GԶ > qB(U#G!j-w}nň2,#bϑ^a瀮f4;?hE#p#i'x/Qh8mLbnde&Z޽2P)(hll)J0';ڕ7uL aVE~g>ayLg?KG//,9 yP#F42qڄCVZBEg=-_A;ϬCm N` J(dS=heX@m xޢ/kq%&AċE%gWoԌ@S,F.R1gܒXSdjoU:W8WJW_2G̸9? ]7'lwZt><',ʲ.k,%d/tvD)C/ )ң+0SmEä[w:|; 0sZ@67XH4t*Fh-zV,a,7^MbE=xi;]ݼLKşA$lJzV(KKTK*!+a_1s7t":A"XD2[7|v:FOs.8E3 .Mͬ4{e^)(&eLI E!!<+Ev%82>́),e7DW r4~v<=הwR?%2hqZ~%g2V$>#fp/g9"W@'ڗ },ɲ/%wI~c@eoڽHȊ A2'§G^IAݛ5C>eII Jr; E3DC72#<bCȘ?! Eo4<' @ LGJ C}~O o)F2B`i Ŗ]iW \lNLsLaobSbCLꝩTg9-2ۊAL@wm{T!ae>D[ #B|`tk .v(9pM[P Au:~i_)knYK QA".L}3D(9EXy0<4G}E[ un\ mK=7.HT ~V#Rm[JA7"Z;Cv{5!H87UϐVCԩ4a)YfdmAs1窬9p&XV,utyl@d5tOJ|d\Le\KkMEמsVb:ёڢC2%&G)B39[{ )+Dw{|mLdHўC,lJ+KrI}8 $0n?Z*/|Eh%#]v ƊI<D:1A=dCdT s/پgZE fՌT\dxY#$gр<4&_O)0N"7qƃzm2 & ru w.ڔzv!d1.+ϲi,R73˨8ᡸP#;~\)){51|ҁ);{%nLSrFO||wR%s.vO3hңy\{=N]zz& !|ܦ^zSmIj-4jef}=zy.RH90yLO]43&Jo:g1 Ys0$Ъث)Y-,ELeLg{gӹ1 pjiwBEFQ)3 6j!rJ:y^jOIņY|;/ p㓫PH(jn݄̊ Y=Bt m},+U8\98fF _kx8D\0p0VD0-SQ^ka$n3}0H3hHfJ@4cRNĘG}NwUu:>8w:" /-{+EI&m6/SCc*i(_j/cؔ%|M>vDN&8MG:{`e2ؽk(9'o0.4ӎgN4~'1 QfAI2j Kehܬ` Q79/Fѥ6U[Ҳ-ھcmhLu*wkT݃&Bu"a1 ̱@h]G:-(3i/ =;uƤX~1ChU#b;~ժKC&S? bSle^_Ws} [p'Xo${Jm-"0{YFƬn6/ٸcT&ºǚuh$/ /ĘoME'jb=<{"ƪΩܮ?-&YعM5ի:iUb| ljD)$|VIN͓FTR? zSrFX\XAۊHX i h7Zu;5;S:uE嗭_f\ۺDSǏ ȎܧP.z_ɧy6<ljD`6n.h:xZS+q:\^%%}2ӐWNL !royԆҷ|[=GRPИ5[Hԃ\$`#$%V>J6»+x]/lfünt6dG[ӡp8G'Ls_4}^)Xc@?OX7 pzOLn˶Tb·Q\]7wI_'{EmTNZ]䔱kN%i8~A,g(>v槇O#0\\;1)a(8-&>v *>Ii_H 棛D{wwŏ)eendstream endobj 260 0 obj << /Filter /FlateDecode /Length1 1448 /Length2 6520 /Length3 0 /Length 7495 >> stream xڍtT]6]J 1Htw 000ttH4Ct H(ߨOk}ߚ9qu> !yMABll$%`3#\!p`)(7M8 @1 AA_p@h0+< GNk xd5!NAP>Fz+#{$YB@ÃG=x@=+ 9'`C\p[( sEll@ ہ3+dm wr 0;- h+i#=# GŃA( p@IVB5tw@2uɊ0yt%YFݺ:0k['7gC  "#"b@ F wآ( `+ @"~>i@+FOv Gu 2Gizrzj:&<m{|D@A1Aj,/T62lPw ?5p}c0ٛ Z>43"G ei 'O;JnHDhQso'C 9U BM,%B\ `*=]!?PPlYvD=%N2Q#ap3'$" ! /TQ;56`_HTE` G( vqC: lnaq=GD!_Ug@&zP<`ki gt|RlO^q"Z.pS*2/!NdS?|^S=̦﵋Q#fz[C"\3.e Q;W%|1fAloq| xdē'dGnxvc1Y=^(3rma1f<&czvoƧb0ueh);Cl`gडP^jTIt*dJ:CʕW;r ":~**A'50+IYۮ)ZRsRKj(d"* ::ɷit?zbT9\fRp|u0S)ߕ4Yd2O('%xIͩţ%*-alZ1M9qcL:Sړzm - GZ $5kq%X *w1A6!fAL=gj$BCp|ϭ[l'  &nw{RR0P-U='xA9c_ǒn8Ҡ;Y} ^i y uv~pMX˝# uDЇaM4CCa8-aI~y@H\m47rI'vHrᕟfDNAhh3t>H'8|5qC7|*i46]KIt٣$B j<cxʙI[S{_)窦)^E}<|V<ğğaeHzO5׿(56phG\F|J.w-J Bń3`D.V-:#X=D̈9Oa@|g=c6TowG`pgY}VZ#o8I*&٧37͉nlUJw1(R&BCm[^׵U=n^&F v=jG\Q/{m')$?UHP)I+Qݕ"=b:dt1F71 l%8+J:֌M 2hlwod966B|Df~:]T '[4S^Hݪsgr}͹j,~wmMwgh[ASRQd򉼢ZS {.7SaQk{_vFXqoH 6zѠFe7ä2]c|_emWh5:zSOrަʁZiۨ:i>i 6*Œ$0X{ҭs~;! 4Ah;9OoZivIeutwi o~I+!_;t'<;=m҉ Җ?b]덙bD]^hs_鎛[b6zѕyFftl SNZ'Vž _to+6>\S+xrx˶ pWnaH\8%|=C<'i4'o cB,R)p 4faӉ] p ]luXNRn{Y]YJЩ)߃:rR~< 8}J2:cϹ3 2:2؋J)ڌLcW{úXHA89;oN2:ݷ=o)зJ&Oq1ǯ]:A}ջro񧅑O ۃuB;,dIM [Z7rv 3HjM0 9.[oר蘕b2Ӛ@W_dz|]? CB*(PnmGO8Kr }5AuS̋ŎHex4@1vFWu0rj|g՚nUDx%s*F"h؞:QVCwI6W]hC y&Spv-jY.ijGo伪XxJ**V5bi߃7zkrb&xuN ïeK޴,uH2MLO(xV|L-l]0M@@N!u񥒢vȓJ}_j9bc);O]0%˾ڠP]^I d*;oc Ao*9o~x4C",1P2&MRG@z&'ݿ {<52+N?(S'0}z؂L}mԯ?yL~flg[,4_#NAbee&gpTXnzND%,- p& f"LPQ#@{8A-P͒ك.@] rwSTYʷ_168k-Fs޵U\'# 㠎PP#4 zLh:Ԥѵ }Ę]M' K2vZVˌ2)`ڥ7g]ڂ c8vQ#^{NgYf~H,*ߺQ<+p=78t)"c,P@wVУB>fSrp7cSIh|*#9d6~}xoz兌#ZZӁ;}S53f>ڍ}ojl?n371ծg}8[sa>~v 2EWV j SĸٙbY(I?, du$n=k6;&S#Ҁ8n_wieŰ|*V{oN,Emli0N[LEM%7kWr W6F-5pſeն&,h]RHNs9Ę:n&E'?kOd3)#?tF淜:A"9Y~Kl<نҫ񃊤G^YbBtZ{LuDcsUmAZ<tL G~(u:xC9 7cmy.+8>+U\75< x9N&<~ɏS*%[M[cB2$j+7.oْyn>t[%!~oJ,rN=.XW~bvsICD8'rM /%6Pߵ>6gȡoerfZ2+-y%qG:S SwA>s(LOMŃ}*8AO } *id ~[YMҒ#*K :܊lCXA^ ` zGŠcɡa3WPuElpu'q،#ʢb3(C7% 3֥pW}^g*L{)`ujkGʛֆ 1U}M)NQe<AwK4"qax5QbpAU;VX]u+mtvvDPN#i{ 4z}v'W"2%)\ y -㽹aBJGܜIaW9;h{S3ٍ s 87͆v?^s"n)Ḙe{}w YXI zcuj4n*o3QlWJçBH|єp"o/rP-@~?7Vymvg֎%y Y?\!>9Ʋ6&`m.)FmlCr+D{YEŗ N-+6կ0 /HgWf90Y1>j@`KI|Vb7a Y=Ƀu4+V#jά%u׫BQ~pWR@8_̐RWm5ϛ/979g=s$S6]Ih)ODɚ;`ZPTV e<:0[q1g:sF5`cSG922Il=ĪLlU\9';yrׁG7zc Ej6K?xEǕmb^܏D1DJ"E Pƛ47BCcJAW_} t7V4% #˲K!;qP7aV%a!0 a,_h^)Jyf6j„\#KaޫF.FO[HS7bzNˌg_j?boo*͕816]6]{[VY--F 5)6LY8VNyYHvp1|mDV~&I^kӦ !5RRfY Nrhii!qp4 nd/I;uV),#{+}ߩ~|"5.bvKcQbd9+=xsܥ- kZԝ45uN9Ϳ,lGy6 K5?|}@;,:S̠ؖa|KجٿVMn j`Cendstream endobj 261 0 obj << /Filter /FlateDecode /Length1 1731 /Length2 10369 /Length3 0 /Length 11471 >> stream xڍP-;!$330 -Hp wGsjfս{wGM$ 5H812XY,,,,lhP'+v4jM#ֆ_ q&rz&*䜭V.>Vn> D[> (2lm hvP3s:?hM܌D!P @d~hۚ@!NVɎtuueY;2: 1\N5# PYCƌF 0:P5ur9@+ 9 qVP#`beaX>_-' @A9Mlmm[9/ B |Fv1mVQ<@ϩ7_Ys! sf?50=;qؚXT4}ʴ=Ɓ{z%I첫'.pwɞ6'>vsVTo`+%`xk G硣3ێ|[ ،AJd|{:m_ZmCm)2M3hD2;\@hZKXT,q]JFB1#,t=UځP^'`zS>9qX]?]vkB$}˧;Ɨ5463+㵳o)(Zp4ZM#(߼n Yt_>~-LοF˯nbĨbi!4)8$ ,`K((;l鳍.qTR>uX(cxcBHrI1S 0L¥zLd;4t!ӳ}umL_^BjȐ(>at~gX?0lkoVW4f-Z=ubnBj\?v՛E:8Y*Ԝ2;fncue)(*p~VS!mb? S-1hq[Z:Nd|xt8%%U /RjYKu !kƟpBk=r=OӃK+rN.=ȗ¤|ǥOo՞=Rz©iyu|eNr(Td^䣚nk@68$>5<8B|アQWQ?eb.zM$RP+BWZZjS4|S3\+ɖoƶړGt)D,k猂TMf҂rCCNEV3]y$J,hXL?~Zwi'"f6r{Gi}k߈r"7Զ&H Taoj55Oz\3v^|;r};>:mժd $ҡlK(O8>P- O --EHXrS7 Ҧ-' 19^OV㞛v<|sr}*$•l:7$mAK4!5פ%_\5ɉW6]Vj#kuLҽ#(\LQ:\ԍ٪)YznOWs[)N-f *15k$v%A缍wG&q81Kz%crZ/wmQCW飛 ( jߞ/M%lpΧ12xhX_xXIWA͋ uJB!d{bq%]쉫[O SY6 ! 9jE%JB Ewî3` Ӣ?2Ksz%ҲsDk&᫠@O2ٻzb{H\FO7 /LxhXb:D::VTpɰxkGbYp8Kk15 MJDug~gD C'v{մ޳T+otɔx!^,ndse#=z/* Uo>9HҥW&H9j|Vh1q*בڽ*Y|KgjּTl.{;}Kilv lY[?"]x%`N00͢G|?E‡QӂYƃ&uKlr܋ ^j>T{\} .!nVtF^ǖ 69'am;rr^2pܤ[KR  ;g aa^rxDa[ΞG :7)>3:K3GJt̽jR{_I痒܌ 2$9VFJOubgt3gno$؂Q`s!4O'X* n0q% GʭF1ɮpźF\Jz~<=Eҿ'QT>| ^̬ .os;kB"1-Ú{ψIa0yf뭈ª`C[^N.2O[˅#~s)%6~яiN5yl{>+(vU9#y30727[ڗԽ캎~xT2@~Zn/I4śk6l2ʢ,,aL\g1bC FO_u-Yj?ezud5N~_ (DoR# N\/ ^DC +Ӌ/VN_xt>9β &px9"OJb z2g#[1|H~JgFNr(\q x*_lYs0Vԕve"]ͤB7Tm<R^З7elYe;{pF+G>w7MGz"#us <V1Ef,ݘMƖ]yBqfNh,u L${E%47Vz-+aMԬZ FQa ~:1  ?wl`ni srzE_Er2{hHX7;:7ܐZ[CHl(71-IƝVĖAS8EC? N$=L{=tR }r^f2k>ddǦ2m/WLl_Pbəw\8uxUA*1 B֙0-!"inAJı0HS^j$[^̐^73^g^{{2́4pӓ~Pea%t\M;puIcU~:1_ԟeCŏA= ς0uE{C&yN'dM8N>وaKS t> ^>U'Rg,b[pjI֠)MPf.| @1 _x!nOHNMH8x,sɃmu&Y:A4 ,3WeTp. U+ӬgS;RAC[qOSwLQGRET"f}[KlCQμMk"UAG6;MGw73c3M8DP?'657;[}yד5,=r8 'L췙HX['08d&T)eذ+{>O4ةD+ %Սyc j/f[=Z~c=J j3ߜzӼY!!5IceT4s+ f1k=-M3'5:cB7e{ [8ol |֖2[3r*I"ϴ[zG,j=[W}3jDo>Y g)>kM-[\P& ArK5򴣺Ut)I*7kژެFXF,a:/}pϜB2tھ;b1| ;d⭟1]T7d!U$.sP>ӋI0 Zx\hQ)×tMH-g]hb8VeOJѓ#{ԣv90ԪD1_:- zM7VT PLr/L QKmPs @i5[ϑB_ ݚ5T Lv%V  "_& vBsp/@Q:Ce˼$V0_ã/c<{!VrQJJQpG?k ΃ >^rTT\=Y:t7~GwD()z^?p&Om#Oɘ%4,NsLԻ VIā-E˂;2 *N8V\_ƒ#+ľ 5~@Sf k"TִcKQr1D`4agnl/:G_:-&Ac{%.%9 I['KP{cG]^6Xjđ>| rz,ȏḠ( S}uSA4ev( %2Mqi7Sab_mL˧"z̎\ ?i/PDykgzS>YG`̤_S {6|؝2uY6W) X |ZA +W pinP@du@"=!l*QeEZ;jprUI>]Y1<%L+m޵S6Sc%$ )ȼ W-OUgEC Q׋*D krezR@HhJ{:4/JWJWIHul1-(FaZW7ީp5dTYDS\ yt4`]ОmBCJ@nrkߴj^j?3E xobk6 eHDB5E6_Y6G{A89NeZf}0fY bܱM>%٤TJ9t|L5ha`^`o*L6 O# YgN|ۅz[zYYŚxѝYJov iJg [˷t} m&RV̪ &|$=9şɣ3vl8N]ށV6b+_1a( x bÐWus; >щ$X?Oo ~lGY.ԏ%BWu- /yA(YywrV L֨qVDuNc{1aϣS?~;u3$;goW#-:'GL*lo9]'i#7%EpN!,+!71FV:(5,uJ\W xDrm]gAI+uFcR@lo0Ͱuq K 8՜q|M0xN cO׷aGiTv˸_ϋJYE ;}OH~<}QG1`|F`wy7^Ws9-z-+Ըx|dl EGYUb7S@7TZ"`$" ,pr &gnE@k"" Dڻ IتٔDKRr%ahr:Ouj[dKr5L{7U NrKw?Y9K`<6Eeq:ݪ1lneS&8kPPlߐ~zdfGQ`d>vS'w t4ӋcWHu86}aEj}s:Ý7=O01B35IFY"8酣]αVK|[2[;qO,EgO|v4ηho`g%{PSC^Q[?,+ƾ LS)7Ӌi-!#C&d[V~$of,b;羧{LM xq.&=yELOrja"V YKڧ} W(&>޸3q{\hm -1$[3|ggw;Yg'](}SH%aSr5kF 7TU[K d; ~T#`\ZЖh.$&A[OE-SϺ:id`{0O*մ6[yĠYTf({|dž>fV.UTrL@o Txv[mn/j|ngⅤſ3T^w/2-B"ݰ.[oɋ(M("subDGfDZZW+]Ğ?N3k0(039XRD"x:'M0K?l_眊N^y&O6Vp)HȮf4vTu R]җlSj$אK,AN\'JSe+VFLH>^mܧp{uV&;?| ױz;gxF ^ˠs ĊRvU|3Z0A%br@$,5k%8klnŜQ#uD0C9 +R{lfvBLy_<Ɋ;Y+3 댃}s| ֽA b~."L6#9-:3MOZzGHF2[_%XMّ0 Ƭ^ V\f+zPa+3bJskejʶsP$֐z7q|a u}Z2Q-P*Zz{̶I2S JvlŖ ̂MGP?PiDǾ,~Go""2ZmI;7;yEendstream endobj 262 0 obj << /Filter /FlateDecode /Length1 2610 /Length2 17144 /Length3 0 /Length 18646 >> stream xڌeTX-ׂݥpwwwwNp!wנ3}]Us=@A l`pwe`adʫʩ0Y((Ԭ\m9@g+{?dDƮ +HT f `app23Xy+ 3v23d.^V O 6p:[]-v ƶUS+`hll!@Cr]@3_i&Lj@Pr`a V@{P(:GXY5 Yllj`hleeo0%]=]f ۺ8ݍlM@o VrO.V..VePD.'f 4Uދ;xVfbȤnoftp033s t=M-r# Bq1v\݀~>2!̬L]&@ +{Ad?4V]fK4gf^n4ݿIq02X,\NNMoà,@o&l =(8z̦?,ϻ ea $lmG?2vs-hUoQM?-4rҮƠe <#;?d+ +Ov,5z=&F{W rr"8L"I70F<&3I7b0IFl&߈$F ȟo/SXL*ȟoU@5~#w]7y$E, 3.V.ښE oUY?TAљ;f*gllj]\殿lYLE Y7(vv 2&!(x3[?`DLrwr-oPU@[a/۟>@"-ݯ?E@[:K/GK &? 8#U7L]<ٻٙuZgr#Mo6Ȥ#R޲Lq+vPmLar$`Nn@3?ZQUP&-p._X wXGoAYz8{AȒ7Ws 9 :"~@SSP:a >N77v8[Fo 6셷iybJc5q\6<"]0 O?("dOx$5(uAZyM! :@tspC@\Y\ x@Ã|(e 37-l(eJFw]F "s ?M˹x{X-dҗFP\ӗZLO̥ pILblV:NQC`r6w)#R<;d$sV#Lw{Pb>`JG.jFOZI ݽ&Ip&6k?ϧn hT!zgdu |馨e0%t;Yڌ=SwyMgu~Vx7NO%kyN7O Wv' O-?^{HQа#XN.ftz*S{uB fT! ?@PQ.b᛭DidpML2k/3ϣe-y8m0J:[/(ĮCFifąݟyuKZW婗1D@[JLE۴ƃ֋2ċ?1|w#"#J0f9S!d?g"Hȱq/]ذ<+7=h8ҽ:qP#fUIvnWleH<;r&S8ަ;76N2 v|Gdd@ӻ뀽>5-`Li::Tj(>0P9\V.cp(064(*5ӟDat 늸|@1pO*Z{CpT; L Lsz3 'IՃEiv7E OMwť9MV2~}FfZgak,+EEC#gbS>#f]>\vI+嵢SdLg~Bp=.1(κfk?QGnhlRc 塧= #!Q2vX.ÅpТ1?J .j&<(ahO2~ݿ۔V*ܠ.S \~)^`?mhҀrC:PESiI )cf{ ~[ͤ&ev0݇lUAWH c)P=2D3KR35 X_$-$ e0mQڌ^/B"*Zœr|xдDgJ{tҚxWܰFPH'NYDNֆax؎¬zYTJTX7`sWj}x?_Eb#\p]3-<)FQڕ>|]}tQ2m=ݫ+bUVeX*x Mr$.zOt1?(bϘ;O 5N[{;̈́+Mƞ5.(FN ˍ(g|nSS3`]͚:5ܟE+t3)4~$jt|Qy`CDŽ6TGkٵgvG" jv cdD[⨿j( -hg(>dUXMun|62"! cdc0]0lw[[fguYp/T5ES QWAjǭaH QWX,ISx>_MVG#|j\tT\2|=[8I2@?ޝ-@!%_sh£N(DUFҀg0]K<+Rv_]gtpc/hZecE)kQGc59sf !Paq(dxh=0lMT0[BmF|_[Df̮to&/">!%t4[ΈF2doU2-Q^߹W\NVvU]ͺWwDsȆe,,Fo6,75wdJT'Qö%GVbL8A3ޝ0 b$f\ltdx7Ov"{`oBM gr N_lS}[kkXwR<V&2&zU}::ٜi&Ȅ!(V6Fb gu̥gTz oa?y#]YS'qP,L|I?CHE% j3EլіE'=ӄ4ꚴl%ʝ2)0 Q:# mXp4,RzguWzbn'Q$w򫄄+G\Tˈ D4Cn!xhY=6{TJGhOct#х{S=`kAU彖V twIjUgozP:nhlå:O0j'it?^:O [h[R>jȭ[яp-'/`Jl8S'ϦQ쓤+]Vd\, yۢue\ktm+z-Ʃ8o~zƌ`x7EkamtW\1".p X4_IgB3ߢge bډG"5_. ? c|8šjy-gr3Z ^2+2pRt su=uBnB!AlrvG& ;[r3.,υm t',CBA:^A !9*jP_[--.LYEY'"0xZ> l#Gh5r? 細uM3DB5`(N+9pOR<4|.Z:=BKoZ^Ee|އ>Bgi*HGe "Piml\ˢrV*2|u{= le߄ AˆAQߒ9aluZ>rρ]7/*?`+n `K;Ee%%uO8 '|6hY>qGɛS4%" u&|Rx:M8[r'c&W˶__3gzѭ˴4;D,1)`d+ⓗI `{)C>+TPx lJiHs^7K 7$UeK)O^fxG/K:ϛI5\*Q3zLGOo8l5m_ efI[+NR6g~-܃c5wHP.[SW]VjgCv(CMla$o[n|oOd<4UUߎ#Eu;6`NdSy>xH-I` PPH2SEH9* }%<(7o¡+zZ )۶^wdNyW.4G},<*&%$' h'ٗ5PmDI=qzYң!2#_H݌ˌ^\]ζ2Ǐك'~{%UNc@;Ob:ߺu f& $f#no-3E*0bqE [7k`q1K~ k\H,#QArFNAqNU8dHgHg[OΌuq=TlEw9`搷>Xp]xǵWr&z5e>+WyY֡қڦ埱䤢Unx+MDg 櫼SapEV8L0@O[Zٌ&)@)sUJgﱩ&!A`|J0Sd̋k#&Uat|"I%z |{9e} ՃUlR~5^1j`eD-^,GY9Z!Z K]O~$Rp]<xE 9\ &<HWgT5k]( :Fdq< zJՅKzaG xr_UWS +bC$V k8kRs/ϣ[k;csORy;wCJ&NYT:%9O-NiJ+KYUPzUw轆Hzɦ?fMYooom{aӆ@_]0M j^P_J%6Ֆ@7 {a=?&)y-r{JNT&Yf)FVN{JZ&|2;ok!sZ]/i)%Va> a+ޡʴ厥s/7 $ﷄv8kUO1΋ސyAݪTŤ<|yc]?t=vEp**_ ^l+f8ÂARoД&oL~=aKyUʙ-b7Z|Km'V?lΝrҊI$FWp8ш߳^!\<\x9)öuG»HCM4]9,7%Mexo>u}oת` cʄKcKY Ӌ.rW)dPBu!CH킩kf2SS&l:9٧ b|bd$g*Kc 4-o! m UMinY!3``3F\U:GT"nׅ=$$5Ӄ5ժO@g-֏+\nKW3BG&_H1f^tG .i8(͑\l4~)XĖY|Cj"zFqsxT8-,Q$Wb:iӠ DJ=BN(h(OpWt<1&r~ZVH]oo)pa\@?ե⹸4CO! e.K7RAchڅ1}a1ՌHgu@|I=K݇>8bwKX csc{4Py[J"O#tu޵ҞCh ՝Ed&4 ByjIKr7 qSB؜}NiD6F2$ˤ"1Qii|)ZYg|5gGTts8Q(|Xw8 u6[ci[}xDwY w3̘KU>yT>r9M0JFB2;8"qmKɨnթtlFbk_/dWŬ 0Ps*<qŏYNroWf-J4\2Cܣ>l 6"4RE[ȱO>}X'dIPH+XKS >°9tthMS q5)\;Y{Sy)<6909䝜USq0#~ˎ\m%\Ϥ)|y t/$gaө/[#Q-i(ήkaǗHSR[Ư.y4gdC$Rg䋛gK0 CHJ@ g~tmXX4|Cap ֓ *{|ɞ!YeO)!\|u h{mdNNX>18NdžL`ßݜ% %f*4_@8iAQFjDamG\H{>Y1Dwx1L8e([rF&OuEqX|͔J6jt]-L$\o@#cv T Ygj}2q9G!'4 ՠJ0{X3*{!m hm:)AdqA[*sv[Cݗߌ5CcIs֘hw;1epG?F;(󶾟5 [@OŽl3Eϗkxe؏A&}"aQAuNIa"Zx>"KiZLn![\f IMhq[).|i(:d>'ܴ 7V]nnHΣ3^Iũ׼)ɂK]qGp4 f=hN͸S!_L2;0>zJ%N2i,eA A8I]hh$N&+?P/iOwYஊ پ;?cE oN#eDNpCǐ9;j%9*M Xζe#l?tDwA+na_S@,J;Ia!nJ<KoקWL=%gs;YNT$;qޓxd]3pŶ_bpL1XXq<0kֻL48g~P628nGA4oPc-t{m=Fp&\Dq(7leSbnNpnĀAFLyD8<)Fϴ Ӳ?Y>*g9m_Wm&o䥇N}iuvQ㐱91?"l m%F:h]idY_㿎b5D`H)V橭0Z@<,}J⹛U\ӡjX)yz-.J4,z4lI9E[>؀'I[`nSa ͊w(ddI8lh+Oii;᯼X4kՍc+ޘDbvP(xk& &$.dR:jr?> Di#-243r_$GK\*BOp,DvT",]&,#o%k.ikFscظnF5=p{*kY<…~+=0]="fqE@nqUxoZ6dk$a}qG4K 9E sX7:y1]`aDK\[!\B7͙-lwcpkzILL)sgұR8CpPJ8vY xy]ӃZJAiP/6]48p,҇x$VrSW`d(h/%og~VK+|R{Vbt/h%+V=jSjn+h|̀ZB78xw7+xt*,SwӎP 9 #BHnwY H,p rk"9R5\QE. jb.[^.RYcQ2T =;&vuۏXRij |E__uq]cQCFqRk t웶ii/3eGB%ļpvaʮu9b}:&;|iF"Xe_3ttP (r?sB>ݛ?Qڷ̽{*f~,"T5y*) F?4KѻtVDOTLY%O8;+tBbX<ɻS5KI2qԛ˵a%NHLv߇=ZI>*h R]8C+ŧu\}YҼɜ#+ oW+0[߬M(9o+}v*G-,}7 rXevϥʙy׾[Hr 7TADGe0("/v~ PN{"Dž`- *Le:vPNVJ0Kg_n9W dXg*VC- W -KO͌WZY$O#@#4фq*`Y\9-'[xޙKu}fv;xObJǧ.=Jp7}^}JwВ{tC6-1h#wS0\%V}8Wb0ƌ1 |{*_}QAWJʋ]Ѧ8a'?YZcs7 Ů]iKOt$4I*'jB`Pv#XS?sQT>TL!.qR,מ +o[ML#?x+K|#4r!&I1^*H)u٠3]@p6fy;mayd8AfpUEQa ԩ(:K0?p)3*!]VT&HƔ]^L=qCB;Q^R]eaҁ&i[gC4Gq,J2~V /3A ns1-^jO]=pPY!<0ZV̳k!{gɰ ?3dx[6DR]#KRT4wixJ{I,ִlX)ŭaU&.@+N2fy$wjF$Dq_Ħ 3ܷ? i01*SV_&>D^#&VH$ׂ M:%N8mQܽx| (9{YiɡozLE^OZ:Mpv#@EK`"8Rrؕ>ȵ^TJ%}j]1L\4 -{P%\!m8b)3񔣐}0SyU6^لazMqwLOk~UiY!Mœ.lʙϳ(Qscx{O&81Mg,&a|Zgg`0G4+7.6O0d~ xºG퓨:CS:_&u{B|:vNysy4mmk '{;j4,g\E} XfOtG|@Ns"QHkB:f4Kzl ӅPMrzρ?5.Y֒'Յ @L(X(-AJ .|ʰQ3Mจc-,X Hz%Z}b\}+#zaUH%U:Ea [ފը<3Q\6ZhX ImGfDqZ|ǜ\1xҝp3B兝}C)-kLƙ!L %PtsL^n1a2P 7#3K)8@}4YlT+=٦\ww!* x[| pUMN%$ET?ΏN+TS6V*߼8 $z;E]VW׬4C*چgRQ.)uN .do[Lν½Rkiv燨 |vxLѫ@ݖwzܟd%|OE(wd8eƘR)kCdJB1pQ``e&[>)q3|g:M SIMC[]L⎭AtbxXL.:vàífqgS @<;LC+l '>R۝D+3u:0F"nr|L߄9Y}C90/}Kf~bCY;Gl |79x)tYyfm\Pȕ;+.JI˩ÕR"~%DjȒ98:"`l|JPn]m"nѯ'++T[BNqdֿ&ЕT΄i?x}{,S{Ⱦ%#*fcLH3s?a!PJ-Z-=zꋎM^"*- @ѻMC~ {7vuk }0!~ o#%M! gܮbl\7BaKوSbnE.gN CO}?oXavQ=nCJ'9#EypM D0ԡc}snf+[!R껒HxgpW5ny{F%5դ<' ܏ S%GOjc'PbC");h%Ӑhd 5ȩ ͖R $IqnF *4E 2ʚR""$2&򕬴dMOOL+toz68G\rjRPjbd;+(}. jH ^ `F-{-=P AXh Q){ЦeI 7SGeóI|Wcb^&Rn]b1uwqR8sҼfWh֟o|昼ahFwj| ;)9`rwzTM1]U;L?0rGc#RLk#%R|ҊL(zXM=#m>G#J<~$ع4}PhOO MD7nÓ:eaeG Ns j/p8vwin2<曰x', uN r׏4w=O`u{Y櫺DB>pFw)= ˜Z36=DH2wm͍Fp{Tdi 37rj^ +ŪfEr>:)ĵ"')ZҚ8 ʧ<d7`~(_w4*w1e(fEoCڤhms^"B"^ +5֟~~ߨ֞V6 ` :\j\PtYyupp ddY& (xTZ-$;g3mIT\A)W%v""c4t"-= X?썷}3*zy7^_R~Q̉c?>=hhtA%LKՈhT[eu2,9i{ N_*EYt]:5)Bsu-z=;,ӯ`=&\DY7-8*LSd/斈M3oB˦\e|2צ.EA9IMٕ$/jM^/&Kuj}σ[cg?|Ԩ t|6N2fm`u608jE 5}&uQa+L*cmjwke [G$B< +af@iPqWaBvbYP3 P}@=~{R%Vbx;6u'|а"5HşYQb\U!"`zذTvVIg$ CUA" ^`p,P',|/'[gXZYiITAR ` 6e"M_/aaA-cn^t\`k Cqtz~87;aee (Ac2R(Vٮas` 8< wejh7~{+%g9xaWKC &Ԍ_!L}&ǂ,gUvsLAVfLryFEIX Zqow.waĤȉ\4D-vlIΤp޽E3)=$.ɺpL7'!x"+_MJ0#Je}5JVXKșw7yP(VTOVzJ[Jtƛ>gZuG4_@:AY"F(艮[XJChOTs7-$b #DU{^-B^V0:Vgi"v r|z+qd9 &P"ɺq9*XFlGD'tXM`qA*nU[B>DKUvwJw*oquG?I< VAG0 V,2Uj14WuQ&j;UBDPoJ,๷D|>0&zBpP}E2s ~'MqӃS#v^mF*fGt?b=`E`:j^o4ҵ=(j+{!pq(d݄n N_)jG,$O1ayLTHA o턗mTj"w(Pd̀yzcM^V: ^ Zza /,AW"NˡgfXů>V0` i  5\ lɞXCendstream endobj 263 0 obj << /Filter /FlateDecode /Length1 1417 /Length2 6088 /Length3 0 /Length 7047 >> stream xڍw4\m׶mDoA0ނ轎.fw%^"D!'Zt hQ޿I>}k:s{_gpv0U$/$*B $" 88h G"A2A @MOPH\ZHB @R(i2 nj"0wtBcu r$~\a(8@N0WLG(FB0JpsBݤ H7v<`(/=i.7 Gt@{CP0 O= Ĵ5zn0`?|?w($ w?#~'CP G8.0 !.HL> wa*!nh/`Yatu!_S`P uHoߦa1 Pq9@1$ɂa>P'_-|`BnH7,|= ^0  -E`p73(@! ;+̐#.&_**"}R@~a1P$![^}GE ( S:wEbfG@b ("K࿗'@\.B0CD ⿡?d{wT Ev~!?pU^:1K Gz=z0) 0><^<0';hMUP/ !(%¨{ā$ : Q_*) 4 [jD0J= e= &FPpO' K}3N̄^YUm~$_dMjN Z;7*dό/~`=gQ]qdK>N?6JVg,]jg!o&wlspV'{ߥ.R|I(KƼ!ήhIgμk麙lZOIes=]SmL-1^aK;w"*2]E L^^zgR۩Q_o/W{zQ|M:K/`rη\ "pԽ9G(3dWf FMЃ=(<3>bgOeǟD߻t?A5Ǹk)ⓍM40Ch9eK_$MB˅${x}8@ՒbȓO.hrP9ő-ڐzOUmscU2MVLLjM,v,עoo'}f|E(ĦM6e!O*Y`z`䙬]] ;pr^&fP`МzQWá޺lR>_KG׳+qj,d穞uUDt~xOa~Ŧ-]Oas۬7'^U*.@/3 0U$/eY(~%*siKk_^}5LmhIcCmtqvœWqz]=O'2ШB2\&ūA4 N/OpR'IIޅ1Y^تø [dd]ZޕWzTN46a%4]<=N## Z9mpc\Gd AYƬ6eOf|<ċԨV7OXv42ՙ ^ 1='\\nV~[,Ǵ N)ITE(y<ވ%|fǻV+j4||ATUL˜MHZ‰r5$Qy+?N%S(c_P }E.".<|JNoRNp6۔y=ÍwTX<xaQVwn8ssSb_V1vxITYs[bK-ˣ\pDBT#¢jZЕ^P$%ktM! 6swwx߮O&0_Ы+Z9=g+%rAq&/+SZp{i~:iØ烖5pB,=cj|E ZTʋH\*ҧFE[Sa{]@l_c8Q7[Enӯ?f?K/i4J= M$\3xQd%}m:ܴB\`U>=Wk$^Tx~# ~h,SBH#:H۸]AVz5MN?N"< M@+@Y# 'yaOo]NLQL9d7t+Y)!jZ% TcGAUn0eL WFRSc>&#B*뭶 ^7z\ȬcݑM;q\yb*!BuTWIWxݔ*Zj QP `ՔMF^lC}v ِ4B2c%7w\-jȣMFӆ2`Ȑ{S+\^g.v+;w2^#ϰң݊2-pl;V7vT9Hǻ?S&*r9'P j cE67A ddļQ0MeHRH"uLː =|l@!R""G%HUdp(!?éac"#Fk6 ɧA'bIֱ).L'֎ ymq֓[?-wުUj5~$J@Y9(PWaKT❚j IE:tzK-cͷ?}i\\ZwuUuX 6`@5B"%j*u:в5$h(5IM7}6RVy/$tǣpt7`Q ~B}-CCxꠜOpZ){^ }q*w߈2{ 3OsU0^$/~3O;̬($}ӵCIN4GzsD4aa.&׽=>.'tnY=(XuI ʭ5oLz2|pyt@EͷLo>w9zoNOԕϵw\&8zZ.̙ .Bn=Jnf]6X#B Th=R\b}"*) 9[lfmeLϚlrMDeSi]x 85+GLaRksOCK-N,nfUyV+(ݖbpLbYk*1&e/Je5EKOUe3j`FuVa2o=&9=&YAIG4i=7C)lXȳqpw֝q( sAO2 wLz9afO+b*]imKw%`dwFp#CCأ)pkyuqs!-<5qůE-˕=;E\]&Q vk6;~n1f> yUSKC> i1|SEQܼN'fQ2|8QPp7+C>GCxu3zVUjmӯ>C_?ƙG4wϮ|׳5-`o) Tq gNBF-jN3^41:9օQsXulZNYy8Xn Wi(痃cd5]juSD d(NSM<2)Ru!>m|,g(s1z5j o\Ƃ'?=F|vK:M(w\ n9< NyE+ujuU]F#^KT?` `$?~7!e&N摞Doh *B%Q/l >N}S#, lReM0aSk k{x"-_]T?fޟh=4–;ɧB }mS9Vq% IH( 2[6|ORE"8Y/i/K}p"ӵA{; Uf7 h*lClvnƉ("v4&CO_IÆw}M[sVwsϞ Ȱ^.|q򉴐thmݸ/lS2-8ߝ=t9Tzo :5t~,5h=] ޜ$c]٣7ƐppWX8: w(#v&!**qXp2Y jI7q{'YWPf@߷~wIXk84X;ȮUЅZwswl>O4p+6_H:)T|T(g^ >ku忹hFxe6KL}vn]܁GrOTl=v_vHl=ݰ}O:9BaN,J\yj SI[0'v_uJ!bj\>.<{GQ1J#&6-cc;բuG{}W7HkܹdH˙gL50'IFic)H("5èT~^3#ll2# 3bۭ, q8 Ir_M:Jv^}Ь[i$||-!f#?^ y=%`nA\yWcEwŖұE&J ۲<+2SsǷlUОfc#,srvǸ+4PJ.QY[DN RJrE=B+ю`"=4ēdFGMImmE`? *oQN].K6QJ&כ3qm; Q8n;yxw0&-+3ԊV2 {~&{Qq*1{tH2Ҕ$qsZ!utΐ\_~S>-ZEao fyY nNR2$ʙߑFs+F;=(R]{ҝz؍ciyu[n_ -Jp~ ŭvȧwY$Ej ^fgf~!-@rPXQUtD_ƕ@CsSl܉$O >Y¢B,ے :?p?4+4V%l7nP5̼7Zm4Ӄ**b2c^\W82Q`(k~rڊ%tEP]M-WezR(i[\6+}CG0L=V6HoT[p[7w uaukdJGlf?,h,+ w-p\٪n,;fDZ.W y[㋙5NMD? )nQ4~É9~V߸Iph8H5Tؔ*qnex v[Uqendstream endobj 264 0 obj << /Filter /FlateDecode /Length1 2110 /Length2 17596 /Length3 0 /Length 18861 >> stream xڌP]҆ !&,][p_%[\/{>W[T|{w9!#RP2l"r*RLFFzFFfx22dj&@ ?"DM mr `fdO Ȟ j 4A6&d" [7{{<(L\\/Y lr& #lAkhBo`@7㧢J&&&ƀ$ M- btCdb`ox7XLlޗ8޳d_lMl,ZD6/602Yظm@+qYzGWGZ_V@+K7 ) [=сhFy?f1c_MݍߗkirmMadˠjs2w̻  03@o'_w ^ [ /xg?KLLc# hgw큮`O:f r3I*[ W  x}{QU)/!  3ȃ@1vm[~ Vnxg'ِO U7@˙W}Fl_v8Xhd^eژ(wW12Y[ޯodo^1#_#07pwbx0Ϫ-`9/kXv/b0!0!.  X =z.޳%| =zϧX =zϧ_zgwF%wcK2N?dGf_d}`fkn6?]?]?z4.;U+^RؾO2߿ vPB: u ug<߿ g?~yd`Q~_#B7̊4pv +뼯'%?|IܯMc撷&K ,FK pnvhSZ)4cY'_2]]Z,6fXJC.ׂTشv{Uw0'08,BUµBsRWRGٶVh[GGH2G)=x2SY1J_JC1eLݩ+ɫH)%fMdHM8lWCm].ђְDBw6r t>q8%8铀:lTO8]BtFLFpQ@ZO;J`Y0Vz~(dD(aOI <'$cO  -Chq `gƏ9}z!} =YβĶʏIt%8yf^Wm[䉥S7Ry@ٗ`z^[zUz['10r\H]#@DQoa]ue}7 zb˺/yijˊ``W,eN%^huN?W'qt֒]y!I|El:.=͝#!G[y.KHjSw\(欞RCbI}˨EI:q8GylSgʃWHo@ȡ18 F.}_rKe %Ƣ(QV]yYʪ')G,Lav0;iQn.Nx>\r[;S"]x#W6ɛW1VmGٝ] 2\j|)ɐkCt2"!dGOP;_Z:D @8ȁcGKͷxM/".Dv,c5C{Ƙ\'w`S(Fͤ:?O|)Svr#fë,h5d|ܔV}G: O94p"!GO&6?k}VeNH 29%+,sUI%oKK*)7UZ|^g<m;Oۇ4uk\:SXYA]:1jmhVo(4EBR.J}I:X֞榝 (V Í8&SQrGܚ%ў <5ig3VEP]R_*26緱ݭH =k01>ϧX]-<K;4K-Lb*8z]^t9]3y+5ˣIGGi RXtUa{c]=, xĄOenqeի`A8U`d S~)JQ*B>}&h!=uEXIRA0? 'u֭}`.T?~@چ?$xth+g;kcܲie+ӫ|G %@&A 'Hq yVzĠCm]͞KHFD6bιlQGh}vTLNi!IiMԷv 01?D1W\b݇Dy*pW5u\rs5LbtsR@/:tM*խJA e:^e/iYR Ե;nìs-%OE.s7J~7G䊉_;ӧtL5~ 4|ܧQ<[#i~":Nd$ E84 )YYzxG"IoƇ SeN_'&m.$7CeKv*n@&򹞒д#]8-3ޜ.Ctψ Ӓ괺5~-"YAnnvJOyg;?q+"“n-s dgT8I,tre%( 1-цǹ?ٶbd04OKtȦ47z0  }0#4Gnh_4~G , Xu2NJ9 FK X?fXv}4i TjK(y>Hub٧*FtʅEF 9΄6޷Ԋz'*RkU簏EQ*t g1,ՈFtBHf #"vqTem1Ԋ3}b-ieVnyR8-dt̲NaPw`o3wjs\lȏˇ7V9Uپ"!tOS^|z~{ifh$Z5* UxgߴkC6Ķ ,0fg_llwfr {CR̔Vc*H9>3_^a7iOv)r+L>1RtYТomSO6Q0̥W~ClKHؙK Vc3KC24v?MMQ>vCC`Ȣ,‚(\"`+t J 3BBAOa[PΘqM1i:^C^:|(L{]B3dnȡ!rhyI^5ShSç"a~U1% 4:M> J6v쟟kE)t3ޔte=P̄럈7EM67@ȂD8p2? @du<0 ה *|BrONhiUtr699Y7ԫݡGq1SRONN qS֡T43DМRm>g65I9ǡqʴ8劺0;\C6tzwvq x>0,,7kE 箪D #ޕfՔobMB[^>Hb'ݲ>OoI.Kf0P8ªL~)`UA"|o94L"+<Yai[/P~2p #}kKKS$ipi<[8O"ENp9zU 7kx%ڪB]QƼz# U[j_" Q_ZϣW#]̀=^n3˲vAv@Frnb(Oپ-ukr=6{Vi Υt}q[4oES}X GZlO?2ҽ' IZmDQQ}J` 3Uy/{-Dvxj-!U :I͙6w2UŖVGU+D R]c2Y=\niªHt R7u8kkݺi|њל]rYTp&]OECWcݜB)\3·z,!Fs ,m~}>N)7f?׵h9\!p7𫓪 ByEu?>ynV2;he+@0J*b>Ta@!t'}!Hov\FC$EFFɸ ƲI]i?/1se˸;<"&VRFTڌ`T<16PJ;oBA&ue`̩.DCոD ^.k`s},;ꥉ'&~u{D jOS 2grPC)8W4ԣ-|m 0Y ; Sür|fxR鼝DٓqNsĨ Ҕ7QEo!(WiД?=E!vnoV ]8Y0FFMEKeRțER]5'H#]Ef~.hO2Ezq&uAv*Ǵ\*O}lLkZ^MosJ+]<$fJiIZa6gTIrN8Zk $8n6Ás0A_pCRš Պ#-1^a,t cvR<`'vGDGSlcђOT{Plާ9L31ͨe惞У4bXgkU02cuKr}x۲!A(VZ&s2S|xZ'*hfG=O . Q}F݆+"fM^lЦx'LtMVF{lXbՌ Aт%=wK6ۀ,H%6׍"~wpI-c2C$.0^9BrPzv+\-g#[7%"7g|%͉pm:l8uWj&Og&[m _&ҳ~2x MJK- !X~Ǻqz-ka:Sj|~bL D|cm l9ARzAڷ-,_Ӡ>S0Ra5TKHIa|^'? \e#j3)C]BZ 7iem5l蓼 3f-,  @6~qA; ? @G8qS)Ad@<<*]"\y52܋U4R&z/ ,p B_4'Ee%PY;Ѕm]x=MVؘeuVy{.a<_|Vʻ~:ßPL(G>\(] AO FcPc2Ra~9W̖ 2m7^C0BH][9k`C 0HHx`^YڎW%õxn}DP5[Z a2r7;zS[x~yn@MlbxkQ*&U DBlB}3+/VY<5%ǟ ?|!-=ַ;u_Dzk >'%FulQKD'uNxh%=>5r6VnzqcZjnSbpLCI&ÉLZ]a8Oj;^w卧Pf| )kV/%26a%aKW@S&P:&di$M|휒LFo7|0g*l-'U|bX4kc<&(ggfKGhM$^y<[zڑ1$5|cRWµgȆrDg>h/ ͇+Eяƃw LI3"8MN4)1']+o=>a\'v%Ȥ-@(( hx[=JfNT#>~MFW?2jR>*)&fNo_5݉ ޶J[#Wn_{d@Ez|J彑Q6z)$o%ّ=6ZjTu~ֿ:S(;:8F/5ƒLE;W8`RGNxHhS^G$b9=&<ւu+L]ebKvaqK?'qZq~#֗ m`90MHΕ0Gjwe00zF:?#͝Ѱg?>kmf#EKQũךWCO`izNNpu2V9l"803zVZF$IvW(ϻ ʯۻiCdTte!GPX>r:4㧵QF H͏~GcMX|f:Td5~s4TqutjpSiݺ)PFtݧ9J+AkfBŠ<Ye }J*DÕsbԧkk!@Y/Q}D)eViMB7a&JoB.'yt~Z0h' -ÿ^3._uk{5lx".ɡs*[w>鶻DW@мfj"V-vw@{uápŵg'p`.&t\8dv ~3&킶`|+U<ąn;:ĥ4sĂEL)9CtAfs`z6I%V61p<%=z-hD\lP֪,KMV&zJBH)U0RQ'~<v]sHJev^Ϗ¬D"0m}iVxX]xP)q s~x((9YBqzB_gf\"']0&>f;'(ͯ:a<[`w<֑r'^8Q%] ߈^B6+_ܪ3G`r/ieV)p |5wHElECZ-?וE VجM$V99i*P0j#5S;rfnR:{U&/c>"+1VNH+[g߳'m_T8#b9ƨ%;L7%x-:P>RRQUŴg#'Jȇqo6;,ppzOn`L}N1j -1;Cy(g[h]h* [O$4kU1ThS>EnHF@o7!j뫛Lcy$hqC!@`<"ÿsIvhu2)J&cH-iܟJ{S -S?T Ҷ ޫW~yC8A?iĭ_HBoп,mѡⅅ^y`Wf޸/M ?|ֽi:=+$4 +צVb3H1;/U~'8z;,7y9o@X1cBK.$ZA4eUZT@<82gD 1p[ͣ&5s ?+|ǵ8+!0zdJaWK1J+IEa An4w̞)LEa,-$XTKei6 Qd+$AT:¶dy͵*-WGHTeJ'gpaߢp"u0PQQk bؤŋws1&q13$Te9[zv!KW)[8A%0H%ν5kj,3# vCcзmЫ0L'6ŨnV^|Y/8_NUU.x5kM 'IK$C1<R>T͠h- ɿi_@BtUة4)rtOjhnެH B@Fs J |g1N&ew):{`JS"ʶ*z VnVɥ5n4'Su^7Oɀt'`;|,M ݂NJSA%?\`RlYAM 61w;Y*:et+3Au䞏J _*Etnr) e$u 4.@ Tu3 }YEl)B&MTjJZ_fn|$0#b޸ |Ǫ"ԅs_1aػԪdT AyHF`A1-:,zZ=: Xf+܉AO=RO)#؊] pth;Ǖ7я7e/gt4$] _>GsQbA}P h*du%@ åV> Do ޲}B@lne씭.],䮃fu'Ç20#W7+ȧ'|BuE1:>2'3,Hjb\n[^4-٬Y+鍊OF«!CL1«e&w9`7Vt<Wpz0H@w)Pzlg>vŽqO/tzG#_@0T t=IކmFIxUHUL$- bz.pjI`r% eu6O9V7W;$VTC5LU+VoHi tmNgl|{xy;5jr}tpofF`ө"j;'dDCib I5x{qG{'X Eb.sBF!;Jy&zfʅQB/lN,p8wG>!b?X+%"*m<8u͖#ȵֶC#m= K53'ϙE;!N +VC<}d54L v.x"a#KwϙﵢqB˶>k- *OԓLkQQuV0WTGez”o==yW6ť.}R3+T9N q4 oޤسO\8V1ȦevY'̈́{U;s%?pLtK{ƱYfD .,:vQC6Xg>NE\UuoE xEm&FjrۑnPX2DUQg/wZqH~6Pnst n +A8C7\GPܳ>ˠ'xV{nIodA#Tk50W>]0FdVe/i{SĩS+BavmD4iWú R*eY@R4.u*7;r[<&*GzP+2 )غ)"O*NDv슦^S{" F`4jsqRv˱h "5]TwR?t Xu:MJʽqY5_5b~:ya [BXnkTJbb{3 U"ԙL?Ʊ-EA䪮udK?z#m~RLDpkl*q! aQ šX>h@ITl5|}yCq5=CJoNqPup6.!qfi y M\>R/ͻ{áp],&@Tޠ]:[4@)aǧ;UXN:N+[襐ydBeU) =5_Tzzyv{Εlt#Z_?JS+I+&ϥC^iDsS)49;!# P LbkAaٰZ|x<m"?!64Y3W6hdX{dw,آ謻av3LhwY!4QC0^jt.8R1K #2Tz6 p31J0F4 $ӝg NqՈ\?Āa`hYVN@ƈc XmΨ@X]ΩFQܖ4z'Q#R$ 6R~ք3; ,QD'^>o14{6UF3ٳThJ4MfRBy}zam YbKxwA%@>XtI80o,E,ha7f3cqlrHtʀGrHzO4{K4\hna8elpU?'h ܃Jsqj&󦡬qюӅ@8؇T} ЎES-+덆=[Z8,j:qhѝ&*wѪu @Б6%m|Y8*z$ڶ(2NQcoŰwrEh JVx"s" ֓d$1r2G kLr)("UECFCɵ*muB6` #QZw)`]!-m"׎R@z]7c <,C$Mh UDQi׫$BRh;!# P&nyȦ%KHysUx+uh!]Gv@<"L14o{e6o7;D[D<0gS(ڳ{;,vߚsqCpKmيbsoƄ0?OBF=~{ .&q=PX҄QW{ʍU6JR "k8(`/`` Gc![4hkOS{!5mm*af5, }V1,x_4G Yp׈ u۞=:k?27 g?$`*;XQ;|k`-*,tKO7ޘ?v:jLnioM' e]rC; 9bkwMRϡ]ډ_ئendstream endobj 265 0 obj << /Filter /FlateDecode /Length1 2635 /Length2 18786 /Length3 0 /Length 20292 >> stream xڌT Si.IwwHHw#%H7Rg^uuJ*M퍁v.xL&&V&&x UK@'gK{;?,DF. P j`f0s0s01Xkh3r43dN.<@mB` l t41XmAMl*&@ Ag`d`d.@Cpt(Nn@S_ F1ST-,P7sq7r@K3eHv1= _,v621u03Yr ..#;ӿ lAFnF6F K7Hř/YTh W}bN@P=_4L],]b؀De@;+ zX0@/1 Dkil8}T/gfZv𿣃@`,=:Lc0ߟ@fjog3*k)iC_ ++(/JFGo]7eW@~#P-ToE_ 419Xfjc /Pf#N8FN FNF&@79LE&6/_ *"AobOR􎮠joPfnKmg _U&-~3 hHfUj jΟ:gSߑAv;ClA~A@]"?!jЋh{(l8ظ:$qݮ+I%wA$̠ ߝaGAAA 9|F 'j@tj@NI?o.@w=& &!V!&g)4Ri>x/;}w}DI tNA]Z!y>iԚ Nyz ~i {`Dоϋz5d x E+R>ƃ{G]X*YҙjQEsyYd.`i/=Pnfs&Hd}OYyko|5^܅GKy>6M-r,]\ɿD9O3Izƣ#'d-9FVًb@[pKg.!\ا? >܉T-F{{VVvRkDl~kId]哣/<4>WOUF ywìǾ2sWdx4TFN 6]*6c08na UǫW1T.'/wwOBivc&+3mrΜRB2Bm JLޛ>7 D@ig}2onJ PŌbb\Ӯ(8,ԟM >Mgt27Zp)t2 YSNE&uDrlIX>Y7o;:c zRǴg^|)Q(92ÁU!P:gS_>ݟ܊R-{c|:|Tl1_mmJ>9E|2hi=_i:?k`򇬤BGLҧ(Zfs<6ܜfNjNU#if#M [i?#=ZJ%L:XvC/`ƺ3#F??H}ʌl2l!d82a!Dʻ$fvٽUe_A%v5:MS29ѵK|/Q 2_l?wܝr]pF[ =-h;Cc(a}=@Vxn.W}qk}k| 6'\CV}Ud%7xJ8{eRoPU/o϶ENiky5!?K"}pS Kֲ B ΂,>D('pj[z,1@7$NM{$BBݙEARR?PadQFp9Mzx/h^ &Ȩ}lYRaȰf'];N Xs_L"XXq0#@{[aA L^ᖅ>tjtg͏|_{s vSTPeZ-wes:ˋOߩSRzP0ZrwzHpc0`'ߋ]n}H.2ݱa25^wOc(&Tro)(kI^qt'nYO6:3e`?ځ2 txmg>G ;NƟr%2!f7_gϬQ?0wȃk>/ [ŒC f?;$^v6S&Zڕو>2ޑQ$*{*@U8Ő.ex8) pտˆr@ ! E1n5 2<&J DkP ud(f/[ѥQ'݉ gfF*\/kg C>!H2o^MRT!Җ' h/s@W@>Yc 9MS/xeh~snsV~/Yvc^2sg *;BIQ 4&Y,Ҽbp04fp} M&"gm- mknLnsSFN0,?dDu3QFSNS#F20&Ov (SiQe0ok% to+(lh(=HŁH=]d^#;=>&گ^"BM$=c&߆aB/3p>@a,GsHT 8 qZΝ/ S O;  xHy!䭼}xL "_ăc<[/o.2A L ΡA43zS ̐JlDLIY\*a >U2^O|AE.8 JЂr Q,.9)I?R ;&99VS&՞' y[ pP"G6оgʗA~zWedt!  vd#mJz8߇Z;_Z!Ɇ3vEr&'|V%'7/ty[MX7:cg+x4=K/4g,Dsez^TC<{Filt/쮛2Ī׏ RX\ZMvS5Qa]f|O]d<꒸f WptG ]t Hn(ZXcC9{tdH_#'^r%*k]w(Zu|ǯ\٧O9ϫDҔ>&ނ)R?b~6}/#nj-!_?$A֟a\4̤ij6ǩ{_EJV$ޕމ)PxA V1Nk䘵:ėȅsT_*6=_Uқ-xޱP^Lq=򾏑L&F7e'V6))9aO_GB5"Ì*pyq(`܈iƠS=wuߥG[̯|MCx\ю ky_|.5i7,sğ 六怰xYބqP2GDw y M:^w(|?%B\F$p)t6edĖnNꊫaf9Aimy gj,0;jINPV#QPK(E:1 wP WHSeRE'YUE|ǔv*wYY_K,HzO(ݩm1a^iGiJw[;fqk&;OLɄ_͹lU35}n62 }WJO/SʧB%SP'Lg(Q}AuWJm-]}u 2K`yw')+x9j* t/jQDxPxN$VƜyNafK"s fŮIbun\YђE=SZ3c O%] q 6qYtC2ns5ɇSop*Jr>[Kݘ'>8M4唿VA)5(ʽ`Χө/ O2N|e=}u~UJG;sx8[ZZb d͉䘂fM)+雾h-?X3vTC>~9 # $LWD]X 0xn4ݘEt-#3d`O[>q^Zlw0K*R#2EG4~oE:hWX96[+ؿ ekWJsGC$jBPFqe(L/s˒5JB5Ñ=؇1婋X740' "#*!wbVp+.(ip[I1yCO_LAw?8eHDFfXC%nw wm Qg+%j϶^HD 75" n5"wlJE{3q Փ3V'ʥUцEUܮї y$pmn6>L?zM;J=~ Ks:/_KS[ԩ<.{hDZ"DEo?W&2͐98gLVCx >=~(d&fKܐGC =(}}& 9et ddF64p 5 ;0yIe-4c%Ȧ Asykvm>u9u3fa~8g=sK '9婻z\M9KF E(;G,,ŏ.pԌ:}B>`zFNH!``w?~2,>01"(s:B$f$딓/80L,K16͐޷V o&p7'fSp$D,z,AWy\^PYճ\P51n08 #^PUL"NC'1B<[zQ⿝`T;Űfb}}qQBfe(@߇m7B[Vc>dT'?UV* rGĪ0Ȍ^!UT4Wuč{NXsQ,7h[DܾkV?7u-:|PbrXcFn*e||nk͛c^ 7q9Ao6e4{M.䩟4q{f2Af(7iidD(f0M,tB梅԰O$;7;YZ2ƛy%L8"uSL%b$,]*:. f{L>.1؋}- 1\=zY[[FK/BJ0)ʰ`vI г>,ʄ.zJitK~jєC/ _@:Դ^}ށ[iԬ1d Z״i 0oUG*\YqYZ2lX-TNt./9iBh6m)#4>!q&Tԙ4-k$cj3gTȏpVZfB"6PBubx`8A;]OsnOOoœ&^u[xR ۃ7 2weImzzձ?b,xְ'Q)%4AZ%`$o$&^be4MJ0џ$huRHCS;%mYJ_ԦUX.o@j]trSMԿuns9yJ9U{}CͶԻY #w.#'.$of=eb~v5e=Tb)߮γSg!GrSDA_zwaJB>[oav5hɱE?/My:-B?AMc%?.vM2S'gm)â16 _~qMȗp+ě։~-jDѼ\v67Q@\r]ًX k.W6]ꉷ2$'E6伒JthvQ'#ZG: 2G=`{\vbrH}u!|6~ޠzg.Û8a t2)uW']jx^@uI$!Ԩ)tWG@Ʉ:Ĺ|SB4='ShE­q;;d(M6 ϻl6(dZNu!'){eR$GD°z< z- JT˹VlpRh3weD\8=S 75k_im:ύ |oΦH!p-S.?WbT($)S5|GA\X& Sw7=NlPRe j4+"Ch-kRyl|׷- 3IL1}kz/tipXl7TuTƸ2g#vUut_rBS+B%1R_us -#JcUMev8Y$;>ϭF߄ER-f.h"HkV:PB|K6%R)' ȭ. ;)!Feѳ#A9w\r!*4SfiP Ut?r,HiYɕpp G7U ݛW|LV㠛R^BO%}tW4"kp:Tf,jO~IHpa8'tHYS19q)4Wu$FKK5 V"CsLl4.yQD}' c,Dv{;T%,?5Ou0ajzVt'hd(\J棊GMLn9ScLiOtu߻6 jYs $6jW=h+7gQ1"Q5OG$^1sÞ:aDVɝS+%)[-5 r":lV7,-$őRB0bDE÷d[}ءꙅ3 0;Ռ95]bsPTTVC$M!cu_eBav~cM}F _9 _ii(wZ.Gp1GDsݡCbtEЯT3GZbWR녡'dw?wEcKU1JivN{q3^Y'os"cw'fJtWn7Vo(Dk3~9ﮍMZ M 9u*ɄFa`:l ^!Ďa:B 4 wђڟV {#+-Gj̺bb;#̐{d8ՋuR`3CĢuROU)AxJ\d)֊P] }+3|JUfz^6-7[S| rZ."(Dxju/:j *WHK ԐwA[@YW 5cTB" ċ)^ .{͑l9N|Ugw8)t fz^ zQK7䯈1^!~Dq,ucykVA"8ɔ %vBS~D&}uZI#TR5ڀWxˉBtlJQW_(9#V̢9 \ 4yO5+ :b! 2™а`ޮ/p%19ϚQ[@5T|̛<0I"B`>^z g 6VܟO1CWG) $Uc1q?Өj6oQh| )] V~*y֯QT&333?aꛅT( ~;vHVJ*8Y n(2 4EwkQ& Fw4 襘d5)fPh|8]$v"p\9EC(}(=I|G4lW=8/@wrG؉rtpmF9E%kO_3J}PFKML _.ܴ$To}O"fշ7B?B}gcy-y݉O~N_R;hmye2wi2˥DF븵 =9qZVKN'Ws X`Ew,\Oǒ9/=v'*ftCr˅`Hܹ5vBRU_I*) ^0J6;VAiⰾA]P!ꀺXfs^yFEUpž=}#ʬ%fPPG=EÎ i"OᮿA4~򿘜 #ֆY 0ԑ. BTNVH((ږ䕿*kn=T/4-'o1wB%ꀼ~Ĕհ.vaqW|Y23)2ddٷN}:.݂ rc E'oȘ>9ȶ; 06@gHk͗WB!pbNݪ 1_v*n@֎\{t+[.%N)$SAKr&Ud$<:(Dѕ3Y\br\$- EJUq~/eV?DќLU{Ծ }2}I`K6`Q"~JT?".g3jDz6nRN7 ȉ7Lv_8 &L-U=㊬^ 9l,u8خitTt|+ղ~URN͑Yis:ׁeaF@ziI T+;-qT6SHِ` $.N4](qQ8& AiT~2ĆUIL9(^]6":1TϩLl(=aRשJԹŹbɏ^^W1э,|d=t=hao>>2 |s-+WRnMo$&: »/6Sc cTd= @mN:xF=@;T"~=ZҙcTܘ[Ԉ`qkwM'szIvQ~H^( /.|>W_E~A4nM`T?Vjf^B\SsOpYڇX2挝KHHGHCeް 1 }EVCuwIwnB؅<tBrtٍjm"SE&t7ݯr=֘ s{Nbӄ 4bLT VvJ|4!Hm+O_>aio*?PΜ?^Gmw|?g 0iT0)F#V㺁bbfeFe 5st.˾:XK]u\Z N@iyReپ3ÖT!H!{wq3/# f?Oד'= .*Ҷ+᷷Dmm)- ԇ7t|\t嬒\>J7Qp֥d7aeMNR|i9>!475yonOkM#زMV3A^b&ݻ)k|cؐ#s~疲噯l#Lm 0Ƃ#z?C5o^T?Ll$kڰZ$i yL;5 TQ2%]@Sˆ\΃2/ 8-~o8h9ϴ4RCTWo,qwh+; &OWcAPTژ$[P ѬlpKdiW{alH$`!5,e2#=_$&1nx)L]|6,SN\%?EL*+d|> bL%Ԥ(A:fs`YU|Dـy˲h<? 㸾rs|H'e#-r! ԩ5O7,SF!nAD؊~LMԥ5߮-gESW[HOX5٪jimQ]7dލtW;v7V9żz4L9 'p V纘O/ yeԘ9rZҷ T<3G Eu'ߊ5Ԭd[o ~E"!/h At8V?#jSc̈́?)zk0JFLk?:"|1s'9hl.B';w.7 L"q;hMʒ/\/؈J6_0OTRJ%Vo20pPsFwIr) T=.uA =b1u`@q#,XQ6Rf6هȑk^래~_@z_<6һi~esKD>oko3nIrYRtw%Vg MZ | w2N+ >uL!g@Me [p:]vm'w2A0R j:|IVϼ!R{M| gؼ~& :^SY;}az6Bu"Z9,B]w4_G^>*KJv|\ׇTN$O:m:CұG~4b(h#ob_zwJ?[3|6ܥp ih[.Վ2Fr r Gy0QD: qZp[ވc{-qYMtB[9!fg3IcKwVӧ BB0CS&Vd{ r.9C-jJzUT9MƠ)`G}= yg3Ɔ2HvMz6GYx3St( hG[1+֓&MOޝ0\fFm>P&r?C"a Zt L^dD]_ߊ tƛjYֆ&OFܹEa|(D%k2\6Nx-e6Ω7fYվ$^f(#j2WEbow^p5vDtS81SZކh !YNjYZΆMqqc}Ġ5DD8ݐ e$gw`_ '%QBREDn 1m\tp&W[J]zu}/.|9ҏv[vOC&|ҍRiF+ 4s j>gwk ?n{Dи8y B3芀{ނt$Uc>B~ CD0$U烉D蘙|=׸o_katg1ݬC+RfyGBz*1nj^0 `29`?Ĕ6SG&ᨙߡ|*`ѕ'[j=Z!+r)Vh1wl97);vo/>q&ߐ`O=>n:S{Ͻfv98|p׎ظk2P?'IO$ faOru,8>G0m3]3<")pXّ^0 ">G*K BހDuX8fp-ڟlaQzۻ;j}Csځ4OE1̽A͊ii~\ǹ> $V{s|8?ťT$)}zBwX{8𐥣 J) r]Gz}5g숣(y;n ƚFWbsP_3qH_;F|;n78OCd gP}fW~"RIƖi<|)`+٫3\7.$tAR$\DXFsB}:91Ap G2B?7u}|gP:o|t_s㜷6N]𝺰י0/Rj;cKÇp_qφOt_)%Ƅ\my)Q[3fu~WxfH ʕSѬe D dRo{gCr& -@'!Tyf*ʵ }zk3ȡ(lV.!Cub8rLutY4[]̼L&;X NFû@HxbmM 9= (Rǎ6),|s0&iνQ@BیӨ5f>~ =V6IJ}0(jrNQPGl>o4mfjg wPmԶGj q\Px;94W|:K[h- A ~"{reE-{ H3d;(BK |&(G3})'>/=2ꁚpA|W)J>Ew냼3Xʞ0YjGӛOhzFpW߬΋/Ek'_&WJT1YajbO*qg }VQ FڃHoe4MJ< `f2]y>Y&될Dew~l}` ƅFUxH> yo!4 \zɨRg '9:v"ꮫ '-kdu ~́b3~#Yfl _P.c/x8" 8؍E"y-}c ښ# P%ʵֈ=#c :zD ZvMm#׈KnXiv,Xj΋[z5R*su U+l{32XZ*,0R4&%@D[bDl,SX#dlz#YB4pX= t@t#kI4Er`fC~-.pږ)0o)çޙѧoj3$?fBgn(x8(,n/.ǔ)=^rG=V<:$jViwc!X\G6!L܀-A"ul)?]?|A}6R{}.;hl3ӇVN& M0^J}Hֱ=w)1g s.RUh&*$À iԹyֹumJlՇ1u,_ !ո>2s~N$ǝm$qGEl"w/TN6-Xv00#M7Ҳ+} o ciAv"M^F3ԅv\8m(d^ɌDȊJ%]C)}LHdJܚV#<ۜ=qs˒%μBl\1TP$U d={(eOPUbdUu܅~A*(- 嫍Z NwN|,]N[]OJ/tEdDqo>dWĭ'Y6r 8x'D|Ozu[way 4p:|cTIq(9 " : g}V̿۔'fB*PC4\z2:E~#Qg)('[2y?+< BWFi`/czLp\+epsKbz h<XQmv=rv/Ljr qQ'muxsMQ-l<U $N ryT2goQh~PNൔgm׍~ͣcnfKLRkYKDqxmt42rRuz?Lks9gi y}h}P}Ā \{5Cz8͛ hiVWV:\AJ.+z=+IBP#Yn~3=i#^i,`iV{m d$XB^eܱqaT*Rzxd h pT4֗En:Q˸f*i a"e`{ڿ/ʹgdU>pۈaq[Xˆ5iV/r'-"mGrp@|4dW8_\Їm{bl0fQ_<}GU-;vb&5J^{E۔<{T KYvu.BNmTŢ}4:}y/Pi.}OSm"h Bo( `v ̕BT{)FSQmdiU([b{v6hfrUD2Pi9dFPSu3*E-5OʶRV̈e,b=l6Xǃ&'N[>Gʍ pm١PĒg53AeԪvg}o/+|8˭jpÌ 6dBt(,Oloo w(/C+IlMPgM~ `/yJRAU?/mK#`-.}|gxݝQx{0ր<ҘㅺXpFfj d( ݩƨNqJ$ s1Xj8.@- TIql%,#B<2aQuKYEXuk -T9bWÏ{ _"*g ^=.UoW7@K ЂF)\|Pır;7WUVg)͍P4nǾŕo.^^|F |#smׂ4^zJ16@tM^ַN(kd;liw n,zJnFX6b(=ł{J1w `7V!x 1v̦Y"@VU Wi531D bFq %,CP粛rlPendstream endobj 266 0 obj << /Type /ObjStm /Length 1809 /Filter /FlateDecode /N 82 /First 710 >> stream xXKo7WP&-H,Uj%-ה4c7!5y&rOd0+Č@y͔ gքsL{yh!وod& 9Z4s+<< pEA"1–Qa R)NIN _hqt=d~ LjM9#Qh6F hRc @B EfI4{x4A WKh т9hW굄)GݳsuvXqrWqϷ B&Q=Ij%IIQd'Y7S9Y1ʫEsmk:tCQʕhZAJ0`&h-M}j>{M_n?>|+^TDendstream endobj 349 0 obj << /Type /ObjStm /Length 1582 /Filter /FlateDecode /N 81 /First 709 >> stream xYsF) )*CB*U(CBvHZ[{ivG e5uOREN9B.$r"!%Sx$ӐOK:gyGڰ"Gɩ0dIcQdMNCӐ7"W $i!Tj-0IقV,lH9Xh.4,4, pl4[urh ^9hAG\$Aؑ a9@N+/`[^h+/3 yCzh6^y84[vNh@f05ab0@~=C°0Oax42L:s<- W3h QpOF >|Ca !1H - \KL,\=f-dᡛA9DǏ':jYfZs_ =y2_37t2|D^mXma ~賱C"l-uxxL"4&6B4b6r2ꪯBGwf=o(|e7ʛ$12JKcIbq;$q9=PH!]UVWӎMO0ӳM-q4~Z~`&sQ|ell8mMd-")NGpƆ0*i} Тޝ͚%RE>S6[*>)f3U^֎ĩ9?fkEgoBWWe=5MlJ[oQ;+G =ulg TaOAxqh8LvK`uG@/pOEwV",gy!n g\fIq ʤz+&1%>CYnfA%}Gt۴E}qz;W* ϙFmgUFik}غ+gˏaIwҭhSI'Q uCؠs:#ps*.g8Y6vve}7͑G.nZy <**y@[~Ӵ/i~;HeǒsE˦c6Mq3 _F?k##jq@ZS"O#n{|K=-_Y&W=?a?Xe媮1b$($q")> /ExtGState << >> /Font << /F1 111 0 R /F2 112 0 R >> /ProcSet [ /PDF /Text ] >> /Subtype /Form /Type /XObject /Length 1165 /Filter /FlateDecode >> stream xXMOGϯɁvWU^HQZL%_Wݳ̂єnS[]_^5ܭ{܃K͇] ;)ݵNCϞ秿Nw5ۉa3>vX0-6wns f7?un7rbPCv$[2M 6T,|j8[VKN^мO98<|3x^\`Cj4aJcJ GV:U.;7ǜ|1X.Ыp@DZ.3–FoaSg SIGd*>XvuгF&\zk#,PS6X-w(>Pob=ճƝ9$dߚ(b<ŠILZQڀU1$. \y/Z!i aGyV*U} |!_ӫ`}@24LJKUTr`{ eXpJ- @Mݑ"qT:# VG' 5Zd@4;,! Sޟuwᩓy-& Nhi(#M9gc6'dn2lNC ,2xxl,tNw*tuVUv*W 'l(FՊ ;[Zff#(88kD@皺m(ƪꖊLJ~Cv @ʢoɦUbĕuf0 pU8 (9(ڣDjB}!${tyﻰ*;ur_ -T9/yuH|^KcEbEA}.#9O-ru~w/"6<dž,p'91.<ܹ3B1gڼFt?k"3Ƣfgic5FGpl`^_ͷ[i_AEJ@ǃx^.rN \Uey\q[~dɑ+Z߈ʤ7I@OhR\ySY #m&z"ܣ8g:,#hJX> stream xwTSϽ7PkhRH H.*1 J"6DTpDQ2(C"QDqpId߼y͛~kg}ֺLX Xňg` lpBF|،l *?Y"1P\8=W%Oɘ4M0J"Y2Vs,[|e92<se'9`2&ctI@o|N6(.sSdl-c(2-yH_/XZ.$&\SM07#1ؙYrfYym";8980m-m(]v^DW~ emi]P`/u}q|^R,g+\Kk)/C_|Rax8t1C^7nfzDp 柇u$/ED˦L L[B@ٹЖX!@~(* {d+} G͋љς}WL$cGD2QZ4 E@@A(q`1D `'u46ptc48.`R0) @Rt CXCP%CBH@Rf[(t CQhz#0 Zl`O828.p|O×X ?:0FBx$ !i@ڐH[EE1PL ⢖V6QP>U(j MFkt,:.FW8c1L&ӎ9ƌaX: rbl1 {{{;}#tp8_\8"Ey.,X%%Gщ1-9ҀKl.oo/O$&'=JvMޞxǥ{=Vs\x ‰N柜>ucKz=s/ol|ϝ?y ^d]ps~:;/;]7|WpQoH!ɻVsnYs}ҽ~4] =>=:`;cܱ'?e~!ańD#G&}'/?^xI֓?+\wx20;5\ӯ_etWf^Qs-mw3+?~O~endstream endobj 433 0 obj << /Filter /FlateDecode /Length 2051 >> stream xYIoFWRhpfhA6rItsre"(𥿽orF-IS Cp}oa2$/ތ/^_zJ*5,Aqenp17C=oxXh kx#o]u=Ac;7ѵ*hUֈpE"Fi#eωJqtE ~O6+נ$Ӗ{~QھLEC@i:e4imC'Q~U2ƭQG/4r׷Mpu$2T8R&ҊU$ME2R2ufDQV=A?J/ϼ|D[N o}s3KLG̽fg6LY{LpS;41 P-h30 A(<#˩ t}\Xx!*ڸ#ǥ7y{elLeQ`PJȸ 9UY-pHh } w]D`T067w"~Zjմb0!KpZÆWy6 xwbeD ƹ|LqO ?2IH'XBpȱcD$ D]K& ń%Lm&"gOT:QE)߲=sa٠qy'Fs81q# cr7P DgMg<1D Z'O*6z;C/'nׅ3$=Os7:`fٰ%`Ű,iE$WC\ڮ ęvt^fPS>0PYH0#= ʐq0*s:FаS='6/LF)Kj v]4Omr! Bm Ėo"ٶYgڦ؎LˆD\LJ֊zB~$PP@%K{RRM6B ڄfsArjEqQ X63Zy5HRCxL f6$-S%* hlKbo`X$Kd qQM+?&˜6qZla ~j.Ă-kEB3/B\!IԙpmG \8yvdtlS:wbLqv dKr] N5ؚe\הDÔqR煚P R&"LI- % {;ovd'\,*\vYQ, V6SfX=DgÈpP#1琭87LӸ_5"Vyj9M,*J{nDk&9R Gsi:imrZ{(u3m>(fciM}o֧2*6W,Rfw \NC* fSm7؂qioU7i,3J!fq*wr,ŕdXG걇(]\0GKn =g? w{ sj૾l[񕹛aGKqW^r?_E--Ӂ ow2Ieq \U#iRUJnV{{"x{wB*cS9K^k[e'Iup=Ϝ&pI@e1u4Dg@Û^&w{gH& (]'(ªs#[zy0$Ged<>ʺ ^8QW6> stream xY[o7~ϯ+dH5.ME)x4Օ4FrHy;ٴI/\sH,y&4.fy.]fXr篫*eK"4aa mbQ?iy҆KuRVWxXfӅ%-V9Y ?-Obi9p7{&U1URJi1OR uIU:X4+#d.{=[,s-TN_Nw qSYVʏT/,"I2fK['xSB,;+?n oHS OYl>ZA #@<pasȤ#Gw3xQKd%d0~Ҋ2dbݸ9G>KB7@^٤ϲփ7ݜ*jXpκל`g:'6F|h :ebJWX<0] :KIUq}+dU$"9YUQS YjØI"Zz~yGt?DcUOlG Z6O[p" =KPy4E!⺫v?f1;iaN^ܕ&8S46P- ~?"ǃK܄h.g|- aHr]7y^q'4mUI=5^T&`q#7ǎiD Q;یH#JHp"k!E4f ĝޤ׎b7 i2eUfC1.|R2G$,Oy.rM5r:+Gv3G.GS$ ';p9+ވFG&g,/A QГΡ39c2k=˪MP= [dITuSIGYa |͞WJq懊ƣW[#׌'i\9w԰@aN ʷ&ʀ*bA< >s ?DOk1 yDo =iD\4Ф_)?DuC5ubƛO +-GRtbX21$%/qL]Y$_{كz*rݺ:k =99L/::L*%U=?k<+Ł?}hW΅Շ =Cq8/J;<)M97}㡁ep/D$.zL)~p?ԘP1~=L.MY -x"$F/ըFAUꗁ¼H@7M.}݀K/ˈ7|HCÅr2y?faZius\&> CzP_^1[uzTkU)LE#lƦulV^r5Iٳ+{1{_~5#(rW#hT~ޗΈ`ALܟw$wQ/+;xQ0:e"><|ApzA\&;LsO iOO 5endstream endobj 435 0 obj << /Filter /FlateDecode /Length 2748 >> stream xZ{o_( {N,'EpD)G#ڽ]$-33U2YMɻ7g'/O =QI\%-',Iqeb1S]MglZ ;m,|=@"~ l7ѩTqb__,ػb F$B2F!o~x=iڿ~3֖Ҽ5pձ;,.T0 KU:2ؒ2>IB ^禰#^`' 8ӇoJUCfE]i9$-`~-ZO^U"0xͽDz zlr17+TV<=EȈtG##$RQ}S  /@_dH4Ob_'kAfF9>{qL85A'(E^&e0PC䯋jJʈJxbFw[ !FTk\9f ]2--m3,Xhe =X%.XᏉJֱ{MJOsF&=>¡*7Kl#Ea8-*kF1!Xǭ.(>ǫDP, uz"Z\cAe ^X^XMJ7u' |L }u]Y z{˙Єw^p@9]gg]ջPxE؎\e|Tb4_ePt2l#louD{xr[w;ѳFrE]-F?{&@ ';rG9$]&,m+IUOj2=Weޙ"30-0uP58q9ɶVLpr6I( kx[KbtdZ8UFˀ@tX/ekW &LsR_i>Ue&e*ΓѼBÍI}kTIʣ\"kp@zi3ޭ֋WY~R]2 ޚ*cy6(53i+vqWiIϭXMiiI5+ 3]ě~OLZtIa1OJ$!EM&й7w|C/?N_MvCw'4vI69`ĕ #zHe .-<%٢A {J舃# e--=w: eRU 7ߑ?obgȣ,OXFBаnmY/,C%9Y #Ԗb/E $mX'45hL]v.G|/V!VͨzT@8,K{ãOK4;],qczɎEft`no "]#jd-O)4SW c%ڭsVb.N7f޵_ieKNCԉ@L@U.ޥ$c52jGp2a $jƠ5&22BRnڟH2yi}7-} =O t Oq9b΂ ?vur"Bnc9C2w;ˢ՚qkjviߚ,T'2Tpf궲7nd&GQVQgSl(I⻠n3`YaxmYR+ :Kђ(ػh}3 ~λ\g#Y^*.rLh4x0XV tl{T:Xejh(2h]-dX)$|t%E*Yodz@UR:ksy]fISUf;4e2t&̜/դwn?TKTݪ1ݞۇLǰ{WNtlzZGxz[*?\ǦzXnD楠Om OQ1p.Cs-˜0{N{^.PupQww$ٻ|6\Y?MݫvӠ) %I­Q$7}"2 ~2<"-j=CܰI${jd=#y;93sT8.*JCsD̰?LL;}u>|a=/ovigΔw;:78Y8zrRRΞ+靰Ǻ=BO=fQ?3!ffx)|fp9:ǴkOx yp|&a n.[n<zݶL%14՜wAj@cS1Ǿ.<Ի5iI$$s%iB{<&AR3=q1q@x!^GWf;dufuΦڏGW.Nޱ4_izR~F@%AE1^̑vAr$g>՜oN Eendstream endobj 436 0 obj << /Filter /FlateDecode /Length 2550 >> stream xYIsW9@ X]NŮIe&n9@ (qL AI%=o xʁR^~,}8{wyv>giI.*y:].f:ʟ*e Dhһy5 y]DvS1R$:.e(9g6ÜUZa\kԟU!gܽ2: BW/8,@.'i <¬I\6 97Q~&y}o!GN"B) BSY}$L<)t~k? z^2˝v{A hɉBp# 7,ң3YZO}!H0ftԆQ6447Y[؄N-g3uk/wKoB[Ox~氧A]5fEl%)g=`$H-R !,fh0='&E"y z0'NpH9|ϝE``XG͝UeE8͸&koY3thfZׂ/P:p톗[ ΐuϽK,;-AZTE:I\{mP <AZJ9@a}̖M\G)%#BF7ۋPqhHOIRs2:O|MZ9)N`N`V\zcb2HvLl^Rh{ùS+K&- m60&TRԌޛYdp'2<+Yf* Lre2)5 E,?}ɛƘ3|d\58+밖q|Śd*ƭi*R &˻^w<fOlFkomǎP`0ŋ/ʅ8QYpʶt5UĪOkNdJea+TSq 'S JKvExScEV6[JH_Pujfk]];51LsssH(Hi Ôc/B5,N)r#)OwPNp7Cr[/V ie=7{bīe,M:ϕP*,q^ȫZcL|I ϰO)B:M2si8-@`+d 4[A$o7Lb]W)L[P vk̻ Mxݝ~ٵ$qC~E^ x>F3*0w5μ{w|ǧnl;5 g nсd&IRy@ /#T@}k>Nly:6`[X%R{P&y vN'ݯn軥2d#$\%IKX+{~s+2ŪM.'jЏ5NVqQdbMvg7ũ ~+gHW t:ǸK睻y[Rӄ5r/j&ْR2B eu$?^")LQFKb{₏%=*!e-h:1>'hZ:pGr59\~ :Zu2U`[Ugɇ0B{l^uA7OW˧+N-_ƏB'i_*u/ԯ.8le;pR''&s;NQ(ç<~Na7i1Wϳendstream endobj 437 0 obj << /Filter /FlateDecode /Length 2165 >> stream xiDh4= mZ$s$Nd;fq)0hZHf Q -MWhWt9\'wjQsbn{魣tn,r[ %k^dEmF/XvPSB:Mer5$ڲ6 ^0 YYvqUl{BYq#UymEKt*x;2|/ʢ٭Cxh_)KF5{|59Yϭa*(,{pUkKOa$8Wq3J:Mv!%J;k_{ti2拽]oR\n+笰=s?9JUhqzi]NMf⟛pXuƵ ?T]܆gGHu-QYAypsPptțpʷh޽]Kuw-4nĊ]+r_͡4߳S˦N*gK-nvrQ[\3xby=_L7́^ia!_47?9\GF/YТ;+]duCuZ5٭Ik݇V-H|bֶv#&" ="3{ύkhRQF>4KՏ>t pW$A@]sWL:PD 0pUA +ۭ hyr$0Lod@;g.t.*37Nӣ0-gwk$oCv_n+'SQPhE{2Vi0mf.GHkQhؙ>P FzdeNTJVD̶ƀ$hDfC=33x"]2JRZZm$]lj1) ?oT~L醅Jnh0%-5%q%X$2jT? $B icS?a4@؍.':hoZ08нN3d!V<CYp úPK|K _R*Z'8:LkB7j߷\_4_E1Wi i-VVZԪmHw Q"{N]ϲ:9({>UxVFȸ=1^x2Řȳ?o\/=2-)B';~z *tnBW-y:g^|z= %N=7QX_NWþ^endstream endobj 438 0 obj << /Filter /FlateDecode /Length 3357 >> stream xڭZ[s6~ϯVP;`'nfdwLX˒Wyؿ$;(888:;[eg/\<[]LUYnnJ} 8Y21o۲&Wk B.wzR/u9ܤ;|4L›> _)@y }2b|fj{>5~[68톷!ry'XHGeZMJxMSM9YloW+=6;5e-3Vܼ7~gnd).m_}=~Lp@vp.va N=G{Eihⱹ8a@YFCOyaDqz_y 4 cxݰ.pxN쓞#o6,uZsA띊Hn-0N81x'ʼnf7rm)-TL%516-;`m kۃ!^KѭUk)ȇQ*X`I|b0λH׼jx RwKR^&=0,d72.G弥'aݙQ uF*ghȲ([ݽؘ/Hx rI~q|5vۂ/<rF6Y8݄pQÞ#"vT'tt$K*S)7[A 6Z&}2*Lp1Ԫ_R|LAEHr"X;_WF4Igz ):B <{*!!RHz0^ՇTWSؓ1£Ü x iؠp>݆ C/4+B| U!Ie049WZʭ#2Em q A;6s MD)(c G >hC#"gN:cQ0$Q D3ǯo/1yA&. #(ؼ*{&Mq 8HLX0Pb9MB_( ]XXCՅ@,fчu=GLJUrŸe٪EԨ@T F2U+xN?2k1ak )7z'F/PQe5*eT7vLPޗ<[X`>]` 0~y!u?qYhbc,;1Z&2O'ěSves4Q56jTU> z|C!" 6Dُl hIngG( zE 2  ~_0 T_@J(P23d|um ~l@^W8 UywDӃH4:AwmK U)9updnXgԼz  jT!Ze&M'r=qy Bv^+QO+3O][~n(LbhG#X pGVq@#n2kؤ*mR3Q͏UKTc껆^M*4yH}5iP;*I[Hé 5N$3<,a"n)ҡꕈI!A1 PՉi{&B3oB;Ӽ,̺@yIɠИ˗rW.Eh3Em]^}+zbDzGn Uz?Ԋ*N 'jW(@)朲r~s>x7JإQF(zs._GJeirX/27A ۰8trPiq&1Pӆk='E]O`ő&D&KC}F7PGVelI-='R/cn/ʺ:n۫ĝd̞;?ql‡kNxϼ2unzuKQO2u x*|O$㮁5x̀N~ou2YZ0uX@ $ mYw$]{@=蹖ʭďSc  ǹcWWc<#(zci_-h ܐE #b}F@t/q(Vsu#endstream endobj 439 0 obj << /Filter /FlateDecode /Length 3116 >> stream xZ[o~ϯR rj;;um'-ƒeR.q.o!i9hQ&E.w7Ct //_qj EL.J8unଋ3m?|3 p.pcb{\C墯p1cu?/￑4Ұ4z(B6σ1r06NGՎHa9pʢ_pSfK p\*9A%'c[ ~)p['ۡJ#T|浶wc{ w_X;NVv ޒ[ka4ӏck"IlL$RHM$UlK#ѥ{ij/0`df Nh+%- e2Ɇ9g~?#Dn Oxiċm3~+W\O6&m.: f:NMa"L9lE #<_lr4lQWC uc}Vs'fu`V$PӬBx|) a>S3~xvM V&<ԍ_x Ǿ?+*7(&oTσ 3q_s>YivOW'%ptuiҬf@xwV%xU+J( 5/얤_=娤|TG/ؿ8ProE5BUH=aďKE7`+(eOCفknM @\ 4%.\:U@:vhA ӉLR˺G%(GZr@JGΧ=Gl?8>x#0%߼z"M3'8~r/*]$>F|: {Ns?”BKG@Xrl6l;!&OdN mA+}LEh;i["9\F e!%2=-Xqt"J:Z#.HO^k$w=>_Pf蠊6Be h ,Z0?`QbZ7(Di/㌲Pܱg~q9a' ~6 IOkJ1̺2 - LRa ?aQ|"-Gj5ҙ`c F=*#. Faޡ; 8O/D sJ?;z TIMR|iHw/^KxY'SĄp` F- Qzv9:{ԚDcIt=&XwMѥ}¦G-1yǂccnj5&JT6?I]SٶYeJ+Xe֦c~Qqi ;< J+}^a5So͆Y%Q\T$  #3rFCE5aION,E +{&c.驙H_Ck -> Ɏbt *&P g~ܿ*ycߒlpomsU'(MCM/>E5FDNN\tB [;բ >[ꐎ}eC%a|k,DjI` b"H1E" 7Hy@dNE+CqH6()|q6RHuIMX P(hWeg. 'eX-EkC5J{>Zef&@r͒:VM̾F:l>Zr#P"me*5kSUy4B%hS8ԯ(}Qk݃`$u(XҠҢzY'I3$/.BSn%p ډ`}"Z[Ppɑ!×~Y( >s𣟙cXt'iFT.J>?۴#e[# hSA IqFGUxOh'8q?&`j}7]./_o؈B3[1}n]T&_]<2r;i7 O8\^#'_R [Br8X1)ѠٽhalJCuIURd?x:ɐ˦ Um*Au4+$ `J"&\.kTwQ1d|Y#>se+(窻nG?Mf+ŷ6&uǽo#>[8;Ȟ,ۼ/}'ߖ`~g(S{;;}ƬԤ{k:Jg5pKvf8\WZ?ZSqjvv%l!,uiZ#AVZhG;;d:|TT $genҦSm^eNaб*ҁ%KY 6!:b:e0CG(]5kPRc чx΄ޯ{Ɨe_^>{{t4Bz)߂72FӲ\\(͍Fpٻ:no^]'@|$pc]EQ;mpbCK,)Zաe,N ,H(ee+XS֍x.vu*?MhT6 bP#" n2GُM :(PWŇ۶AhpK"{"G nhSxyb@s^>Bٍ=]2svo}>\7=gSO!__7sendstream endobj 440 0 obj << /Filter /FlateDecode /Length 97 >> stream x31ӳP0P0T06P0P05WH1* ̡2ɹ\N\ \@a.}O_T.}gC.}hCX.OȠl\=%endstream endobj 441 0 obj << /Filter /FlateDecode /Length1 1793 /Length2 11800 /Length3 0 /Length 12939 >> stream xڍP4.!hhiKHw]k~rޢ 1}LYQihIYA`';0@FMڀɍBGc qGBru; BdWC5@psr * N 7: 5>L.!!?R W[ @ 9f: `@98<<<؁n`Wkq&V-rY P:ƎBбuK x]AW xVRq9e+عo?: ;:lV yUv't~:_ ,_fG?¼YRrQ+^ dedi Kwg]'[w6"d S_r<-l8H S3`Jgkz|@\A~>V/BXZ@ k['AVz8_׏ L^7#RPe`O? x8)]ݿ"*9YBx{3>&fP3sqZ|?HO=_?zy 58_S}_- v:%BF[7y[O-%^;:4n<86.N{9 GuV@'),܁@/ѿ">Z?'CHH 8AB"^WzXs06_ L_`wfl0Uf|~|:v_"|/5kQί++|n$ Ws5Hظb_m}OImy[ y,Pf"v5mURl;cbt;iLl> 1S*~\sJZޒc\|9jC kIl}}4KКiE ;%G&cӑ}r` ݡL.{ѣY[4:[ɯX2m']y 54N r q O:w݌J['1-!9$'e95L=bj "b6/Y_\, H>!tE VI4EfdwO4ډЖ,Ptv0& s9R\ng?!p'TSViLji ܰڅ@W{P= 55N=w W]M` oM@y΃Y 1~f(q9Oh[SMJ`L?e*X<9ҊRA5dOw0%a[$S| ܨK'dhn1^ MEwH!؁S)P Yh p Y\C Uȣ4krW1QV,͕2>im%T!rWL,F=F<GX5kyZR npIk'MՖް@@ꭎP' o|4&8MMWV~:Kws-R@F^KF5-&j;H@J-ԩ@o4ytSF[̸o9[!\0py}6]{]t9|&-szZSi1Z gԮψ";VcEgmZ-ⶨ iGp \2*wFat?uɾIۘi(<* =v\O!iV"ļQ15Ěv ֭a֩ket*:r]6eƟ湬ڎBVFz7DFcfaсz]-rǕ/Lx_b]~LO&'pf#G"ό%?D+K8\\/yMg'61;&eȀBh[yM>僅_pJ ޱ.:,1@BM2ASBv2WFWmq#Iwݳ3-QҫU+w;}ۗ> DN%B6ɑ*7y ԊhW/0Tן@ԡcvN}4Q3l3&"."&߿>55r.<:,f*"!8( %fjKMtpUϚ\,s迩՗Ɏ:eaL>]0RP\蹹 f%f6uLhx _ZBv$fi7%qg:McW! uCѻ,%C2ۭ)c,EK0qx=e?v-+퐄Rf *[etn.J&阜/@ "DwqQ=8G|f˲׵wREtFE`TѤߝ@v0F`U.{hb۔wCˆ?Qoe8͋S b%Ev?^?{{:dzy,]ou:O,`R3օEoN(]gAXř ^C:EN7;]"er_큶-v9_ Dp;iM`Y~e'XN΃EM؂gJuBK?]V‰yN]]>jn~2 b#G5XX';2xvj)32#r}D^-0אF.(t ", ӡZ9Pe^wAV<=jCCt"SjAr_#4l5-߽YuLJFLX8SnJ8*=2 tSs#!EͼD#m^'A~d&;7@|@c]:E/q_J!*Y$f/<l'YKfMM8͏{^0$?]PSr+GO7t)Bcg]?"aUUշn"j#AsL㔺LPBmVZ%#H4fZޤH#L.>{k+$-/{زCLH=HIt];&?=#VQV_\~oS)-2\jQ%qsȸ#k@OĞO9xЌΣHRw!pB3LdQJ͘D(bjDWIY;]$W%cFx}&^ĤzN Rhzjt2hqu$%Z]`_Y@άJ7(;0=T5*?; b+zӋuQc:\r DIj`Yuu뱺BƧS3m&la7VL y#K) u8cPŀGEw5vJy ?kmEu/WԗTBcp>JUfGd NzmğT(%G swfkW|Y톏k*•l͑|S(Pv @3HyLGJ>u3?js*ޮZŠ L]G{;aP.`6$U?KXv'Bb1"Z@uii7 ~[F #ip^0k 3vĪ{擔B=7w;6!nןe{GVYe>6C23 8dS2gQMDʱ2Kt5SHXZ,}/tD1WsJrQռ; jrtA\dIዄP[Ostw5Q>RZyh.NK_7^eG(-_(yk/5k%cF y}l}NRs5(z^ƱkZƼM&C:K\0LgϪ]o}+\V4?*T*bT全cU*Cף)+~uc$z8 0d:Z%ׁz7CvmBwka$D'~9? I5py:ŭ"ޕ\:z:TAX\le@Ї:ƍ [83߭&ZEY/1&WOgth^@3dq%n\ʖ+/L#%f[V< vhօ\nhʩ 5(ΓQR_wN[<.z焞6bz L*{˦)jJܒi%MzvZp| eN;,VTMϒr@iڼҚ$u'*>ǖ$r-0֔oB#)56K;!q֛f*'QQX"tWvpv窶S +,-yҷ퀡tXAb^6{PӒ?ɳ$tfrnȝPM='2bKZ° bN V^p|ٞ;xXRȸ3T1r)R2O0cK I/ovmL]k%z2NYOUQgaHX "bč77<|뵘d5h 1*"Kxg/d:cl$̕8rXK=XQҼ,.T5{yY_鸸ZA}ŐYIçiFbou|= * W#a{tЧn{BNN3 +-&6x0LH޳cݫ=*W !hlS0Q9>A+\{݌J.p8ׇd1s D4z)s5MGeSx{`OoaU{0QЩ <10]A1VNT"o6_"sZ2<0eVH*nY+Aa.j M)P[=TScD1Bog3Eyj)gtPX>jFO?}lTĮt4%% dmk CSʐ#q7-E%QsMiZ3u}FB_ J8nO?ra3í=vFh]ɩK#66V"] >`SVcƠ`Ԏ"n4! w C}//ct ͽMY(;|4z̞lz?NO?…6'4ר@a<40&*/R7Ncj q' َ֤ZǗnicٽ_! bT~lҧ6dnLѷk_ue҉G_0sJ>:Q[2TPY6o4ɲ؛ u62ʓPk|B>$s0"KŴOtOty̑?`MBHw3cદмӉ5C:oneڝ=2B18biEK=uO c܏qm)l%W3hBZ gWA1[P$Ԫ\Q:P01VRmudѵ!JauimE9,U[1odJN׶Sag;̯~dF8jʤVy`nLķ.1?Zv!SIHXl֌!m4O3$|Cc'5uV4y-J% ;j_rV cz}U|2t[b+^rcgh!0us̷!yKHw3mdM϶z.tZPu; $*#ܫ)'κq/ӭ`欒GRuxrPh3sS!0j)>?խQ|dGMU$p 2A:<^;U#`gI՘ :UPALW a6sG*f `~MYzՃC٥O e~s`ܬ$ī^֒ԖFI8DH828X`.q۱V]\tM H|K]#{H@p㋰6]!Ifba6QwSw"ZVI>.ةDod;9z=y-\|ITfjSu˻<7AWݖٱr5ê1LE𥯝<NDj€X,qѢ \OanA:V`،tԍUMD l'Aay}~][Bt5<%CYy$O{dxͤ@K77u.*UlfփYSзxd*Nsϵ r!';}r9Hkm@X+]N`NC!ldVet|KxdpbqaeUw]E$3~Aȁx*# i {D.>3|j,~.:>1+ =<ӤAf˷ 5,/wTcy/Ԕ7b8 -x*!IBHNJMrTch k뻌\z7 ßflǿw }:ǦH:ea!=UڽIl:Bf'5B6t:q< Yl{}(.-v6>WբjλucRm6..(zO9\rwtRNxz ,J*fw˞Ӟ!몙kx3 ओG[leQٓe,m2i@]\)'c|`X#TV9ܫvӅr(wY4OqTk2*k; {{2}갫O kxԋI"7PMNK,< KF奻GE,OA(~fZ%C!бVޯppsEn;ӯW^92YS.S6v Vŭ{|*$ٿJ uAeUu S ްP=d%l׳-Na髒%5/)ZWgI#N~E>[Ƒ!5(\}xv 9?Wzu"~pq:i+51tM4 4ptq5Od'&? tIj>>G2|>֕I ^B*yG7Z,1CRܲKF#T*U0u@3 7W.jzs\|W'6ɲŭ`K:DNηyd٦)=˖AqETk}MRBUo+oc'(T{Pr^? Witk3ygʮA1y bFyV0Su?uMƻʴH!4pe }L?/¯ .7`VvIŹ,t$+b-&K]g _w7~V=-$~bQ[;A7C &Ӵޘa4b_PGȨ9+ae]|Mw})m짲A-mpz/X>f#d?Q顇Q&S SÇ(R)=Ֆyg3O##$x'τ:@h O."=j+y{#P9wi|;vӀK> stream xڍwuT.]]"ҝ0P ( HJ#twwwwIE]sg@!e1Ca<ܢҺx|ܼ `-6؋#a6Y bPpxEyD"!PQ,lx P؃0d P_f3p 6^aV NJf@[ ' s@-%X`@dvqb04N4 0W x4؂@NA (>@@dY;p_ff;;`T8an0vhy@GRj?:A0'N'/\Y0d\=l7%فK z%'фpss @ׯ"Gޞţ7t`PgE<<s ` c ?v0~lBOF}fu4u^H-o4 + Us}C%`s|X yk106{!c wMζQ/l'米aS8+K1[gE9x9n sU0~8+؂A'1|#hfx8=oq[W bky@(H8 mⴇC (Ưp@.oؠ\/ e/_1?rp+cofP/w/@f33벀) WA^~Γgh:rd.[j~2?O2C7C.=Ts/y%ϔ#[{ ?sRrTljgonIZLrxc"y8cueT}NLPTRaa>LbRWN'Ow~&t vÃW(vwϒw%>o +1ʑD22:)n'NtL'(6 (oA_UBHWgՉ1nK)*ˇwPn|TFi#_Riè}M~lDz渻6}s[&}9P:cSpFOͪA^_9w[O3 hV >R\p*:C%f-^[9$qq?[嶘d(5+'[/B.CvcupRO`/}%=~wa]JMf2)Õ|@熴Ʒ-m'Xu o&}ÿi@% }7~ IG_@[, _uʫ#?Y}C|.J>q,0  |zzbvBț kNɯI[v@qp#Pr1:ju/襙iU>d(^ x0*Ϡc:c{~;&>C eH1iSH>*_/s&ȋGj ׊ƗDɥ\n`1^-%S [4R[-5m*}m+ZQqH Ϗw]U`1-ZH)%F]%ڪEU!n·Ό_|Seuv]WX^`!|q?/ɠF[}ע>>"NܮxpMdVIϏtY\G|.YFΊ)FiwӀ& >|T'Pu~}'I8ַgyQ|B~࠰_3:~ᝏ_θ6cs[z!xp+F21"ΧiT6 GLk)2P8Lw_:"fhy]SICbTI]{dե4_2`;dJ? #\ac̣(J24rZ&Yjj]&D`B6j :ܧz=MtELIV p|PLh6F(@GK@$9%q막kȇLk UyF{yel\5N"kq341g%Wv )Z|*{=\::YyS 5# dB,fHX4ńCŭkp-4vBF*e^\ }갺Tqxp_؂=so(9*q@\:Y|!>2ŇK?#D/ckJ? mS\7$?FH~i/8%pXA0>R>i!U\Fi96o/}:Qg[0]l©Kzy9 sށJe-曢 4rBY=zmQk#{ UjLC wW .x$l<= RE7e/0Q<p&.M;40ԫt&`HCnFWc'UN&lkv*,S "(@\؞1UOz{PՔ^lwƘ0MXB~ h/ PǞ_rPgyա:m+WBq 6ayTN&eZځlscksimԻttQKbxX%D7}37&^]:E#GuQ>@QhZ63d Eo 0mۮGrXz sD(,b"svZHD=w-8je>zfRcDzu q^~ACٷڶqYDJQTQ ߥXK؞P?o>&?P&:Sڈ'\rqZ9x NUN93gmG_R|rV(vKԫLԝiupZ8M 3Ŷ ~s(%OkB {;-+F0Eӥ٨#-Y_LRI5Efz-9|Cןi:[8JQjeE t9 lS+աK/ׂ{ߺ@߾pI|I_2]Z~l@YgO5 {zX Gl/k: N\x;龪2)_OZ5O+$|5yw)b;׫6t7z#h.Qs怑k|S/->mf``)+5*^Ȭ]:+dVvFꞂgKFծoܥ4?`| *Cm&ӡRZSRSj(n|:F-@I/tZU=+KeX#jo& :V%4'w9cj+Lqj=#-B>EkMdj+3斾܈r*/a23?!|b 5zpσL4/g*:#.E5tE,ވS KS`utZܢ0Xf%\#ǡ3aRUQB w~ZcDS3w&D?04 -S~wT8/mJr18YHB7S ɾfGdt;Q{U_`&j<8[(: n/y;*]wכEι:{8,_ ?l)7 cŜcKLd]Keү.-]ꉙ9S[,zimU~w.&KH%#J?|"=s\1 BR|߱;zrƆ=~?wKeҰC$2xvY,*`c[goW6lJ#] L6 HG2[(s񜺽n798ė7p;+f5< ۂ}ti$C<|b`ɵI@Ѡ^Y{ŶR .ʣg>e &z J G]m@&Ly]T.lB|9%bijU@^X^>|EA Gp'"IhsrdКqSuTUWU$ R HK:x6gXK]Uhuk柗ʷLtlOKMG/$g♹<^_Nd)L7`Og)[Ol pyۗXXTuogMHKAPz}"ǝeҭ}tg257ghbpGِ{C=C<$QZۡEb\÷7=~@fW5mnO&Lo{: |˖2dt% VV '!⢾|{,\,w% E dQБ )]vCL%0u K{QLBᔔzv%t)N$'`cIă-~yB;HD*Դ @Q>f_ ~pgǂ'S%˜1 lILpV^o>}ʔLOO k{׺ŭ#V4"uT*\.BS Ԍ `~ڃ /2\"9HKLx~NC?]@-  \'&W;dc]TTS Lb[o1zDΥ: EgƶSޝ$OL^l.p$꟤S(+0CXicX8ZF gW2u6.,W!/4[E}Ut%l2{U2b\"gT&.֛DUyw? 5ha*3逕Y4߉v984& $np>YPv ȋ/z/'K33#~JYHQ N |QdYBwYPM4dtݙ7z$Ut'&![KJ,!Ʃ%F#[74+Uw>:Cĉ4bTwNf;)ہ +|-z[ÚߕPW,o[*s(RBS`c-`Rog{t<>tv MA]y2#S;2V× 'Xvh>͈ޖ`K yq!DKa\كhW ,Z[:C-*ZW$Ewh6 u.y^0hx(pJAקDCI|&'^'(`ct[u5tg݃M 9$CKibJ$iWTYaz/{Me&dzendstream endobj 443 0 obj << /Filter /FlateDecode /Length1 1407 /Length2 5961 /Length3 0 /Length 6901 >> stream xڍTuXԕ^ڥiee%AEQFRRRE[ǹ}{f{f!=Bbx| 0 @66#8`3(`0ʦ\AQHE$ P BK p{6PQ>h_@N(,..;(Cá$Pq\UB@C+*qqCex^pC{쁿hu .؀FNp?\( ^p( ~䁴WZ@]W3W"8w0 EB>p#uU0 i A!8bw}yPE^'Cw(qs#~H{E q ^uއ?Dp/*HL]Oԕ  @ 1yC1qvxc]Q@+*0ux€`0`p$Wf-@ý%A~Y]= {: ښۭbyE@^a w۪yȨt@qտxʇ :9%HzbiW N*o_>b6smԕV 5!n}ȕV䑎[ wW{$#H@Rj^]+q2Ba ubWryp ? ^:Ѐ_S2F_jutL5U^R۩x o`:Gl 24_LvUu܎~.b:eVYsd{6~y){2NZg$݉ErSޱӐԏ}"ꅊFXNSKՋL & 0op,Ć,F[F褳hJ5*zg;[4<cȕ%W䓜K&~ɘ`] ^fSePz])LqWuiaUheJ(9S*=h+qaDkxN-bԿʝ < 9" 3Ӎs,[G-~V!MFJw 4 +,ƏFdͭ}0SƔ=epz4O~UWfmm!KG>alG!BPC;g 9G76ăg:~'O^#ᒌwU[ߦ?r.|zT9tM֔?JZ<.$Eeȋ3dʳ\BeVrMM#4Ajֺ-5Gy^sK5<顊>+"pӏ8p&*N0mvӛ=vT -=w b7;CoGR?b&+_EzN T79JEtx0M,{:O@X EB ~U/[T,η1βo¹Ꮾoї^ks7M;{VJ6ni둣xZ&` AMiTr7qkm%}$̼)pAX \/%+c#2(!JlZy2 J%ylwlaP$D"ƑaU/ѯ2qkZC,V:脍L!!w=mO͊G`2T n謔' b:Ĥq !zJj$iEZ&T x~ /C>5?UR3G6Oo}kxSb5O^x!\ I\͉;sWI4oľ ("8(BGN86 VoQi*CHަãZn ןv$>mP؝xy=zM*]7v9/.}%5bK!u@`pADC 9bPع s{?.qdi{FX Ho* iCË(]y~n:kXgvL:Z_d *nFtΐK/;QR R:N v!6 ~1WY0fP"3eZ{ J iX JqFtӔ12dx"!o>ݶԴsY8I3ݰ*nlw}r7dҁc 4>~؉|^ 84? 2̘4OonVoX :>L)`Į_{;yѹ.F#6r LJ^N62b{$5/:ƞi]TP! Ԩ|sL2DNnWd 3LKۮtϚNL{Xk1cs;1=$~ʵ DF}+Us."Fn^jqӰ㾏#&(桎4qlڒzlͥNlY,3sCAEF]6w#h]<QG$WM6gE1 Ӹ `˛|T=d"v\rܺ_!3zalKd^R z'{Ӓ]m;} ?C5FK^bSsz:{(dTG&s?Id-_)/e/PF=vB$J(Wrﮦ%i]8£}2 3RgIߝc(xKH~KF]M\̝$ Wt54H_EүW%'8#R|3O/|IlLøL?P6ݶua1z\j`}%7"iknx"~2t}3Q!Ҩu]5#/Jd_plt ޷ Ԟ2Jߤẗ!KN"DĊrXYޛi:jXsXٷhwSks6pt/ݷĥc_ Sd|>3<?nm̃픢ܵBِD5֪a|e <tM[3xp"x3igFdXPMګb;šuՀu-Ӎ'6į (CX͇U10 '״;༲~HBƱDr" ի~RF 0\Qr_ӥze{=SvjX&i ),qY )|Q':Uk4݃+2gh;/,[Ef$+0gno~Xh8JȮW鳌x@/\: @-KHn<,?)7]:?>iO'5e5 %O o.a4gsmimjjčQ #G4wcKǞal(7g- ZqelWa'I)\RKm:t/3Ma4FߵKUb+(y-華xR: 3."Lr;7څNY }IK5Πh;Ojf| NYE}ӆd;lG2<>=-P֌@i$J Ʌ@ĉDu'||i[ES:7򬭙8?] E.w|)ǕT}Ml,WV꣢ɡlѼ.|܊=1˵E5UypbVƒ뚣j ۺ,3'[ ; bȩ4dsz(NMnCb_UM4e#{|tc?g6?v>٤NbLi3} &OA]5Lz%&rKfN,pOHUힺ2Kr_Be&Aw߹1?ez)m8˾"wCQKmfCW˚nfٵo^RQvw*\owCr\15 2RB*8 nK1wͳ|}cܾppޏ mL媁(h@rFy~ŌN'*]-A>Sdk"pڼ1p3^+UAj6]`jR!Xs&fȪJtǐs#>8𱣜uSOoG!)‡jO'4l 7Hnñ+~!A"qs&4|'"-t㳬Z-@Bb Sn\a|Ak+vUJWGYnfV7&V]|>;0I} /Lmשu5TxkJxRY?'xjdRx݃VB9хj1Zd[DxHZ suՁR5Lhnݤ>RRc?޾ .rUJ W'3*lΗQҵD9s8歼N:eP[1'NgkZY%&g]gL7[o?7Ӻ_ԥM-n#Tk{|D6W5.]%W3]_ZXe=bR+"K⌣nڃc5*:F4]ɕɳY3~3lIx֌L-T{e_U{cmzΜj]٥e%><.^>|i{ZfT컆k;ryr /|ݍKyZd3^D<_&2< ZᠳnmwƟJi⤒z3GrK.s~I ͡󼺪AK3`K-ígznël\eɂJu931 Mz5oٵWDDmx.K@fWpR֩diKB>x/ $]MbZ+q Gg[ɼm&qNjڴΟњ+F*Pny(;gb6 iD˷Lnql<:ѰN"!vjlJp{NsĀQvWI"Á涧N짞75*+I#,|ucg|~lz4d]*C>tm`lUlSpfFݏ_O5ʇLԟ]> stream xڽ]~KE\rIH8ӤH <'֗I@{swItPApfvfvzO~yy&'IUqLw2yU&,Vl◗o2J22%z=]*;mpהQi"leFmo9!+ éܘta6O)l͞IUq۴)[EMjgԇQ, @keO BҶ=igiPi# DyV1xbߣ NYEGlB Xw}*Ilr ig;iD@8ЌD !_ο ox13Yp 0| ɶ!SV@Ee՗%IM;3ZYN¢ Y1}mxƕDc3oGT&y"kKԗ2{Љ<-B&}j^ǢBگx@'_Ҍ1Q+ u^iKZ |"'9Nr#[kنq/grLy5>r[Ă$N|hNĺ'tvL`P\QH1KGVSĉ6ى}d*~3ci[Xq2FK_ta CɳsXV'4bTU_Q'ez`wpSPX @=7cqJhG4{?pț{wsi^4J%cU?c̣gg $ Zmsl 0ԍ}؈a3FĄ1vh_p| y!9M/})FY\jjL :59Ew`Wqs+yg„i\=?0NϘV28uDG1Z/5Hqzw1M߹`-wtRYnz2{a[| /p:UiU=XJ(Q$E>Y:ISHNlE4gG/G2aVgscy/4q+gn:N"#,fZJ#XO8kz:̾W`O$߈0I̘^FU 9ū4q#=1OF$9XTX*ςAN0+W19}VsX/}[zIkd+Ewڷց^$/=|hf $XuĽ =izw'ѓaD2$5o22T>'1ք&)} sꊰ+^ IA^ۅ)j8R57vԶDz—HR}iv Nc'W`sA.h E+=t.չ|}Z<cJ\a:)fc<e&\ղZNE"`c9_x٪ccHp8]GRI)-ۥTyyiփ4k}ṹAqOZl?O7pTR}uv:C9+R8޳_ߒj&ͤN.pӾT'jh*/SuIr,@{ {:ᰞR}}&mTy#5qd KBn*!+9h|^rc!z;M`o o_go/&Oq'V7'vp av71}93~#L=* (%;\q-+ g[֥θ]s ݂xJ-UMk ,-=ЋF1Qz$ja{L E/JD|[o<0o`{'v k ?M{]ӲOV ;-JQBiݙQw9a$Om4"%"ͰIDnz##]#Cp~, wsbŶ^BNalvx6+WYQ̿S|ɠx0h/L:_䲸\^ ?@,!.flalo.{Fҩ he,`*Zb- /YEDJ˙y,=~/[zq?>z~}endstream endobj 445 0 obj << /Filter /FlateDecode /Length 3165 >> stream xۮ#5}":²pXHI͍t23+}v'Ι-T|2Sl!?.Nٴכ3E}w͢+!:Peq “G1HB/siK[WFeYCBF&&F\7"-"L, vod^K`'D}֣K0ˉSչ(Sa(܉22w+VZBPɞJ9v'V{{n!崴dMFhҦ(ƙYUiJo8!*`$ͳv {L.c.)iIui 0͠蹱Mb`Eq^sĶM fCi- -)`?Eg$~ 7 _NUH"!H;29amCv!xb&JC!T#+;rzxB2T!7{5^F @v9)6S}ذ`: S9Tri\Cf n&nUF(9'-ULX= >eԂZz[S1؞\>J+ / pL9Pf]ꌬ PeYQjåYLΕ7 -X!6a޵::ug=X@&ޏ^uEU:㉀ }#ʔVI|31tΤhY^8?pcI x!@t[~aLb) B)JoVYY[sBZEd3Ʀ 6'0:iSW% ŗ:L\:+g;<3eEuJ>=`= 4l$Sj,uYt\F6Y]ZEF?PٳJ{KہzcA "TiY{P'r޽#X"dq#Q +jM7Qtei>(;1 S?[yl~8`I4Q"W{P!˛g{ Pt&ZqAChXeZNZӞ.ØKC{I=xo  -C|O{U#}PIMecM|1E$#\M1Z1J,{d*(2{^G. Ñʽʫ4_AIS祀B/ ЏAuU1 s|AÓTо4|{NߗqVHӏ_Š!Aڬ :_U~2 ]ȝ+׀Y v†߇K ^ѽD_:,F63ܲy=TNTJ@-6?ٵ×8Rr& w!EF}wXB.w'7lqw1z3ڂMnϒ~T QQxi1xRuy<<GQb 7FvY2V#tMb֒&=+[Q#=U2= aǥL.7RDG*qـ #D jxzYJ q}26 ,JډysYWx_/E-#+N:WZ+mԐ/d,镟+t$SgzO>\ŋendstream endobj 446 0 obj << /Filter /FlateDecode /Length 3060 >> stream xZ[ܶ~i#.n.HQE^Y$34k{=7Rv0fGCssrs)7=yjѥNo^^oZi\:6/6?mƓn5Р9ϧ ]W0<ߗ]] 88ϊI4ULT]OZ -ká1UbVM[8{ʄ۪Ui^?'`-Trں5 6buBJ/@^F%xIʒk\2+|;3q~"h&HM߸aIp >(T6(L֟)dƝXaDo|)=6S$nQӛ6َDAt|]fck+Chë߼.q;ekN`xyu'h8Ge⤨錅 ItT(c$'rhH9}xi!4F qs$`ؐr8βv~pu߈4?6"1~'Wxއ$TO NcbS]g6RٖCHЏx#vHO;m5uş)cVVl3LA$.~Y0ZEZGHR5 yhӉ"O0"礴Ne')W#pt!H"t6%tdx՞fA1"bte  SmgP@$jJW<`B B8{ *$H<"J#EKZKc}&KR&j6]tIqb|Hm1{!JT4۰)ILN&='k0+(`z`/͇" )|M"UBq2 #x`L0q8AS>#=Uk}яbUS~?]sg˩ k F i̪R&k$?8EYöm.-TӤI׆Jʡ Sَ,mN#'zJRJxժ{O'}V9zYq}H<#p;Q~9{,kUDz(2c홁1 dl;؁u 2Ը!<~M;Ϸ!v1NhȚkL1m Ё'kR=mC+<6RQ|#ö8-/.ࠡڷ }:4mq[s8أͪRӳ'yyX4+/ Vr|'+<|A% f#m-YѧS69)A֡˥ ,fyE J2,Bc8{L3j3I7, 3H;9? ^k qַzqۥx2ŕ7%z3ړZ3E)4R5mhk-Eo6?k;i)<7͢0V/2a|'W@-5S .O> TۙO(fDU _}Uh֝Ŏ`(} 86ѭ** ׂG$o=4J Au?IS֬ICiN43-<1@DL_)xe3/.흿tHm?AG\1]F;7n.X|^ >:}9NvŻp#-8Pƃ~xA%TyNЏ98UB.ā2@mJ`EFQjUWZuE,:@/۴2Y`.LkL<$X),?MՆ.v)sbK5@yZwsRMӋmv:|w7x=[㠶psoWw&ͫ(Y?jjՄs6(89ɭnٕzmQL#]1_ɔ1KіV3S[ȲjA4ro[IX\Je ǫQ]A-cfnM9z<Ū{%?C%ᱭSYV6̮ ]횱B:)C!"fVfΜPE\'tendstream endobj 447 0 obj << /Filter /FlateDecode /Length 2677 >> stream xk-K._N>ЦQ;:Q yw%EنS>ᬣ*Zɋ/^Daf^Iy<,tݭoam[5z$q]Ip  KW8`דLv_6 Mitj0O"u֬6IY׀Ez,09%b&^8u\ &虂)(0/K%KD1BM`{+|^g(0"N8e5҉ p{-,2?H"KHVYOPΫj LBq_}DT]Gus%,"tOhx] (O#9 0摙\+>H0=^2JBD{"ƭżZ teaV E@&Im'g ;XNzm,CI0K]6W"PeuA/+&(Pa^(ޚ~~&e2ZuaA`'?!_(ɚxd-W٨Dq0!ffL_P#r 7pR1ee& H\(i%nSQM'LUѳ% TH_#E(;ktE\dϫ nMk.JI͒(]Z)Z.r(t@yZRk}z 3M9EB)tICNGxP\{$ }}|gK N3?x2M ZlÂnPv }`',&$΋Z`ܓ3N(4x whsxАwkidue…DunLl$ vEoHT0Y.a}dRk-;9ɀY>C_00)Ʋ u}v@2!bQQ ~↷1Ȩf|1צj9v*k |җXVqi(Z~`˒:s+'_Rl؊=֛4N&)9BՃc54b]J|0ttn GB)$i3R}B}Ux+k_}K]ZJj]r4aA(O?y@.+@;㊱^SI xKfx&74/ѐ%|-\0* )"Td<>yBzLL^_BO,{SU_ L?{Y2҉"Wh?xibNf̔ΤtY|c#䵠o:jjO- UFR_tZRmZ< ȍeҭ Mj+5G,sn5YUf^@~5P[^_ v}|޵]| "Rk;j&,xv'&8NiA0:MQW?rk#W^vMosvpSQ~:oђw5`8e Ǜ*j? 1I+ڒOU}3H9vw~Bgz`v˘NS#j>q^FXl'iTj d_wX%I58ؓwU7:t&7"s\>bL zxk{U:0]4UD@ZAtN okgm=)uX,K]^[ʈ:'g)~Wڈ~Qe?_m+endstream endobj 448 0 obj << /Filter /FlateDecode /Length 2610 >> stream xko6{~/Uе"RkM)z@Z@q-ʖϲ7)ʑ!PDÙ9&I4zvqu"*dv?$Kɬ| ?fsH @[QUgtB>(ᦦ,#QemckǫW!eױX"!-fhy19ux>z1ga0| #~;(m2l" ӳ[XZ YVvݜ`]OMWj7%%7l6R&K !vvXtiNxeZrM$aY vuQƉ0 $f6;n0W.Ƽ< c__6HI,c &[ŧ|>G:*0Nc( 1b5$G {Q4*iS D8J19ë@o%*HLt0: |h Ά'l*W#5F*^0ڥʼn!L i0A'zaGPgIQ]`IA7ըГ<4J#V5Ͳ^֤Q|tQ"ԩ~։180]rϒp _(qI{QmB0LcP.QU#fV>0zp&JYx D都jz2gݛL3k]sRLwF'Ãp.f ܷ"a,W(Uz NvCG]",PiR@JWWu6jMQp#d%[R֜ё/s@)05V%ߑ#3mE*f{. WM pӟÄ?}'GݺDMN>Sv:T5_2JdF'"S+;fle U 4.nBւ\R}1="&Q40.$Km1I^:E|uFG1FPٜGwYxւ+մ\3'8|`cCw\Sn0DS5kI% >^ߺ;h-=F+>VdW~mH=hlr:*2V,nSyrr6son|}:6=aGN]=ri3Vq)ZI5 eQ=xNߥu 3ʄA]_w܍mF֜stUZvW`y]%(G;R&)Y{໫ J>q_AGt9ĮRCwF;f*kI0L(~j>a뤻QdҀ*w>l Vv 4k 9d@ $:Sp C>_֗i lAQ: ~F,uu jp[l2qPVc'ot*^̄GǎCKMO1Yly j !=Çuocq-'h- E0ōEzMy&~[lIESU <Y|IQ8e^^0nC^+{)׼&/riw)n;qjRyG[YL7R[ ,1_n%T^d .$Hi7d+˵]_Wr*܆ݳȟ8PhugXKrVlGu1؝kyJR?:^1J+nzY+[pwݡ<4i+ ]5ܷ"ɰ#iy+hvх@L8nsBOb4O@$ B1z$΅fOLp'Tִ%btfJendstream endobj 449 0 obj << /Filter /FlateDecode /Length 2262 >> stream xZY7~F~x\;%Y܁[{1ʫց 1͟)”dXgReBkf[הG@ٞdXK㭬(#M?. Wya/7h$ח%]ep]1 7n]]۞>[OH *_{e`p&~a,<-H^EIQ:v|?1"[8 gOyUYdHD|™b99;9AB;QoKoB΢ ] }%㎬Sٹ^FzepP BnrѰ2-LpGQ#6qڮmłv$[APv3|W{X4 *08`90Ͱj1 "=O@滏431K;Fho,}BЉvӇْIe71wśCIWy0w zO+C^g8yH`%B2߮`Žamn+f'=+_ iV0Xv1_]SB3ts1A.Ƚ\!`]JxSOH w).ȍ,уBCn|7qc[pgoSV*7gl|Rym#1Ѳ^Ǡ{!s4Q0V 4OjWBGܘP6oVY%8YDd9Au)B{IœuMXptup(䩳IdEPfdoVB뿨.PF h2a$>Z|\Z+GKjK5{Y]hcSk,ȱٓc j{.7+m[38&&r8kV{;ھa$JJg{] #VsqkyNxZ6C)C8fT2G$S.@Z՟j#;էx]z7MV8^83J+\<Q D/Ԥ&x҆h姶j;;+E^w,;|k;o6(߽v3giфf?]F+u*7/gW*D\U6lЅm LԠ,s }>O#_2g'R\ʅ-Ϙ%RaNľȱ!9)?.y"={W= +ѥ@ w gMd Nʥ-P(`''O'Hݔ0˱p`6V3P*8"| V #5X-8gxX(W5vg[9o=O,gћYp?%/ YG}p=o?OYXNn^H!Sk๛0R .{%v6y(֐OUIL}F$H`^>lyJ?Ҧ {+O VWL cmOF>q?#L6UURh nuPYổ{MKh+G> stream xڽko{~/!w"nܵkhH;QTE9Kyh;;;M4DN/<9IETē$'YN&ɛ ?]yPa qP.uuk^N4`pPdGa)t:3F [vr`h:Z߷әز莆 mUDCj0չ`D,'x8,r5$Ӕ֜qOmj:K.h i茉7;_x^$g>mc``2ˣ0Rd01ѹgɕL),Ul#1}}r+4.4?IFߡ%dȊec 2qu-K e_B {h9682Y.gTO|zqrg@7<+U쓲, "mXdkOK;g**GH'9H֞E!O _d-[1y24Lqz# -<ɛOY+ ƆLdq2!b{Łx~,O喈>Qq| Al|le<-T.N䷌hɠv٭(5"8Цn{U.+e3L<~k/ y)e,3j$2/"C>ÂEV(Z '*E‰o]2"{wL{.[q=Wxs;窧F%ECZhL=r=c_ .X2Rdhy_^z JHB]psr؈HN[ѰUwK`5ӃpoQ`UWg\a2AG?_|0yϯD1{åէ0Y"u)beE2XMϱ }0JNՠ,q Z1d7C۱~˱fCUKx۪X^ KԄJ10Үב(KP|I(℈4ս%sU,kk*yﻐtZ=Iqj{L* QjJg j㠤31'іYu;b3'Z}G~yeQ MLrgGPY4ugWҫCcں$=&N^`.ʹuB+j4C!0<]CݼEݑ=1H&0j]݈k3mUaf@%8UsJ!P"ſ ,9@c/SUBQR14#TS=Q8.>2Emf x!>ǭR壱_ZSI[aOsڱ3AӲ6Tб$Qx,?مYFݢHN(dU J=rA'>xdY&Xgc;FF%.07%ֵ[iD@ّ7,㰇zV7:/{jtO( F+-qօ5?+.\5ݍ@{OOu^z̽õp\ރuһ!ɱk+t!uΒG90[_xG@rʋͰ >AX34h5>w0uyTx*SbZmW0};?cbXn>t*Wr{R0aģw#}V,v(LjߤۤIOOtc,\ڽ AJq R(f0:h ܨ[n^8UOH;w_cG޳_ˁ,ʾe?;N,Aݍ4,Ae붘0N]umjt'> stream x]oܸ=b߲( $wK[\ Ekʕv$E%QÙpz/# x{78}@VJgYb P caaG IX$Aly7G23V*+uU"@,bGv$n=q"c&4 &4ۣl]#GTX$g,⊼ά \#k᧓ R .r~&H_9Tv̀-X_GeY +%0{[p&  ^p,lP]LoYBC07< &KFDYF^.fHF*niH.nO +؏H-{\و6?IA @nǵXeXLKJ.˵,0@-'.//Y OTwmkI`;{`燐~q Xt]H%¤pƊ ~w r~;`*eN=Kjs2W^&r* &~G0 ʱ~kq0 ʼnf/,6z:ʲZDv%h?ӧ*2jmEӉ{ Q>yNWڹoJ|I&%oi N'IQHQsIENH1=rMn G\mWj3ɽ8\\z-Nل+<*U@@bv=8xs=nlPHBf*⁋/πYAkB$xYhBtDȔcw5.8ˬ)) Jpכ DeW7Izt&T2!SWl#IN #!`},+DLt`D0!ZFr7vFcGSi"/S(YB^7-`%i~;-JdZ.}njv)]5c=S?Eq&2Tyx~/w"0RZ.aMn(ȭ=ˋp\Ƹf3u$!|+ ;:BgVLQך~-I]Y9W6 ﭯYP ZSQMA+5wh .vؽ%@fVK@p*4Ä=ދ7S#:lnd%%5UVzA_X6={Y; JI/\V Ep([͹v+v}#L4#L3 aޞ.r[V7 :WYdtOwCW=S\kcFZ2};י8KDmw$*Mb Q>eR}خ?KوP֕Y`Oпg%T_RGf| r7Y"t9eTU0$ J3>*jX02rdR Bi+&IҗS iG"OI\I?O9,#q6fI̳ߺM6YaC{! ^LTL\쨊hn$]ϥ/zwBG:ok0ӸPH*:搨t!b䋌7ҥ͜ 'ViKu0uGpP~)xsX,eC=":զkRnrq+%x5C$H{>Tfl`NP9;`ԢcE3D6>߸|F iH:_M9u =|O 0r*cJܹx9}?% 9Zхax@gd^ 3!'Aendstream endobj 452 0 obj << /BBox [ 0 0 504 504 ] /FormType 1 /PTEX.FileName (./baseball-qvplot.pdf) /PTEX.InfoDict 235 0 R /PTEX.PageNumber 1 /Resources << /ExtGState << >> /Font << /F1 236 0 R /F2 237 0 R /F3 238 0 R >> /ProcSet [ /PDF /Text ] >> /Subtype /Form /Type /XObject /Length 722 /Filter /FlateDecode >> stream xVMo0 WI>V(I:GYfVGOOϔICs` PkRS@NQ0p۸(#O:j$ﮑs)x. Eycц'Xx%yz3xjQ OJa;R>? >G w <4b ,Nڻ}_? *z߮9rPԛgIZQo|;YW_gR1&]2%`q?,o=IEJ8 RhCszI|,R +S̵ODIN|6jQD<;[.K֕3hi#S:I>:yXeA9wa$j&%lbw|v nmWݐCrsŅnW~_xįfwx86dѤnhǺ~Zh|ŗ1 " GP)Ǻl;L}Y-90I0_Ge]SD\NO] 9lj38/\{d.[)M (|T}Zk42ޔO:2$ƔzR̐gf(ߦ3:b(1qt;Ź쪖,=] endstream endobj 453 0 obj << /Filter /FlateDecode /Length 1557 >> stream xڕX[o6~%QaC֦؀ [ᗡPڒcJκ(ѶldQ|]og׋٫m*tX S*5އ:MYT:IUQh`BD\]y Nv),gQT*yecLj0éyIxΣQd=Oo,O-UN;ł{Yh&Bc~iL'9QxCZrd{ngßH";OAJ*t,eo-Reや 5dy keozHaY!*ecY5#HblݨU^Ag9VDسX(Iʲ$&Ug8G့E\'o%[1HpmIgsLM#B?hBqm v 1+7[%AAebC{vODTޒϮo??\7RFܼj0KHUP x0nA#iܒ="V܆玤@+]p`8]D WA6K* ϔ\MBm@1j쁋csipn'JP88;RfyT y5;M)U3肪 Q֢e'=x@? Wz#?%D{+|X}JCկi#jWwҦYL4N\j娢FCA}lM;ҡzlh܉~Ƶ}k q::)V bX;..GI#E}^+f08P'g  έv "{䲑*?wډ8)`j$wi2xcs4-$hT>X9Fg]a0mxqՎ-J›~8c m}+ ,P޵\ruL8]aʝ'\hf3˓MGQp\'p~.kd<~ﶧ)Rrjw輫ΩnWtt V-t'97ҁ_xm"|niY:d]endstream endobj 454 0 obj << /Filter /FlateDecode /Length 2097 >> stream xi @d4EJ Ii_.@%qT*TqYAq` ԧ 70~iY3L0 >+z-lKQ~OԊ3Zc=yV,\#"XjAyNKqBǹcArlQ5#e, n7X +!D9Ś)e&V ]Aq$F$f8MQPŁ]$;kuœI"gM8AE^tpwj:2V%FkUy^ و ;L!MkxPhe;6DZ!ё- y<3Y&b7 Jf?\+zɦyy1(m)[2֟uw?"m-Z x? EHy9]wU˄= -BEOG=ƫr;Ϣ,ϟ)6=X2U*ذO>/ǴkB'=h$NVq DZTv3,D4|>2z&KIDX`x=v X_ "uk'ĝpBƿh ]GFSbGv*hK wb>[%dJ3&6[k)H:5*I9-F}BiZi%!#%rYAK`.v7\d[s<ǹA.8GCdž2;' !G' ;P$c8Y'lCd V&y zix;۰  ]LFQqurʤ SGɽrqdSʍp;AF1@N)r{4.l,Ѷ\MY"Nja'V22OX؃ie!χ`VPgz:\u>&}+忡^H1HX\da'O$}Ah5h=@tG0z*@0~j,ۄkx&ׂR}X3xEcUVXn:>4\C$in;ѕ&4ժAg=Jh/Hd _G7l-3AWL֡^@Υ;z|=סּ֋O@E\U2K}[Kמ?}O8u뜀Uesm}uK5F5ʯ@D ]_7C$LP`7R6R(kٿyׅzb"Ӡ<[9c)JS%=,]$qTBsJ﫚 E$.Ca3`_P%endstream endobj 455 0 obj << /Filter /FlateDecode /Length 1607 >> stream xڭXIoFWA9vHZ)nMD+,R%%;ίfHtAq88QzY]]<ҩJGۨQ2yEڮ|u]]:3(4M5YW.npόIٔDQ)fZ ocBJ#pjou1lpӇYU\R9Sz^U*67+FV6p06W@ZS81Ғ?'WoYC MLL٥{ )/az4 ̓h |TuVop'|%L cܻA/F0e^rLLv!Qa^9D"n>"9P㫊ѽZt%Y L펴9uVڳtJ:f HY{˵M& (I0YV=ݯsR}ʝ2(af(qume˨T*M$OS*a&w`(ؒI۵A'0PB_hdbAjN': wI!6Se`/ -YrhJAxzX0ơVµ7I)wN=.!/hLs@tn.6 ?uˁg3#y ~<~Jht^ELt=S!X]AHR\a҆^L5u G1l8־]@*xaHASjB s ]Ա_>g5|RAR Qi,3K!Pg3U&7G)2=䃋xcE~7MXd,R@]뺥KX u)<:Ӄ/vK+Lendstream endobj 456 0 obj << /Filter /FlateDecode /Length1 1392 /Length2 5960 /Length3 0 /Length 6901 >> stream xڍwT6R HC7 tK 1%  ݒJI H R"ݝ7sk}ߚf{k}_{YkXxdm%8 5A$  ౰Cΐx,w;DHPT=~A_D_T@ pw @ 4xU8 cC:?6(p@ 0₪hv6P)إHW >>///^ n/ xA.q(`jx,CnCj!P!0[;詨Z_`J'W"(w0 @ah)"f vFQ`O0l:P(!lܡH/#߯46+l..~ Pw >|{>Aavhz=A< * 0(޿m$ @ x8* eFqsv({B$?<P$` euF P~}d0[8#S5R5C_N997#qqq@TXg5o`GByT&o٠[_Y)y8;`j=(ihQ75 P (H(B Ն"mڢJ A7%:'[oTmO@X}@ Q*x^nGBkb/?x4{ MOm$kZϫeixV>b/,E'F #Y9x4&ge[A9=U12Où{>m֙˦~M<,P "Ue|,1K*'|*_$7ލ7{tNNNv'F46F[ ɻM8bp%ּ7Ҿ{CBStlmzOc汔L_#09W+ ȯưSX}DI7|XYáƖ/L񒎹8,$2t=뺸{PyQ`!!-~,:p( ݾwQ,(F[} beAH -<ۮ6 OO,. D[{Wl|h0>|>d| }g\nc{viQZ]rFF i}|`*+ZHdU=.͍8э\]V{GʂN! 'QX,V\I~/B#5-w-6s ;Z486;6W}Z3yZ?{8V?:vnuVJZ-MƤ'p4 !)νNwz 6[w??]j6=QKkk8HJNoD+Gk }ZZ6XBHɹ1$[qK 9BaVu`X1K&(`+hI2|Ò!*#k&z*BZkG5#uZY%c7LG22"mPQeܮޠEiN8M:^M<QGi/̴OAsxG#?^adE#Ь>G xޯ19ݧ~5Ժ0G@ofqID@x_~['z|GYKr p3ruTOzKMX]sEkmh#H¥M|lvHa2C1@vY@LL2_A%B눂4 )덆e_ ឯHRԐejaPH@F9q1)nDzpίjzIEwCNĴ^Hsn="H7h %h5';I%?_sQ,%P"=!{;!{МT6m(F^- &U+lYBIB@þh𲍆 \5TQXE*穑l2'WDĒ\<57cqݟ•82=c)!H 7weaR/qͽxU) ls PwIRs#Yh>=lmZzQlv8eZm{(:AU/? MeIa;'3>Ʒ~mJ1V'؎ޭHSբOjT)#7Cy$2]kIڻ}na9׉f ZjO_E|<p.HQPgx冷=mYz\B%)ܛ+eså/QӘ Da1{8TIu/I(0#1p: l>O|TL*dD#l\!FwbZ(Tj ^txOӧáP&HW$mh3ģ3 >BXK =9z8`g(38mv>X.O0BOV_P zhݭo;]2FqX{kqQ Wwd7{k>NLopv/DH\߿樘nG^lV-)@p[3+tvʣܽ0ûK3Ǣ!>hzJ]hEǧ肫\D{HAx'; ۩Ө"M6B׻$gDhnxà/ت×aLznqr'ȌiIg7AX}N=[m(7a-~ΩʞxMFv+UÊ woZ, `Xo3kkX^OA4DhǒӪ`CI,_ ءUU+ ;9CQtZ)R_w( n}^r (ptnd~R(JL]6"xE.xqX"@ϪWN>ONZu„%[ OI"N7m-ʰSd1~w[ ы:/ Tf( sk}}F'm*_?r ]8e˕JtK@@B_jU]t~kwӾ|2p-.Ȯ J8@d(ui<Ůbe9/^9ǨH8<Թjnyqv~]ꡒ k{?OJ\4eBk>Dfz=> M$?=soPgeICk44P)ew+i) oe}mݘrq{̏ڂ꧿)mfƩr.ιqK_*[װ&GXY011c Va^!H}>O5B6;еتXx<Uf,]ږE8k7?x_te}4YzbGA 䩺F,7}ZRWM?7Hʨt͓UG'kX#44qLuܐ1> Vk ^OW\ 4Z|ƂB> N?p DŽQran cC硧BL@FC%XP< ]8׍͔@﹛=e;dk!w 9{ b`)$Pѽ!s^/W,&<ڡTy}u<< +iNxn+'Kol7"IwޙU,wUɟRt|ʚ?OmIzIE'#"DyɸfG-%v`S7ITSAдںlr^W7y+{QdA*Z9_'0yS/6ux9K$ҏҰ o8tGf龉Jڀ„xf}me<1%1*i8]c@)ĈmM ~ݰzI{ !CK=\_9}`]+Md R r Jv>a4@xk =KNto2{rBӭ7&_oIO9NGiMUbRZMwe"m?BT\y\}R%ILl:UjTtmqxF<.d^|lh/N`/zIG,R*S78)@GS.s'lU6ۻ=[9+|y ҉} Vy{3y=`T9,_li0Y8V! cƜ>C竄90`fJ; Le;>7be*3zJB#"er,zynU|ӈ ȮwD1^4<π69BC6^ٙ9M>bO}mrzީNve^ 3 3-_+DI=_K$] M_ :PvW͎ Y$"!c 4ho̚TQSmAG3å= P`jAA nrY];P*z8M"h"h>^hTo:Fʓf+;wZ8L N,F<^d)-havʣ1q50U.},^ʟJtz~9˳NUw6q'݄21y@6{U`vA?W`Ͱ(7F5-+ީ5n3qbVe7q'@-g#uG0Hd3B6 VšO?~Jy|X^NA9D~Cȣ IHe>kεOς̖X?<9d纎^UѢS|wEt3 4Td>{A1zc3hv?IrogXab#5-Yҿ) @%JA*wL\TM0:.?v=V :yI/d͒ &/ Aw5ghЦF'|*zHmixɂGLI^cʝU-Î)%m/̚vۦRk;@6f.,¬6&ꚩD$UV)J}htʋwLG&D~XXml0: b8 VOK^?YxG]lǕp^v.1y mҴollʹu}٦q04m&>3[[ecM$ۛ}Ev| 2˸}BȻKGD=3,u?MnxԂP9%'g'%(&؞pЍhm6 b}(W$KW9ItSFw֝%a3?+}vp2-Hq:0,7w2`79ޗx,CE\.-r%}s3kwoA-S?Wu.CG U)Ptycد>  {#n/YY^ugUtv _Zoѻ1o9>~i1JhV #ڈrrj:zlD Rq2&\3ABLa؏h'~/Eo{AM-Ķh@:bˆ}Ws8mLXx80Y:tvN/6Ԝt"^pTږ6?v,dg,C_#)"%}*&sė)OQ'.){NLn%$QۺUP[suäOM*~ͻ!R];f CPʕFhL%—I9u[mN0C~GGz۩ircK\}x|7y3& K36H B"6^-Q[>o7V FŹ|dH L鷝NsvvX%+*pM޶Oqr TY [[bendstream endobj 457 0 obj << /Filter /FlateDecode /Length1 1488 /Length2 6544 /Length3 0 /Length 7547 >> stream xڍxT[.ҤJׄ{hB$t*Mt頀(U@ EAz/"=w{WJ333I8X MPu$-JTL@@(" qpp89 C"C Fc0U0CC"pHIHa Po"K 9HEġ91ypCx )) %w Fh;&#  !0(_!e]hi!!___A;J,]P%v&H0u2 о`/(a(qF8B-]Y/?@ @`? pu]AF8"($ףּJF0F}(D4 9f5 @~O`_OqH_D+' Go!39C1@ .B{@ HF4|>P ^@G p:DN1^\`:B:|$cTVFR@HH[o Q H%sz Ɔ HL?Ci@1 .#uo8/a~FcfCS- V-43#JgL D0:hCC\꥿!H} 3z7̽)o3YΫ 8'tf%afB$h 8!~9x!P8 Uƿq60_Sao?(@Ü+7 C0޿;  !2/#/j}G;cbМgkLz*;>~b8[51{p1wMnbfKoUHpg(#yLI*a355WCn?:Qv3r FtR~.#+Ve}Ƿ3M~`?=>Wnj8kӧ}ODtzjğGQSh S'jBaq*T'їGq.K,~*oKntW e]w=RlF.{n- ]sJ veGxA. sZo YV~n= Bvc>!;$l.LY1d1%d4gu]/YN*tmPwz2,ɲ#{w'{ s^5qtdOb~TqA>a[=EqDsBSXVBI)*_W{*,. |0~tCh/nQI^GB:\ѢIݤ-eɰ#f!v8%|uR><6k2d;{2_tpoi6^Q: iHEZU>SǚYuO 95~#Yˈyn5z|TV&}(%\/S*8Ι ~Ms?DE=K`h8ʅ d i4F>]a;xnb\aSFVSi08UV˩7SCOYdu9}rկ_"pI?jFItRyzy$*CT' >{) iwdr?GUug֔M;aX%}7HՅtJю?f86M|SI>w.QkP3r.=%n0ldP)Pgb`Īp_hMuaYf]B,͈۲\ /$O7)*,zDJ~[0W5RSlDcM%Z F-< $@I҇ŗ)(ONm)T]>ʮfe(_G ;q/ zhaLC0HxrEoЉu3Y8cH)dT{H>Tzo}d[W'6ĵW7BI++<٬뽡`ꃹeBĭ+o[,X"%,qe9"B*ĺv6Bx#!1S|a7~T@ՆӁ$X.oG4kk]l-i~bct!<5Ӆ\< Ƨۮ x]T pMyy{[]k>,Rni5S=dT?R]AGƑ6՟r4T8ƒJ|Y2FK 7^H_wxvwvđgQ0c.ُk]1J>K 4!氫rt5K޷|<=iab7VRsN]{z|ms\cughb2GOuԸV{-^n)gl//sO r~GaŏmÃ#So*D[GQ&ܹ} 7M9BϘivѳq4Qs* >lĥ{$6B%ڤi|Ak-_4ʫ:NT^vr5*E,]t=uQB&u{EĪ` Thcgr|4]MSv~Yi-MY745W0)ZXTډ6Mr (O%g5T+nߎ 8ٔ *wv7Ni4wxLYYe>$puk m1*!B}Q7ː̼OƦ"]ku(\ [ F.@3/hUuሄ7 \VKxP=i|DX<'_3mT*8>CbMt{/=ld9X2OSG./Ȗ4\p&.&o{X< w*8xĆjO>+SÚ볾,Cl1AˈDɑ<_6L JA:EnUU\?T0"/xƾJWXmsDNil,f(Ak{; NݍN[zW}xG:oU׺LC/)QV1$To 㞥MliޑE]f'Np s u0g|^)ŹVw'R{+`;%yBحi4%Rm[UX#ilfn/TfїA9Xw񾉙G:Mx_M]gctm<^[O%bsP #Z%,F2.u80,yඐz9#`0(m- %va"5 } ]q8>ЎF6(֭d](MQow)>`?R+jIr#zd YcG&&r^~O-YߺC )iH-&U4.$߻N'ߔ&w bJzc/ɛݔlyh.BL62 xxYӜ弶 Xd+tWdFwL0޾ޙK>m-|徇2`.3jUp|3S|䵫Mmĉvaœ~@OZ+ny9IޭG#n"ȕ ГXnDUߕ P$]ǧj)C%=|藳(٩͗7]йgnnn65ԸG6\ "ygO!ksZ" Ki]q gtW>^5LMn/\4?<-CMo ѲnB UqTۈT2dL3Mcqk20dm˨8>[μ9A_ rQN@ybFkph}s3N6EO4weNI-=VH~Φsu}&{z_SCdT{Xa#>fԩ50dkjJP*`ٺ 2nRons[i $_|=P'ДnxtJڸҦ91`lk5MOO]}ܱ91ē*Eu~hB YC ͸ZCv;1*G4NSiWa7E1Oe5Ќ6WOƄxd!C)Vݜ'Hom c坮\E z6ߤ-e(|$=$ϼi"s3RN al%eᲖBiq @+Ιڋ ˸~5 CD9t>%@1*y¹.JӯNn?=-d ލ(4L@ɰϼ5dtEO&Xd^ym\~4I>^K;M(EML(}Qh*H64U5!ԶYPJ`PpQpE=_5Eɫ篯%P/f*7ׇzSM UĮ4Wzޱ'F? Ml#xIrmV2Ia +-,bu @ee?fԦCdwRg[wҎg"e]n m=%}-|}< ,4Na*,Qh܊M2J\44_'H'scoըXR?f6|!|!j]| ăb1pSCq)B7:m6 `O*n]@B>ҭ{ز+s)M.uY HGRhy⬻ _2&\R|juGkxY}6-yB}xL]C ; *_vkZanՐLj9[T帼Ѿ9Ãǧ2ʶF"ߺxoGx#p_-i\Aڿ'z>\=4SweXn] v bdmti^D}!Չc=su!(ֺSFp+Eoө:꺵е2'a i``КzJ7ΥmzYԻ!)FNQ5(|Yuյٛ#.eZ=r_p1w;gTU\G_z0R:@lp-ҷnf>I[ƃM`jZex7l5ۗ]Ce)KZ*:?Tp|kܩ>gaeBE$/ӗsg)CŜ$"RGdq}s\ܙ1އC41&,UE}iQ4udK| f^KxU y>hngE_Rn띐 o,dJ">Մ+9>5LM7|.M/sܰ KfDžܦŞ8+s=T Lo!u&\鯸wb|I/MH`ƑD/bEtB܅ﵝX#l!\CAԬ "߁ =l_Lg 2Hl5&7.n {Kp'ƨWym/Xo<Rdr"Wr]]w)QJQ/NorR;F.ui_N+:kiH}+ͣxNb[]z麪|ڮmkh}_]9b~`y)ú{Yâҏf?m> O>$2ʒ^=a@7D:^o/y]C9in?gK%5O|Y?^.9jFMdozo_EPEsrJt8McT.&kgtk8f$Eo슷X8/fB> /W [ 1 3 1 ] /Info 83 0 R /Root 82 0 R /Size 459 /ID [] >> stream xcb&F~0 $8Js?z o 3I%砜qNK> DŀHū Rv d0D@dR("u@ "K"]iid>xGc|~lz d;Ht0YN+A?DJH[0Lځe.U[DH1'K!io'2 endstream endobj startxref 292604 %%EOF BradleyTerry2/inst/doc/BradleyTerry.R0000644000176200001440000002111713160423701017220 0ustar liggesusers### R code from vignette source 'BradleyTerry.Rnw' ### Encoding: UTF-8 ################################################### ### code chunk number 1: set_options ################################################### options(prompt = "R> ", continue = "+ ", width = 70, useFancyQuotes = FALSE, digits = 7) ################################################### ### code chunk number 2: LoadBradleyTerry2 ################################################### library("BradleyTerry2") ################################################### ### code chunk number 3: CitationData ################################################### data("citations", package = "BradleyTerry2") ################################################### ### code chunk number 4: CitationData2 ################################################### citations ################################################### ### code chunk number 5: countsToBinomial ################################################### citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") citations.sf ################################################### ### code chunk number 6: citeModel ################################################### citeModel <- BTm(cbind(win1, win2), journal1, journal2, ~ journal, id = "journal", data = citations.sf) citeModel ################################################### ### code chunk number 7: citeModelupdate ################################################### update(citeModel, refcat = "JASA") ################################################### ### code chunk number 8: citeModelupdate2 ################################################### update(citeModel, br = TRUE) ################################################### ### code chunk number 9: lizModel ################################################### options(show.signif.stars = FALSE) data("flatlizards", package = "BradleyTerry2") lizModel <- BTm(1, winner, loser, ~ SVL[..] + (1|..), data = flatlizards) ################################################### ### code chunk number 10: summarize_lizModel ################################################### summary(lizModel) ################################################### ### code chunk number 11: lizModel2 ################################################### lizModel2 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards) summary(lizModel2) ################################################### ### code chunk number 12: baseball ################################################### data("baseball", package = "BradleyTerry2") head(baseball) ################################################### ### code chunk number 13: baseballModel ################################################### baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") summary(baseballModel1) ################################################### ### code chunk number 14: baseballDataUpdate ################################################### baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) ################################################### ### code chunk number 15: baseballModelupdate ################################################### baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) summary(baseballModel2) ################################################### ### code chunk number 16: CEMSmodel ################################################### data("CEMS", package = "BradleyTerry2") table8.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * LAT[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) ################################################### ### code chunk number 17: BTabilities ################################################### BTabilities(baseballModel2) ################################################### ### code chunk number 18: BTabilities2 ################################################### head(BTabilities(lizModel2), 4) ################################################### ### code chunk number 19: residuals ################################################### res.pearson <- round(residuals(lizModel2), 3) head(cbind(flatlizards$contests, res.pearson), 4) ################################################### ### code chunk number 20: BTresiduals ################################################### res <- residuals(lizModel2, type = "grouped") # with(flatlizards$predictors, plot(throat.PC2, res)) # with(flatlizards$predictors, plot(head.width, res)) ################################################### ### code chunk number 21: residualWLS ################################################### lm(res ~ throat.PC1, weights = attr(res, "weights"), data = flatlizards$predictors) lm(res ~ head.length, weights = attr(res, "weights"), data = flatlizards$predictors) ################################################### ### code chunk number 22: baseballModel2_call ################################################### baseballModel2$call ################################################### ### code chunk number 23: str_baseball ################################################### str(baseball, vec.len = 2) ################################################### ### code chunk number 24: first_comparison ################################################### baseball$home.team[1,] baseball$away.team[1,] ################################################### ### code chunk number 25: first_outcome ################################################### baseball[1, c("home.wins", "away.wins")] ################################################### ### code chunk number 26: str_CEMS ################################################### str(CEMS, vec.len = 2) ################################################### ### code chunk number 27: student-specific_data ################################################### library("prefmod") student <- cemspc[c("ENG", "SEX")] student$ENG <- factor(student$ENG, levels = 1:2, labels = c("good", "poor")) student$SEX <- factor(student$SEX, levels = 1:2, labels = c("female", "male")) ################################################### ### code chunk number 28: student_factor ################################################### cems <- list(student = student) student <- gl(303, 1, 303 * 15) #303 students, 15 comparisons contest <- data.frame(student = student) ################################################### ### code chunk number 29: binomial_response ################################################### win <- cemspc[, 1:15] == 0 lose <- cemspc[, 1:15] == 2 draw <- cemspc[, 1:15] == 1 contest$win.adj <- c(win + draw/2) contest$lose.adj <- c(lose + draw/2) ################################################### ### code chunk number 30: school_factors ################################################### lab <- c("London", "Paris", "Milano", "St. Gallen", "Barcelona", "Stockholm") contest$school1 <- factor(sequence(1:5), levels = 1:6, labels = lab) contest$school2 <- factor(rep(2:6, 1:5), levels = 1:6, labels = lab) ################################################### ### code chunk number 31: cems_data ################################################### cems$contest <- contest ################################################### ### code chunk number 32: functions ################################################### ## cf. prompt options(width = 55) for (fn in getNamespaceExports("BradleyTerry2")) { name <- as.name(fn) args <- formals(fn) n <- length(args) arg.names <- arg.n <- names(args) arg.n[arg.n == "..."] <- "\\dots" is.missing.arg <- function(arg) typeof(arg) == "symbol" && deparse(arg) == "" Call <- paste(name, "(", sep = "") for (i in seq_len(n)) { Call <- paste(Call, arg.names[i], if (!is.missing.arg(args[[i]])) paste(" = ", paste(deparse(args[[i]]), collapse = "\n"), sep = ""), sep = "") if (i != n) Call <- paste(Call, ", ", sep = "") } Call <- paste(Call, ")", sep = "") cat(deparse(parse(text = Call)[[1]], width.cutoff = 50), fill = TRUE) } options(width = 60) BradleyTerry2/tests/0000755000176200001440000000000013152515662014114 5ustar liggesusersBradleyTerry2/tests/baseball.R0000744000176200001440000000130213152515662016001 0ustar liggesuserslibrary(BradleyTerry2) ## This reproduces the analysis in Sec 10.6 of Agresti (2002). data(baseball, package = "BradleyTerry2") ## Simple Bradley-Terry model, ignoring home advantage: baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") summary(baseballModel1) ## Now incorporate the "home advantage" effect baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) summary(baseballModel2) ## Compare the fit of these two models: anova(baseballModel1, baseballModel2) BradleyTerry2/tests/predict.Rout.save0000744000176200001440000001722613152515662017367 0ustar liggesusers R version 3.4.1 (2017-06-30) -- "Single Candle" Copyright (C) 2017 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > ## some awkward cases for predict > ## (in response to bug reports from Arthur Spirling and Fonti Kar) > > options(digits = 4) ## only applies to this file > > ## The final model in example(flatlizards) > library(BradleyTerry2) Loading required package: lme4 Loading required package: Matrix > data(flatlizards, package = "BradleyTerry2") > Whiting.model3 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + + head.length[..] + SVL[..] + (1|..), + family = binomial(link = "probit"), + data = flatlizards, trace = TRUE) Iteration 1. Score = 0.3606 Iteration 2. Score = 0.799 Iteration 3. Score = 0.2863 Iteration 4. Score = 0.003823 Iteration 5. Score = 0.0001769 Iteration 6. Score = 9.28e-06 Iteration 7. Score = 6.795e-07 > > > ## new lizard with original lizards with NAs > lev <- c("lizard048", "lizard052", "lizard096", "lizard059") > newdata <- list(contests = + data.frame(winner = factor(c("lizard096", "lizard059"), + levels = lev), + loser = factor(c("lizard048", "lizard052"), + levels = lev)), + predictors = + rbind(flatlizards$predictors[c(27, 29, 55),-c(1,18) ], + c(NA, 1.5, 1.5, -.2, 3, 1, -1, -1.5, -1.5, 250, + 2000, 1, 0.1, 0.2, 0.5, -0.2))) > > predict(Whiting.model3, level = 1, se.fit = TRUE, newdata = newdata) $fit [1] 19.63 NA $se.fit [1] 38745321 NA > > ## new lizard with NAs - can't predict, go by na.action > lev <- c("lizard006", "lizard011", "lizard048", "lizard059") > newdata <- list(contests = + data.frame(winner = factor(c("lizard048", "lizard059"), + levels = lev), + loser = factor(c("lizard006", "lizard011"), + levels = lev)), + predictors = + rbind(flatlizards$predictors[c(3, 6, 27),-c(1,18) ], + c(NA, 1.5, 1.5, -.2, 3, 1, -1, -1.5, -1.5, 250, + 2000, 1, 0.1, 0.2, 0.5, -0.2))) > > predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata, + na.action = na.pass) $population $population$fit [1] 0.778 NA $population$se.fit [1] 0.3208 NA $individual $individual$fit [1] 1.074 NA $individual$se.fit [1] 0.7846 NA > > predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata, + na.action = na.omit) $population $population$fit [1] 0.778 $population$se.fit [1] 0.3208 $individual $individual$fit [1] 1.074 $individual$se.fit [1] 0.7846 > > ## newdata = original data > tmp <- predict(Whiting.model3) > tmp2 <- predict(Whiting.model3, newdata = flatlizards) > identical(tmp, tmp2) [1] TRUE > > ## new data with separate effects as in original > lev <- c("lizard048", "lizard052", "lizard096", "lizard099") > newdata <- list(contests = + data.frame(winner = factor(c("lizard096", "lizard099"), + levels = lev), + loser = factor(c("lizard048", "lizard052"), + levels = lev)), + predictors = + flatlizards$predictors[c(27, 29, 55, 56),-c(1,18) ]) > > predict(Whiting.model3, level = 1, se.fit = TRUE, newdata = newdata)#[31, 34] $fit [1] 19.6348 0.6275 $se.fit [1] 3.875e+07 7.994e-01 > > predict(Whiting.model3, level = 1, se.fit = TRUE)$fit[c(31, 34)] [1] 19.6348 0.6275 > predict(Whiting.model3, level = 1, se.fit = TRUE)$se.fit[c(31, 34)] [1] 3.875e+07 7.994e-01 > > ## model in which some parameters are inestimable, e.g. contest-level predictor > ## that is same for both players (interactions may be of interest in practice) > > set.seed(1) > data(flatlizards) > flatlizards$contests$rainy <- sample(c(0, 1), nrow(flatlizards$contests), + replace = TRUE) > > example.model <- BTm(1, winner, loser, ~ rainy + + throat.PC1[..] + throat.PC3[..] + + head.length[..] + SVL[..] + (1|..), + family = binomial(link = "probit"), + data = flatlizards, trace = TRUE) Iteration 1. Score = 0.3606 Iteration 2. Score = 0.799 Iteration 3. Score = 0.2863 Iteration 4. Score = 0.003823 Iteration 5. Score = 0.0001769 Iteration 6. Score = 9.28e-06 Iteration 7. Score = 6.795e-07 > example.model Bradley Terry model fit by glmmPQL.fit Call: BTm(outcome = 1, player1 = winner, player2 = loser, formula = ~rainy + throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1 | ..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) Fixed effects: ..lizard096 ..lizard099 rainy throat.PC1[..] 20.4640 0.5977 NA -0.0536 throat.PC3[..] head.length[..] SVL[..] 0.2276 -0.8434 0.1054 Random Effects Std. Dev.: 0.6296 > > > head(predict(example.model, level = 0)) [1] 0.7780 0.5561 1.4796 2.6627 1.7430 -0.6647 > head(predict(example.model, level = 1)) [1] 1.0742 0.8355 2.1851 2.7080 1.9867 -0.4569 > lapply(predict(example.model, level = 0, se.fit = TRUE), head) $fit [1] 0.7780 0.5561 1.4796 2.6627 1.7430 -0.6647 $se.fit [1] 0.3208 0.3365 0.5447 0.8421 0.6608 0.2193 > lapply(predict(example.model, level = 1, se.fit = TRUE), head) $fit [1] 1.0742 0.8355 2.1851 2.7080 1.9867 -0.4569 $se.fit [1] 0.7846 0.7931 0.9306 1.1621 0.9525 0.6889 > # predict for unknown lizards - at level 0 only > lev <- c("lizard100", "lizard101", "lizard102", "lizard103") > newdata <- list(contests = data.frame( + rainy = c(0, 1), + winner = factor(c("lizard100", "lizard101"), + levels = lev), + loser = factor(c("lizard103", "lizard102"), + levels = lev)), + predictors = as.data.frame(lapply(flatlizards$predictors, sample, 4))) > lapply(predict(example.model, level = 0, newdata = newdata, type = "response", + se.fit = TRUE), head) $fit [1] 0.7239 0.9289 $se.fit [1] 0.2748 0.1207 > # predict at level 1 for unknown lizards is NA > predict(example.model, level = 1, newdata = newdata, type = "response", + se.fit = TRUE) $fit [1] NA NA $se.fit [1] NA NA > # predict at level 1 for known lizards > newdata$contests$winner <- factor(paste0("lizard0", c(10, 13)), + levels = paste0("lizard0", 10:13)) > newdata$contests$loser <- factor(c("lizard012", "lizard011"), + levels = paste0("lizard0", 10:13)) > newdata$predictors <- flatlizards$predictors[5:8,] > predict(example.model, level = 1, newdata = newdata, type = "response", + se.fit = TRUE) $fit [1] 0.9966 0.5961 $se.fit [1] 0.2696 0.3219 > > proc.time() user system elapsed 1.884 0.028 1.920 BradleyTerry2/tests/baseball.Rout.save0000744000176200001440000000665413152515662017505 0ustar liggesusers R version 2.10.1 (2009-12-14) Copyright (C) 2009 The R Foundation for Statistical Computing ISBN 3-900051-07-0 R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > library(BradleyTerry2) > > ## This reproduces the analysis in Sec 10.6 of Agresti (2002). > > data(baseball, package = "BradleyTerry2") > > ## Simple Bradley-Terry model, ignoring home advantage: > baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, + data = baseball, id = "team") > summary(baseballModel1) Call: BTm(outcome = cbind(home.wins, away.wins), player1 = home.team, player2 = away.team, id = "team", data = baseball) Deviance Residuals: Min 1Q Median 3Q Max -1.6539 -0.0508 0.4133 0.9736 2.5509 Coefficients: Estimate Std. Error z value Pr(>|z|) teamBoston 1.1077 0.3339 3.318 0.000908 *** teamCleveland 0.6839 0.3319 2.061 0.039345 * teamDetroit 1.4364 0.3396 4.230 2.34e-05 *** teamMilwaukee 1.5814 0.3433 4.607 4.09e-06 *** teamNew York 1.2476 0.3359 3.715 0.000203 *** teamToronto 1.2945 0.3367 3.845 0.000121 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 78.015 on 42 degrees of freedom Residual deviance: 44.053 on 36 degrees of freedom AIC: 140.52 Number of Fisher Scoring iterations: 4 > > ## Now incorporate the "home advantage" effect > baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) > baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) > baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) > summary(baseballModel2) Call: BTm(outcome = cbind(home.wins, away.wins), player1 = home.team, player2 = away.team, formula = ~team + at.home, id = "team", data = baseball) Deviance Residuals: Min 1Q Median 3Q Max -2.03819 -0.40577 0.04326 0.61163 2.26001 Coefficients: Estimate Std. Error z value Pr(>|z|) teamBoston 1.1438 0.3378 3.386 0.000710 *** teamCleveland 0.7047 0.3350 2.104 0.035417 * teamDetroit 1.4754 0.3446 4.282 1.85e-05 *** teamMilwaukee 1.6196 0.3474 4.662 3.13e-06 *** teamNew York 1.2813 0.3404 3.764 0.000167 *** teamToronto 1.3271 0.3403 3.900 9.64e-05 *** at.home 0.3023 0.1309 2.308 0.020981 * --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 78.015 on 42 degrees of freedom Residual deviance: 38.643 on 35 degrees of freedom AIC: 137.11 Number of Fisher Scoring iterations: 4 > > ## Compare the fit of these two models: > anova(baseballModel1, baseballModel2) Analysis of Deviance Table Response: cbind(home.wins, away.wins) Model 1: ~team Model 2: ~team + at.home Resid. Df Resid. Dev Df Deviance 1 36 44.053 2 35 38.643 1 5.4106 > BradleyTerry2/tests/BTabilities.Rout.save0000744000176200001440000000734213152515662020126 0ustar liggesusers R version 3.4.1 (2017-06-30) -- "Single Candle" Copyright (C) 2017 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > library(BradleyTerry2) Loading required package: lme4 Loading required package: Matrix > > ## refcat is not the first level > > ## Convert frequencies to success/failure data > citations.sf <- countsToBinomial(citations) > names(citations.sf)[1:2] <- c("journal1", "journal2") > > ## First fit the "standard" Bradley-Terry model > citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) > BTabilities(citeModel) ability s.e. Biometrika 0.0000000 0.00000000 Comm Statist -2.9490725 0.10254534 JASA -0.4795698 0.06058873 JRSS-B 0.2689541 0.07082997 > > ## Now the same thing with a different "reference" journal > citeModel2 <- update(citeModel, refcat = "JASA") > BTabilities(citeModel2) ability s.e. Biometrika 0.4795698 0.06058873 Comm Statist -2.4695027 0.09817049 JASA 0.0000000 0.00000000 JRSS-B 0.7485238 0.07294356 > > ## modelled by covariates where some parameters inestimable > > summary(chameleon.model <- BTm(player1 = winner, player2 = loser, + formula = ~ prev.wins.2 + ch.res[ID] + prop.main[ID] + (1|ID), id = "ID", + data = chameleons)) PQL algorithm converged to fixed effects model Call: BTm(player1 = winner, player2 = loser, formula = ~prev.wins.2 + ch.res[ID] + prop.main[ID] + (1 | ID), id = "ID", data = chameleons) Deviance Residuals: Min 1Q Median 3Q Max 0.06415 0.34234 0.58244 1.07561 2.80562 Coefficients: Estimate Std. Error z value Pr(>|z|) prev.wins.2 2.38130 0.48495 4.910 9.09e-07 *** ch.res[ID] 0.60931 0.27773 2.194 0.0282 * prop.main[ID] 0.08928 0.04382 2.037 0.0416 * --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 146.947 on 106 degrees of freedom Residual deviance: 85.396 on 103 degrees of freedom AIC: 91.396 Number of Fisher Scoring iterations: NA > head(BTabilities(chameleon.model)) PQL algorithm converged to fixed effects model ability s.e. C01 3.757229 1.655205 C02 2.404778 1.017782 C03 2.319346 1.133959 C04 1.892671 1.399391 C05 2.253472 1.101628 C06 2.015840 1.075806 > > ## sum to zero contrasts > > mod3 <- BTm(cbind(win1, win2), journal1, + journal2, ~ journal, id = "journal", x = FALSE, + contrasts = list(journal = "contr.sum"), data = citations.sf) > > citations.sf$journal1 <- C(citations.sf$journal1, "contr.sum") > citations.sf$journal2 <- C(citations.sf$journal2, "contr.sum") > mod3b <- + BTm(cbind(win1, win2), journal1, journal2, ~ journal, id = "journal", + x = FALSE, data = citations.sf) > > BTabilities(mod3) ability s.e. Biometrika 0.7899221 0.04333047 Comm Statist -2.1591504 0.07257974 JASA 0.3103523 0.04164102 JRSS-B 1.0588761 0.05304699 > BTabilities(mod3b) ability s.e. Biometrika 0.7899221 0.04333047 Comm Statist -2.1591504 0.07257974 JASA 0.3103523 0.04164102 JRSS-B 1.0588761 0.05304699 > > proc.time() user system elapsed 1.452 0.036 1.495 BradleyTerry2/tests/predict.R0000744000176200001440000001150613152515662015675 0ustar liggesusers## some awkward cases for predict ## (in response to bug reports from Arthur Spirling and Fonti Kar) options(digits = 4) ## only applies to this file ## The final model in example(flatlizards) library(BradleyTerry2) data(flatlizards, package = "BradleyTerry2") Whiting.model3 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) ## new lizard with original lizards with NAs lev <- c("lizard048", "lizard052", "lizard096", "lizard059") newdata <- list(contests = data.frame(winner = factor(c("lizard096", "lizard059"), levels = lev), loser = factor(c("lizard048", "lizard052"), levels = lev)), predictors = rbind(flatlizards$predictors[c(27, 29, 55),-c(1,18) ], c(NA, 1.5, 1.5, -.2, 3, 1, -1, -1.5, -1.5, 250, 2000, 1, 0.1, 0.2, 0.5, -0.2))) predict(Whiting.model3, level = 1, se.fit = TRUE, newdata = newdata) ## new lizard with NAs - can't predict, go by na.action lev <- c("lizard006", "lizard011", "lizard048", "lizard059") newdata <- list(contests = data.frame(winner = factor(c("lizard048", "lizard059"), levels = lev), loser = factor(c("lizard006", "lizard011"), levels = lev)), predictors = rbind(flatlizards$predictors[c(3, 6, 27),-c(1,18) ], c(NA, 1.5, 1.5, -.2, 3, 1, -1, -1.5, -1.5, 250, 2000, 1, 0.1, 0.2, 0.5, -0.2))) predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata, na.action = na.pass) predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata, na.action = na.omit) ## newdata = original data tmp <- predict(Whiting.model3) tmp2 <- predict(Whiting.model3, newdata = flatlizards) identical(tmp, tmp2) ## new data with separate effects as in original lev <- c("lizard048", "lizard052", "lizard096", "lizard099") newdata <- list(contests = data.frame(winner = factor(c("lizard096", "lizard099"), levels = lev), loser = factor(c("lizard048", "lizard052"), levels = lev)), predictors = flatlizards$predictors[c(27, 29, 55, 56),-c(1,18) ]) predict(Whiting.model3, level = 1, se.fit = TRUE, newdata = newdata)#[31, 34] predict(Whiting.model3, level = 1, se.fit = TRUE)$fit[c(31, 34)] predict(Whiting.model3, level = 1, se.fit = TRUE)$se.fit[c(31, 34)] ## model in which some parameters are inestimable, e.g. contest-level predictor ## that is same for both players (interactions may be of interest in practice) set.seed(1) data(flatlizards) flatlizards$contests$rainy <- sample(c(0, 1), nrow(flatlizards$contests), replace = TRUE) example.model <- BTm(1, winner, loser, ~ rainy + throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) example.model head(predict(example.model, level = 0)) head(predict(example.model, level = 1)) lapply(predict(example.model, level = 0, se.fit = TRUE), head) lapply(predict(example.model, level = 1, se.fit = TRUE), head) # predict for unknown lizards - at level 0 only lev <- c("lizard100", "lizard101", "lizard102", "lizard103") newdata <- list(contests = data.frame( rainy = c(0, 1), winner = factor(c("lizard100", "lizard101"), levels = lev), loser = factor(c("lizard103", "lizard102"), levels = lev)), predictors = as.data.frame(lapply(flatlizards$predictors, sample, 4))) lapply(predict(example.model, level = 0, newdata = newdata, type = "response", se.fit = TRUE), head) # predict at level 1 for unknown lizards is NA predict(example.model, level = 1, newdata = newdata, type = "response", se.fit = TRUE) # predict at level 1 for known lizards newdata$contests$winner <- factor(paste0("lizard0", c(10, 13)), levels = paste0("lizard0", 10:13)) newdata$contests$loser <- factor(c("lizard012", "lizard011"), levels = paste0("lizard0", 10:13)) newdata$predictors <- flatlizards$predictors[5:8,] predict(example.model, level = 1, newdata = newdata, type = "response", se.fit = TRUE) BradleyTerry2/tests/nested.R0000744000176200001440000000173013152515662015523 0ustar liggesusers## nested use of BTm ## (in response to Jing Hua Zhao's bug report) library(BradleyTerry2) myfun <- function(x) { c2b <- countsToBinomial(x) names(c2b) <- c("allele1", "allele2", "transmitted", "nontransmitted") btx <- BTm(cbind(transmitted, nontransmitted), allele1, allele2, ~allele, id = "allele", data = c2b) } x <- matrix(c(0,0, 0, 2, 0,0, 0, 0, 0, 0, 0, 0, 0,0, 1, 3, 0,0, 0, 2, 3, 0, 0, 0, 2,3,26,35, 7,0, 2,10,11, 3, 4, 1, 2,3,22,26, 6,2, 4, 4,10, 2, 2, 0, 0,1, 7,10, 2,0, 0, 2, 2, 1, 1, 0, 0,0, 1, 4, 0,1, 0, 1, 0, 0, 0, 0, 0,2, 5, 4, 1,1, 0, 0, 0, 2, 0, 0, 0,0, 2, 6, 1,0, 2, 0, 2, 0, 0, 0, 0,3, 6,19, 6,0, 0, 2, 5, 3, 0, 0, 0,0, 3, 1, 1,0, 0, 0, 1, 0, 0, 0, 0,0, 0, 2, 0,0, 0, 0, 0, 0, 0, 0, 0,0, 1, 0, 0,0, 0, 0, 0, 0, 0, 0),nrow=12) colnames(x) <- 1:12 rownames(x) <- 1:12 xx <- myfun(x) BradleyTerry2/tests/add1.Rout.save0000744000176200001440000000456113152515662016544 0ustar liggesusers R version 3.4.1 (2017-06-30) -- "Single Candle" Copyright (C) 2017 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > library(BradleyTerry2) Loading required package: lme4 Loading required package: Matrix > data(flatlizards, package = "BradleyTerry2") > > result <- rep(1, nrow(flatlizards$contests)) > BTmodel1 <- BTm(result, winner, loser, + ~ throat.PC1[..] + throat.PC3[..] + (1|..), + data = flatlizards, + tol = 1e-4, sigma = 2, trace = TRUE) Iteration 1. Score = 5.077924 Iteration 2. Score = 1.087963 Iteration 3. Score = 0.2062877 Iteration 4. Score = 0.01989172 Iteration 5. Score = 0.001411764 Iteration 6. Score = 9.04485e-05 > > drop1(BTmodel1) Single term deletions Model: ~throat.PC1[..] + throat.PC3[..] + (1 | ..) Statistic Df `throat.PC1[..]` 3.4143 1 `throat.PC3[..]` 5.0560 1 > > add1(BTmodel1, ~ . + head.length[..] + SVL[..], test = "Chisq") Single term additions Model: ~throat.PC1[..] + throat.PC3[..] + (1 | ..) Statistic Df P(>|Chi|) head.length[..] 4.4730 1 0.03443 * SVL[..] 2.6624 1 0.10275 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 > > BTmodel2 <- update(BTmodel1, formula = ~ . + head.length[..]) Iteration 1. Score = 4.513682 Iteration 2. Score = 1.312929 Iteration 3. Score = 0.2792263 Iteration 4. Score = 0.0301132 Iteration 5. Score = 0.002205424 Iteration 6. Score = 0.0001348541 > > drop1(BTmodel2, test = "Chisq") Single term deletions Model: ~throat.PC1[..] + throat.PC3[..] + (1 | ..) + head.length[..] Statistic Df P(>|Chi|) `throat.PC1[..]` 4.0522 1 0.04411 * `throat.PC3[..]` 7.2577 1 0.00706 ** `head.length[..]` 4.4730 1 0.03443 * --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 > > proc.time() user system elapsed 1.824 0.024 1.852 BradleyTerry2/tests/countsToBinomial.Rout.save0000744000176200001440000000201013152515662021207 0ustar liggesusers R version 2.10.1 (2009-12-14) Copyright (C) 2009 The R Foundation for Statistical Computing ISBN 3-900051-07-0 R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > library(BradleyTerry2) > data(citations, package = "BradleyTerry2") > > ## Convert frequencies to success/failure data > results <- countsToBinomial(citations) > results player1 player2 win1 win2 1 Biometrika Comm Statist 730 33 2 Biometrika JASA 498 320 3 Biometrika JRSS-B 221 284 4 Comm Statist JASA 68 813 5 Comm Statist JRSS-B 17 276 6 JASA JRSS-B 142 325 > BradleyTerry2/tests/countsToBinomial.R0000744000176200001440000000024113152515662017526 0ustar liggesuserslibrary(BradleyTerry2) data(citations, package = "BradleyTerry2") ## Convert frequencies to success/failure data results <- countsToBinomial(citations) results BradleyTerry2/tests/flatlizards.Rout.save0000744000176200001440000004355413152515662020257 0ustar liggesusers R version 3.4.1 (2017-06-30) -- "Single Candle" Copyright (C) 2017 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > options(digits = 4) ## only applies to this file > library(BradleyTerry2) Loading required package: lme4 Loading required package: Matrix > data(flatlizards, package = "BradleyTerry2") > ## > ## Fit the standard Bradley-Terry model, using the bias-reduced > ## maximum likelihood method: > ## > result <- rep(1, nrow(flatlizards$contests)) > BTmodel <- BTm(result, winner, loser, br = TRUE, data = flatlizards$contests) > summary(BTmodel) Call: BTm(outcome = result, player1 = winner, player2 = loser, data = flatlizards$contests, br = TRUE) Coefficients: (3 not defined because of singularities) Estimate Std. Error z value Pr(>|z|) ..lizard005 -1.2050 4.6460 -0.26 0.80 ..lizard006 -2.8710 5.9226 -0.48 0.63 ..lizard009 0.0795 6.1826 0.01 0.99 ..lizard010 -1.9386 5.3745 -0.36 0.72 ..lizard011 -1.0986 2.3094 -0.48 0.63 ..lizard012 -0.6384 3.2353 -0.20 0.84 ..lizard013 -3.3057 5.5911 -0.59 0.55 ..lizard016 1.1833 2.8190 0.42 0.67 ..lizard018 -1.5657 2.2367 -0.70 0.48 ..lizard020 -0.1601 3.7068 -0.04 0.97 ..lizard023 0.8899 3.0246 0.29 0.77 ..lizard024 -1.0986 2.3094 -0.48 0.63 ..lizard025 -1.8064 5.9131 -0.31 0.76 ..lizard026 -2.1972 3.2660 -0.67 0.50 ..lizard028 -1.2050 3.3044 -0.36 0.72 ..lizard029 -1.2588 2.8995 -0.43 0.66 ..lizard030 0.4602 3.9750 0.12 0.91 ..lizard031 -2.8145 5.0273 -0.56 0.58 ..lizard032 -2.0113 3.7265 -0.54 0.59 ..lizard033 -1.7726 5.2119 -0.34 0.73 ..lizard037 -2.9050 5.4434 -0.53 0.59 ..lizard038 -0.0623 3.1535 -0.02 0.98 ..lizard040 0.4039 2.5376 0.16 0.87 ..lizard045 -0.8704 2.7099 -0.32 0.75 ..lizard047 -1.0986 2.3094 -0.48 0.63 ..lizard048 -1.7724 5.4538 -0.32 0.75 ..lizard050 -2.0113 3.7265 -0.54 0.59 ..lizard052 -0.9126 2.9246 -0.31 0.75 ..lizard053 NA NA NA NA ..lizard055 -1.0681 3.4273 -0.31 0.76 ..lizard058 NA NA NA NA ..lizard060 NA NA NA NA ..lizard061 -1.0971 1.9958 -0.55 0.58 ..lizard062 -2.3037 4.0314 -0.57 0.57 ..lizard063 -0.2087 3.8055 -0.05 0.96 ..lizard064 -0.1707 3.4108 -0.05 0.96 ..lizard065 -1.7159 5.5324 -0.31 0.76 ..lizard066 0.6094 3.3873 0.18 0.86 ..lizard069 -3.4895 5.1139 -0.68 0.50 ..lizard070 2.6012 4.2620 0.61 0.54 ..lizard071 -2.3909 5.6112 -0.43 0.67 ..lizard073 -0.8878 5.1058 -0.17 0.86 ..lizard074 -1.0191 5.7351 -0.18 0.86 ..lizard075 -2.3040 4.0379 -0.57 0.57 ..lizard077 -0.7951 2.6901 -0.30 0.77 ..lizard078 -1.2054 3.3124 -0.36 0.72 ..lizard081 -1.9690 3.5605 -0.55 0.58 ..lizard082 -2.7400 5.4502 -0.50 0.62 ..lizard083 0.4915 2.9395 0.17 0.87 ..lizard084 -1.7796 3.0237 -0.59 0.56 ..lizard086 -2.3234 5.0375 -0.46 0.64 ..lizard089 -3.8713 5.5773 -0.69 0.49 ..lizard091 -3.3161 5.3428 -0.62 0.53 ..lizard096 -0.7485 5.5614 -0.13 0.89 ..lizard099 0.4040 2.7383 0.15 0.88 ..lizard100 -1.9864 5.6038 -0.35 0.72 ..lizard102 -1.8704 5.4605 -0.34 0.73 ..lizard105 -0.6427 3.1202 -0.21 0.84 ..lizard155 -0.9981 3.1045 -0.32 0.75 ..lizard156 -0.5114 3.2164 -0.16 0.87 ..lizard162 0.0847 3.6442 0.02 0.98 ..lizard163 -0.6947 3.4311 -0.20 0.84 ..lizard165 -1.2693 4.1191 -0.31 0.76 ..lizard167 -1.2054 3.3124 -0.36 0.72 ..lizard170 0.5872 2.2387 0.26 0.79 ..lizard171 1.5026 4.8475 0.31 0.76 ..lizard173 -0.9049 2.5622 -0.35 0.72 ..lizard174 1.5026 3.5821 0.42 0.67 ..lizard176 0.4663 2.7358 0.17 0.86 ..lizard177 0.0306 2.5324 0.01 0.99 ..lizard178 -1.1609 3.9087 -0.30 0.77 ..lizard179 -0.4892 2.4780 -0.20 0.84 ..lizard180 -1.3213 2.1046 -0.63 0.53 ..lizard183 -1.0681 3.4273 -0.31 0.76 ..lizard188 -1.8650 3.1822 -0.59 0.56 ..lizard189 -1.9333 3.3020 -0.59 0.56 (Dispersion parameter for binomial family taken to be 1) Null deviance: 62.926 on 100 degrees of freedom Residual deviance: 50.212 on 27 degrees of freedom Penalized deviance: 147.5 AIC: 196.2 > ## > ## That's fairly useless, though, because of the rather small > ## amount of data on each lizard. And really the scientific > ## interest is not in the abilities of these particular 77 > ## lizards, but in the relationship between ability and the > ## measured predictor variables. > ## > ## So next fit (by maximum likelihood) a "structured" B-T model in > ## which abilities are determined by a linear predictor. > ## > ## This reproduces results reported in Table 1 of Whiting et al. (2006): > ## > Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + + head.length[..] + SVL[..], family = binomial, + data = flatlizards) > summary(Whiting.model) Call: BTm(outcome = result, player1 = winner, player2 = loser, formula = ~throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..], family = binomial, data = flatlizards) Deviance Residuals: Min 1Q Median 3Q Max 0.0001 0.4455 0.7961 1.1765 1.9588 Coefficients: Estimate Std. Error z value Pr(>|z|) ..lizard096 16.4245 1194.5203 0.01 0.9890 ..lizard099 0.8358 1.1591 0.72 0.4709 throat.PC1[..] -0.0926 0.0291 -3.19 0.0014 ** throat.PC3[..] 0.3421 0.1109 3.09 0.0020 ** head.length[..] -1.1252 0.4944 -2.28 0.0228 * SVL[..] 0.1883 0.0985 1.91 0.0560 . --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 138.629 on 100 degrees of freedom Residual deviance: 95.361 on 94 degrees of freedom AIC: 107.4 Number of Fisher Scoring iterations: 15 > ## > ## Equivalently, fit the same model using glmmPQL: > ## > Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + + head.length[..] + SVL[..] + (1|..), sigma = 0, + sigma.fixed = TRUE, data = flatlizards) > summary(Whiting.model) PQL algorithm converged to fixed effects model Call: BTm(outcome = result, player1 = winner, player2 = loser, formula = ~throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1 | ..), data = flatlizards, sigma = 0, sigma.fixed = TRUE) Deviance Residuals: Min 1Q Median 3Q Max 0.000 0.446 0.796 1.177 1.959 Coefficients: Estimate Std. Error z value Pr(>|z|) ..lizard096 2.24e+01 3.96e+04 0.00 0.9995 ..lizard099 8.36e-01 1.16e+00 0.72 0.4709 throat.PC1[..] -9.26e-02 2.91e-02 -3.19 0.0014 ** throat.PC3[..] 3.42e-01 1.11e-01 3.09 0.0020 ** head.length[..] -1.13e+00 4.94e-01 -2.28 0.0228 * SVL[..] 1.88e-01 9.85e-02 1.91 0.0560 . --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 138.629 on 100 degrees of freedom Residual deviance: 95.361 on 94 degrees of freedom AIC: 107.4 Number of Fisher Scoring iterations: NA > ## > ## But that analysis assumes that the linear predictor formula for > ## abilities is _perfect_, i.e., that there is no error in the linear > ## predictor. This will always be unrealistic. > ## > ## So now fit the same predictor but with a normally distributed error > ## term --- a generalized linear mixed model --- by using the BTm > ## function instead of glm. > ## > Whiting.model2 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + + head.length[..] + SVL[..] + (1|..), + data = flatlizards, trace = TRUE) Iteration 1. Score = 0.1424 Iteration 2. Score = 0.8836 Iteration 3. Score = 0.3226 Iteration 4. Score = 0.007671 Iteration 5. Score = 0.0006436 Iteration 6. Score = 5.467e-05 Iteration 7. Score = 4.658e-06 Iteration 8. Score = 3.962e-07 > summary(Whiting.model2) Call: BTm(outcome = result, player1 = winner, player2 = loser, formula = ~throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1 | ..), data = flatlizards, trace = TRUE) Fixed Effects: Estimate Std. Error z value Pr(>|z|) ..lizard096 3.67e+01 3.87e+07 0.00 1.000 ..lizard099 9.53e-01 1.28e+00 0.74 0.458 throat.PC1[..] -8.69e-02 4.12e-02 -2.11 0.035 * throat.PC3[..] 3.73e-01 1.53e-01 2.45 0.014 * head.length[..] -1.38e+00 7.39e-01 -1.87 0.061 . SVL[..] 1.72e-01 1.37e-01 1.25 0.210 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Random Effects: Estimate Std. Error z value Pr(>|z|) Std. Dev. 1.110 0.322 3.44 0.00058 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Number of iterations: 8 > ## > ## The estimated coefficients (of throat.PC1, throat.PC3, > ## head.length and SVL are not changed substantially by > ## the recognition of an error term in the model; but the estimated > ## standard errors are larger, as expected. The main conclusions from > ## Whiting et al. (2006) are unaffected. > ## > ## With the normally distributed random error included, it is perhaps > ## at least as natural to use probit rather than logit as the link > ## function: > ## > Whiting.model3 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + + head.length[..] + SVL[..] + (1|..), + family = binomial(link = "probit"), + data = flatlizards, trace = TRUE) Iteration 1. Score = 0.3606 Iteration 2. Score = 0.799 Iteration 3. Score = 0.2863 Iteration 4. Score = 0.003823 Iteration 5. Score = 0.0001769 Iteration 6. Score = 9.28e-06 Iteration 7. Score = 6.795e-07 > summary(Whiting.model3) Call: BTm(outcome = result, player1 = winner, player2 = loser, formula = ~throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1 | ..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) Fixed Effects: Estimate Std. Error z value Pr(>|z|) ..lizard096 2.05e+01 3.87e+07 0.00 1.0000 ..lizard099 5.98e-01 7.38e-01 0.81 0.4178 throat.PC1[..] -5.36e-02 2.32e-02 -2.31 0.0209 * throat.PC3[..] 2.28e-01 8.62e-02 2.64 0.0083 ** head.length[..] -8.43e-01 4.21e-01 -2.01 0.0449 * SVL[..] 1.05e-01 7.76e-02 1.36 0.1746 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Random Effects: Estimate Std. Error z value Pr(>|z|) Std. Dev. 0.630 0.186 3.38 0.00073 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Number of iterations: 7 > BTabilities(Whiting.model3) ability s.e. lizard003 0.95355 2.933e-01 lizard005 0.53816 3.183e-01 lizard006 -0.14147 3.377e-01 lizard009 0.73757 3.665e-01 lizard010 0.64227 3.232e-01 lizard011 -0.06162 4.258e-01 lizard012 -1.84812 6.110e-01 lizard013 0.22103 1.920e-01 lizard016 0.91544 3.519e-01 lizard018 -0.56996 4.624e-01 lizard020 1.40642 4.914e-01 lizard023 -0.36857 1.222e-01 lizard024 -0.75717 3.324e-01 lizard025 0.72348 2.553e-01 lizard026 0.88139 2.870e-01 lizard028 -0.18453 2.132e-01 lizard029 -0.45028 3.763e-01 lizard030 0.81456 2.863e-01 lizard031 -0.17223 2.208e-01 lizard032 -0.25860 2.435e-01 lizard033 -0.47074 2.662e-01 lizard037 -0.70113 3.183e-01 lizard038 0.14185 3.460e-01 lizard040 0.01696 2.886e-01 lizard045 -0.42884 3.105e-01 lizard047 0.43601 1.649e-01 lizard048 0.63648 2.739e-01 lizard050 -0.60271 1.899e-01 lizard052 -0.44522 3.262e-01 lizard053 -0.71500 3.899e-01 lizard055 -1.07833 3.624e-01 lizard058 -0.54361 4.181e-01 lizard060 0.49446 3.946e-01 lizard061 0.46849 2.358e-01 lizard062 0.34632 2.301e-01 lizard063 0.33597 3.267e-01 lizard064 0.33638 4.172e-01 lizard065 -0.28462 2.293e-01 lizard066 0.95006 5.584e-01 lizard069 -1.33897 3.664e-01 lizard070 1.24275 5.065e-01 lizard071 -0.23844 3.145e-01 lizard073 -0.08660 1.703e-01 lizard074 -0.41429 1.308e-01 lizard075 -0.46101 3.525e-01 lizard077 -1.25546 4.186e-01 lizard078 -0.49810 2.521e-01 lizard081 0.50978 2.047e-01 lizard082 -0.44364 3.049e-01 lizard083 -0.10509 2.741e-01 lizard084 -0.37502 2.736e-01 lizard086 0.43640 4.175e-01 lizard089 0.46797 2.035e-01 lizard091 0.76537 2.618e-01 lizard096 20.46403 3.875e+07 lizard099 0.59770 7.377e-01 lizard100 -0.82767 2.606e-01 lizard102 -0.67043 3.512e-01 lizard105 -1.23675 3.977e-01 lizard155 -0.55743 3.604e-01 lizard156 -0.38819 2.065e-01 lizard162 0.01532 1.011e-01 lizard163 0.39476 1.746e-01 lizard165 0.16178 3.076e-01 lizard167 0.53388 2.976e-01 lizard170 0.80979 2.315e-01 lizard171 0.65363 2.985e-01 lizard173 -1.71445 5.203e-01 lizard174 -0.03279 2.826e-01 lizard176 1.10974 3.245e-01 lizard177 1.10664 3.800e-01 lizard178 0.12929 1.987e-01 lizard179 -0.03706 3.023e-01 lizard180 0.48147 3.489e-01 lizard183 -1.33956 4.096e-01 lizard188 -0.27797 5.677e-01 lizard189 0.59487 2.500e-01 attr(,"separate") [1] "lizard096" "lizard099" > residuals(Whiting.model3, "grouped") [,1] ..lizard003 6.156e-01 ..lizard005 6.933e-01 ..lizard006 -6.310e-01 ..lizard009 6.884e-01 ..lizard010 6.040e-01 ..lizard011 -7.168e-01 ..lizard012 -4.018e-01 ..lizard013 -3.721e-01 ..lizard016 8.831e-01 ..lizard018 -7.042e-01 ..lizard020 4.062e-01 ..lizard023 1.059e+00 ..lizard024 -8.984e-01 ..lizard025 4.726e-01 ..lizard026 -1.439e+00 ..lizard028 1.066e-01 ..lizard029 -6.009e-01 ..lizard030 3.319e-01 ..lizard031 -4.843e-01 ..lizard032 -9.097e-01 ..lizard033 3.913e-01 ..lizard037 -6.366e-01 ..lizard038 2.846e-01 ..lizard040 7.221e-01 ..lizard045 4.416e-01 ..lizard047 -1.672e-01 ..lizard048 5.911e-01 ..lizard050 -7.611e-01 ..lizard052 3.784e-01 ..lizard053 8.984e-01 ..lizard055 -3.895e-01 ..lizard058 1.884e+00 ..lizard060 7.168e-01 ..lizard061 -6.822e-01 ..lizard062 -1.025e+00 ..lizard063 -1.059e+00 ..lizard064 7.004e-01 ..lizard065 8.470e-01 ..lizard066 6.191e-01 ..lizard069 -5.736e-01 ..lizard070 6.441e-01 ..lizard071 5.268e-01 ..lizard073 7.469e-01 ..lizard074 5.512e-01 ..lizard075 -7.052e-01 ..lizard077 -4.525e-01 ..lizard078 -5.894e-01 ..lizard081 -1.395e+00 ..lizard082 4.577e-02 ..lizard083 6.290e-01 ..lizard084 -6.531e-01 ..lizard086 -1.663e-01 ..lizard089 -9.758e-01 ..lizard091 -1.058e+00 ..lizard096 1.000e+00 ..lizard099 -8.397e-08 ..lizard100 -5.045e-01 ..lizard102 3.176e-01 ..lizard105 -4.232e-01 ..lizard155 -4.510e-01 ..lizard156 -5.075e-01 ..lizard162 -5.470e-01 ..lizard163 -7.967e-01 ..lizard165 -7.720e-01 ..lizard167 -4.290e-01 ..lizard170 6.048e-01 ..lizard171 -6.812e-01 ..lizard173 -4.104e-01 ..lizard174 7.924e-01 ..lizard176 1.871e-01 ..lizard177 2.405e-02 ..lizard178 -8.898e-01 ..lizard179 -5.000e-02 ..lizard180 -6.809e-01 ..lizard183 -3.574e-01 ..lizard188 -7.392e-01 ..lizard189 -1.101e+00 attr(,"weights") [,1] ..lizard003 9.978e-01 ..lizard005 1.307e+00 ..lizard006 4.136e-01 ..lizard009 4.686e-01 ..lizard010 7.195e-01 ..lizard011 4.917e-01 ..lizard012 2.593e-01 ..lizard013 1.214e+00 ..lizard016 1.147e+00 ..lizard018 6.742e-01 ..lizard020 1.103e-01 ..lizard023 1.583e+00 ..lizard024 5.889e-01 ..lizard025 2.059e-01 ..lizard026 6.302e-01 ..lizard028 1.572e+00 ..lizard029 9.465e-01 ..lizard030 3.083e-02 ..lizard031 2.474e+00 ..lizard032 5.927e-01 ..lizard033 1.595e+00 ..lizard037 9.638e-01 ..lizard038 2.280e+00 ..lizard040 3.002e+00 ..lizard045 2.617e+00 ..lizard047 1.220e+00 ..lizard048 8.228e-01 ..lizard050 5.230e-01 ..lizard052 2.769e+00 ..lizard053 5.889e-01 ..lizard055 8.877e-02 ..lizard058 5.896e-01 ..lizard060 4.917e-01 ..lizard061 9.241e-01 ..lizard062 1.079e+00 ..lizard063 6.253e-01 ..lizard064 1.332e+00 ..lizard065 5.690e-01 ..lizard066 4.008e-01 ..lizard069 1.287e+00 ..lizard070 8.399e-01 ..lizard071 2.845e-01 ..lizard073 2.565e+00 ..lizard074 1.697e+00 ..lizard075 4.826e-01 ..lizard077 4.335e-01 ..lizard078 5.834e-01 ..lizard081 6.327e-01 ..lizard082 1.581e+00 ..lizard083 8.115e-01 ..lizard084 1.179e+00 ..lizard086 1.806e+00 ..lizard089 1.694e+00 ..lizard091 1.232e+00 ..lizard096 6.661e-16 ..lizard099 2.203e+00 ..lizard100 2.527e-01 ..lizard102 1.615e+00 ..lizard105 2.031e-01 ..lizard155 3.411e-01 ..lizard156 2.572e-01 ..lizard162 3.123e-01 ..lizard163 5.442e-01 ..lizard165 5.298e-01 ..lizard167 1.557e+00 ..lizard170 1.199e+00 ..lizard171 4.623e-01 ..lizard173 2.376e-01 ..lizard174 9.911e-01 ..lizard176 2.012e+00 ..lizard177 1.990e+00 ..lizard178 5.859e-01 ..lizard179 1.318e+00 ..lizard180 1.360e+00 ..lizard183 5.284e-02 ..lizard188 9.479e-01 ..lizard189 1.142e+00 > ## Note the "separate" attribute here, identifying two lizards with > ## missing values of at least one predictor variable > ## > ## Modulo the usual scale change between logit and probit, the results > ## are (as expected) very similar to Whiting.model2. > > proc.time() user system elapsed 1.836 0.048 1.904 BradleyTerry2/tests/BTabilities.R0000744000176200001440000000233513152515662016436 0ustar liggesuserslibrary(BradleyTerry2) ## refcat is not the first level ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## First fit the "standard" Bradley-Terry model citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) BTabilities(citeModel) ## Now the same thing with a different "reference" journal citeModel2 <- update(citeModel, refcat = "JASA") BTabilities(citeModel2) ## modelled by covariates where some parameters inestimable summary(chameleon.model <- BTm(player1 = winner, player2 = loser, formula = ~ prev.wins.2 + ch.res[ID] + prop.main[ID] + (1|ID), id = "ID", data = chameleons)) head(BTabilities(chameleon.model)) ## sum to zero contrasts mod3 <- BTm(cbind(win1, win2), journal1, journal2, ~ journal, id = "journal", x = FALSE, contrasts = list(journal = "contr.sum"), data = citations.sf) citations.sf$journal1 <- C(citations.sf$journal1, "contr.sum") citations.sf$journal2 <- C(citations.sf$journal2, "contr.sum") mod3b <- BTm(cbind(win1, win2), journal1, journal2, ~ journal, id = "journal", x = FALSE, data = citations.sf) BTabilities(mod3) BTabilities(mod3b) BradleyTerry2/tests/nested.Rout.save0000744000176200001440000000315413152515662017212 0ustar liggesusers R version 2.11.1 Patched (2010-09-30 r53083) Copyright (C) 2010 The R Foundation for Statistical Computing ISBN 3-900051-07-0 R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > ## nested use of BTm > ## (in response to Jing Hua Zhao's bug report) > library(BradleyTerry2) > myfun <- function(x) { + c2b <- countsToBinomial(x) + names(c2b) <- c("allele1", "allele2", "transmitted", "nontransmitted") + btx <- BTm(cbind(transmitted, nontransmitted), allele1, allele2, + ~allele, id = "allele", data = c2b) + } > > x <- matrix(c(0,0, 0, 2, 0,0, 0, 0, 0, 0, 0, 0, + 0,0, 1, 3, 0,0, 0, 2, 3, 0, 0, 0, + 2,3,26,35, 7,0, 2,10,11, 3, 4, 1, + 2,3,22,26, 6,2, 4, 4,10, 2, 2, 0, + 0,1, 7,10, 2,0, 0, 2, 2, 1, 1, 0, + 0,0, 1, 4, 0,1, 0, 1, 0, 0, 0, 0, + 0,2, 5, 4, 1,1, 0, 0, 0, 2, 0, 0, + 0,0, 2, 6, 1,0, 2, 0, 2, 0, 0, 0, + 0,3, 6,19, 6,0, 0, 2, 5, 3, 0, 0, + 0,0, 3, 1, 1,0, 0, 0, 1, 0, 0, 0, + 0,0, 0, 2, 0,0, 0, 0, 0, 0, 0, 0, + 0,0, 1, 0, 0,0, 0, 0, 0, 0, 0, 0),nrow=12) > colnames(x) <- 1:12 > rownames(x) <- 1:12 > > xx <- myfun(x) > BradleyTerry2/tests/add1.R0000744000176200001440000000074013152515662015052 0ustar liggesuserslibrary(BradleyTerry2) data(flatlizards, package = "BradleyTerry2") result <- rep(1, nrow(flatlizards$contests)) BTmodel1 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + (1|..), data = flatlizards, tol = 1e-4, sigma = 2, trace = TRUE) drop1(BTmodel1) add1(BTmodel1, ~ . + head.length[..] + SVL[..], test = "Chisq") BTmodel2 <- update(BTmodel1, formula = ~ . + head.length[..]) drop1(BTmodel2, test = "Chisq") BradleyTerry2/tests/flatlizards.R0000744000176200001440000000565713152515662016574 0ustar liggesusersoptions(digits = 4) ## only applies to this file library(BradleyTerry2) data(flatlizards, package = "BradleyTerry2") ## ## Fit the standard Bradley-Terry model, using the bias-reduced ## maximum likelihood method: ## result <- rep(1, nrow(flatlizards$contests)) BTmodel <- BTm(result, winner, loser, br = TRUE, data = flatlizards$contests) summary(BTmodel) ## ## That's fairly useless, though, because of the rather small ## amount of data on each lizard. And really the scientific ## interest is not in the abilities of these particular 77 ## lizards, but in the relationship between ability and the ## measured predictor variables. ## ## So next fit (by maximum likelihood) a "structured" B-T model in ## which abilities are determined by a linear predictor. ## ## This reproduces results reported in Table 1 of Whiting et al. (2006): ## Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..], family = binomial, data = flatlizards) summary(Whiting.model) ## ## Equivalently, fit the same model using glmmPQL: ## Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), sigma = 0, sigma.fixed = TRUE, data = flatlizards) summary(Whiting.model) ## ## But that analysis assumes that the linear predictor formula for ## abilities is _perfect_, i.e., that there is no error in the linear ## predictor. This will always be unrealistic. ## ## So now fit the same predictor but with a normally distributed error ## term --- a generalized linear mixed model --- by using the BTm ## function instead of glm. ## Whiting.model2 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards, trace = TRUE) summary(Whiting.model2) ## ## The estimated coefficients (of throat.PC1, throat.PC3, ## head.length and SVL are not changed substantially by ## the recognition of an error term in the model; but the estimated ## standard errors are larger, as expected. The main conclusions from ## Whiting et al. (2006) are unaffected. ## ## With the normally distributed random error included, it is perhaps ## at least as natural to use probit rather than logit as the link ## function: ## Whiting.model3 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) summary(Whiting.model3) BTabilities(Whiting.model3) residuals(Whiting.model3, "grouped") ## Note the "separate" attribute here, identifying two lizards with ## missing values of at least one predictor variable ## ## Modulo the usual scale change between logit and probit, the results ## are (as expected) very similar to Whiting.model2. BradleyTerry2/NAMESPACE0000744000176200001440000000373113160030551014162 0ustar liggesusers# Generated by roxygen2: do not edit by hand S3method(add1,BTm) S3method(anova,BTm) S3method(coef,BTabilities) S3method(drop1,BTm) S3method(formula,BTm) S3method(model.matrix,BTm) S3method(predict,BTglmmPQL) S3method(predict,BTm) S3method(print,BTabilities) S3method(print,BTglmmPQL) S3method(print,BTm) S3method(print,summary.BTglmmPQL) S3method(qvcalc,BTabilities) S3method(residuals,BTm) S3method(summary,BTglmmPQL) S3method(vcov,BTabilities) S3method(vcov,BTglmmPQL) export(BTabilities) export(BTm) export(GenDavidson) export(countsToBinomial) export(glmmPQL) export(glmmPQL.control) export(plotProportions) export(qvcalc) importFrom(brglm,brglm) importFrom(graphics,curve) importFrom(graphics,plot) importFrom(graphics,points) importFrom(gtools,combinations) importFrom(lme4,findbars) importFrom(lme4,nobars) importFrom(qvcalc,qvcalc) importFrom(qvcalc,qvcalc.default) importFrom(stats,.checkMFClasses) importFrom(stats,.getXlevels) importFrom(stats,C) importFrom(stats,add.scope) importFrom(stats,as.formula) importFrom(stats,coef) importFrom(stats,contrasts) importFrom(stats,delete.response) importFrom(stats,drop.scope) importFrom(stats,family) importFrom(stats,fitted) importFrom(stats,formula) importFrom(stats,gaussian) importFrom(stats,glm.control) importFrom(stats,glm.fit) importFrom(stats,is.empty.model) importFrom(stats,model.frame) importFrom(stats,model.matrix) importFrom(stats,model.offset) importFrom(stats,model.response) importFrom(stats,model.weights) importFrom(stats,na.exclude) importFrom(stats,na.omit) importFrom(stats,na.pass) importFrom(stats,napredict) importFrom(stats,naprint) importFrom(stats,optimize) importFrom(stats,pchisq) importFrom(stats,pf) importFrom(stats,plogis) importFrom(stats,pnorm) importFrom(stats,printCoefmat) importFrom(stats,reformulate) importFrom(stats,relevel) importFrom(stats,runif) importFrom(stats,symnum) importFrom(stats,terms) importFrom(stats,update) importFrom(stats,update.formula) importFrom(stats,vcov) importFrom(utils,flush.console) BradleyTerry2/NEWS.md0000744000176200001440000000761313157761356014070 0ustar liggesusersChanges in BradleyTerry2 1.0-8 ============================== * fix bug in `qvcalc.BTabilities` Changes in BradleyTerry2 1.0-7 ============================== Improvements ------------ * new examples of prediction added, including using `predict.BTm` to estimate abilities with non-player abilities set to non-zero values (for models with a fixed reference category). * `qvcalc.BTabilities` moved over from package **qvcalc**. * package imports rather than depends on **lme4**. Changes in behaviour -------------------- * default `level` in `predict.BTm` and `predict.glmmPQL` is 0 if a fixed effects model has been fitted, 1 otherwise. Bug fixes --------- * BTabilities now works (again) for models where the reference category is not the first player. Players are kept in their original order (levels of `player1` and `player2`), but the abilities are returned with the appropriate reference. * BTabilities now works when ability is modelled by covariates and some parameters are inestimable (e.g. as in `chameleons.model` on `?chameleons`). * `predict.BTglmmPQL` now works for models with inestimable parameters Changes in BradleyTerry2 1.0-6 ============================== Changes in behaviour -------------------- * `BTabilities` now returns `NA` for unidentified abilities Bug fixes --------- * BTabilities now respects contrasts argument and contrasts attributes of `player1` and `player2` factors. Also handle unidentified coefficients correctly. Changes in BradleyTerry2 1.0-5 ============================== Bug fixes --------- * no longer imports from **gnm**, so **gnm** need not be installed. Changes in BradleyTerry2 1.0-4 ============================== Bug fixes --------- * depends on **lme4** (>=1.0). Changes in BradleyTerry2 1.0-3 ============================== New Features ------------ * updated football data to include full 2011-12 season. Changes in BradleyTerry2 1.0-2 ============================== New Features ------------ * added football example presented at useR! 2013 with generalised Davidson model for ties. Changes in BradleyTerry2 1.0-1 ============================== Bug fixes --------- * renamed `glmmPQL` object `BTglmmPQL` to avoid conflict with **lme4** (which loads **MASS**). * fixed `BTm` so that it is able to find variables when called inside another function (stackoverflow.com question 14911525). Changes in BradleyTerry2 1.0-0 ============================== * updated references and CITATION to cite JSS paper on BradleyTerry2 Changes in BradleyTerry2 0.9-7 ============================== Bug fixes --------- * fixed `anova.BTmlist` to work for models with random effects * allow models to be specified with no fixed effects Improvements ------------ * updated vignette, including example of bias-reduction, a new example incorporating random effects and a new example on preparing data for use with package Changes in BradleyTerry2 0.9-6 ============================== Bug fixes --------- * fixed `offset` argument to work as documented * corrected documentation for `citations` data Improvements ------------ * updated vignette, to provide more explanation of setting up the data Changes in BradleyTerry2 0.9-5 ============================== * updated contact details Changes in BradleyTerry2 0.9-4 ============================== New Features ------------ * added ice hockey example presented at useR! 2010 Bug fixes --------- * `predict.BTm` now works for models with no random effects and handles new individuals with missing values in predictors. Changes in BradleyTerry2 0.9-3 ============================= New Features ------------ * added predict method for BTm objects. Bug fixes --------- * fixed bug in `BTm.setup` causing problems in finding variables when `BTm` nested within another function. BradleyTerry2/data/0000755000176200001440000000000013152515662013663 5ustar liggesusersBradleyTerry2/data/baseball.RData0000744000176200001440000000054213152515662016347 0ustar liggesusersRN0 vxzƮp@2Q-m4Pq)yQ5g'u~,0֍="!vVlPJ~w Z{cE@@@ Mc4O*ŗng6_4Ujj(4fw!0ZUϕ EtJ(Mi5l%{wt&H7><믪.򓂂+(!u,x1ϺOx֡Ŧ%;rN>x<8#'u9ʟz桙 `-hcUGĐUM;/.vhk[k!ת+}^7BradleyTerry2/data/seeds.R0000744000176200001440000000105313152515662015111 0ustar liggesusersseeds <- structure(list(r = c(10, 23, 23, 26, 17, 5, 53, 55, 32, 46, 10, 8, 10, 8, 23, 0, 3, 22, 15, 32, 3), n = c(39, 62, 81, 51, 39, 6, 74, 72, 51, 79, 13, 16, 30, 28, 45, 4, 12, 41, 30, 51, 7), seed = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), extract = c(0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1)), .Names = c("r", "n", "seed", "extract" ), row.names = c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21"), class = "data.frame") BradleyTerry2/data/citations.R0000744000176200001440000000051613152515662016006 0ustar liggesuserscitations <- structure(c(714, 33, 320, 284, 730, 425, 813, 276, 498, 68, 1072, 325, 221, 17, 142, 188), .Dim = c(4L, 4L), .Dimnames = structure(list( cited = c("Biometrika", "Comm Statist", "JASA", "JRSS-B"), citing = c("Biometrika", "Comm Statist", "JASA", "JRSS-B" )), .Names = c("cited", "citing")), class = "table") BradleyTerry2/data/sound.fields.R0000744000176200001440000001010213152515662016376 0ustar liggesuserssound.fields <- structure(list(comparisons = structure(list(field1 = structure(c(8L, 8L, 8L, 8L, 8L, 8L, 8L, 7L, 7L, 7L, 7L, 7L, 7L, 6L, 6L, 6L, 6L, 6L, 5L, 5L, 5L, 5L, 4L, 4L, 4L, 3L, 3L, 2L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 7L, 7L, 7L, 7L, 7L, 7L, 6L, 6L, 6L, 6L, 6L, 5L, 5L, 5L, 5L, 4L, 4L, 4L, 3L, 3L, 2L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 7L, 7L, 7L, 7L, 7L, 7L, 6L, 6L, 6L, 6L, 6L, 5L, 5L, 5L, 5L, 4L, 4L, 4L, 3L, 3L, 2L), .Label = c("000", "001", "010", "011", "100", "101", "110", "111"), class = "factor"), field2 = structure(c(7L, 6L, 5L, 4L, 3L, 2L, 1L, 6L, 5L, 4L, 3L, 2L, 1L, 5L, 4L, 3L, 2L, 1L, 4L, 3L, 2L, 1L, 3L, 2L, 1L, 2L, 1L, 1L, 7L, 6L, 5L, 4L, 3L, 2L, 1L, 6L, 5L, 4L, 3L, 2L, 1L, 5L, 4L, 3L, 2L, 1L, 4L, 3L, 2L, 1L, 3L, 2L, 1L, 2L, 1L, 1L, 7L, 6L, 5L, 4L, 3L, 2L, 1L, 6L, 5L, 4L, 3L, 2L, 1L, 5L, 4L, 3L, 2L, 1L, 4L, 3L, 2L, 1L, 3L, 2L, 1L, 2L, 1L, 1L), .Label = c("000", "001", "010", "011", "100", "101", "110", "111"), class = "factor"), win1 = c(4L, 4L, 6L, 3L, 8L, 7L, 7L, 6L, 6L, 7L, 3L, 7L, 7L, 5L, 5L, 4L, 6L, 7L, 5L, 3L, 7L, 3L, 6L, 5L, 8L, 6L, 6L, 5L, 0L, 3L, 2L, 2L, 4L, 4L, 2L, 1L, 1L, 4L, 2L, 4L, 4L, 1L, 2L, 1L, 3L, 3L, 1L, 2L, 5L, 4L, 0L, 2L, 4L, 3L, 4L, 1L, 1L, 0L, 0L, 2L, 3L, 5L, 5L, 3L, 1L, 2L, 2L, 4L, 3L, 3L, 2L, 3L, 3L, 3L, 1L, 1L, 5L, 4L, 2L, 4L, 1L, 5L, 4L, 2L), tie = c(4L, 2L, 2L, 4L, 0L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 1L, 2L, 3L, 2L, 1L, 1L, 2L, 2L, 2L, 4L, 1L, 4L, 0L, 1L, 2L, 2L, 3L, 1L, 1L, 2L, 0L, 0L, 3L, 3L, 1L, 1L, 3L, 0L, 1L, 2L, 3L, 3L, 2L, 2L, 3L, 1L, 0L, 0L, 3L, 3L, 0L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 0L, 0L, 0L, 2L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 2L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 2L, 0L, 1L, 2L), win2 = c(2L, 4L, 2L, 3L, 2L, 1L, 1L, 2L, 2L, 0L, 4L, 0L, 2L, 3L, 2L, 4L, 3L, 2L, 3L, 5L, 1L, 3L, 3L, 1L, 2L, 3L, 2L, 3L, 2L, 1L, 2L, 1L, 1L, 1L, 0L, 1L, 3L, 0L, 0L, 1L, 0L, 2L, 0L, 1L, 0L, 0L, 1L, 2L, 0L, 1L, 2L, 0L, 1L, 0L, 0L, 2L, 3L, 4L, 4L, 1L, 1L, 0L, 0L, 2L, 2L, 2L, 2L, 0L, 1L, 2L, 3L, 2L, 0L, 1L, 4L, 3L, 0L, 1L, 3L, 0L, 2L, 0L, 0L, 1L), win1.adj = c(6, 5, 7, 5, 8, 8, 8, 7, 7, 8.5, 4.5, 8.5, 7.5, 6, 6.5, 5, 6.5, 7.5, 6, 4, 8, 5, 6.5, 7, 8, 6.5, 7, 6, 1.5, 3.5, 2.5, 3, 4, 4, 3.5, 2.5, 1.5, 4.5, 3.5, 4, 4.5, 2, 3.5, 2.5, 4, 4, 2.5, 2.5, 5, 4, 1.5, 3.5, 4, 4, 4.5, 2, 1.5, 0.5, 0.5, 3, 3.5, 5, 5, 3, 2, 2.5, 2.5, 4.5, 3.5, 3, 2, 3, 4, 3.5, 1, 1.5, 5, 4, 2, 4.5, 2, 5, 4.5, 3), win2.adj = c(4, 5, 3, 5, 2, 2, 2, 3, 3, 1.5, 5.5, 1.5, 2.5, 4, 3.5, 5, 3.5, 2.5, 4, 6, 2, 5, 3.5, 3, 2, 3.5, 3, 4, 3.5, 1.5, 2.5, 2, 1, 1, 1.5, 2.5, 3.5, 0.5, 1.5, 1, 0.5, 3, 1.5, 2.5, 1, 1, 2.5, 2.5, 0, 1, 3.5, 1.5, 1, 1, 0.5, 3, 3.5, 4.5, 4.5, 2, 1.5, 0, 0, 2, 3, 2.5, 2.5, 0.5, 1.5, 2, 3, 2, 1, 1.5, 4, 3.5, 0, 1, 3, 0.5, 3, 0, 0.5, 2), instrument = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("cello", "flute", "violin"), class = "factor")), .Names = c("field1", "field2", "win1", "tie", "win2", "win1.adj", "win2.adj", "instrument" ), class = "data.frame", row.names = c(NA, -84L)), design = structure(list( a = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L), .Label = c("0", "1"), class = "factor", contrasts = structure(c(-1, 1), .Dim = c(2L, 1L), .Dimnames = list(c("0", "1"), NULL))), b = structure(c(1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L), .Label = c("0", "1"), class = "factor", contrasts = structure(c(-1, 1), .Dim = c(2L, 1L), .Dimnames = list(c("0", "1"), NULL))), c = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), .Label = c("0", "1"), class = "factor", contrasts = structure(c(-1, 1), .Dim = c(2L, 1L), .Dimnames = list(c("0", "1"), NULL)))), .Names = c("a", "b", "c"), row.names = c("000", "001", "010", "011", "100", "101", "110", "111"), class = "data.frame")), .Names = c("comparisons", "design")) BradleyTerry2/data/chameleons.R0000744000176200001440000002073313152515662016132 0ustar liggesuserschameleons <- structure(list(winner = structure(list(ID = structure(c(22L, 4L, 24L, 30L, 17L, 26L, 4L, 23L, 24L, 8L, 16L, 21L, 22L, 17L, 26L, 23L, 24L, 12L, 31L, 28L, 33L, 15L, 10L, 18L, 22L, 20L, 26L, 28L, 33L, 5L, 19L, 8L, 25L, 12L, 31L, 15L, 18L, 8L, 16L, 9L, 12L, 10L, 19L, 11L, 18L, 28L, 5L, 19L, 8L, 12L, 28L, 26L, 34L, 18L, 31L, 16L, 12L, 17L, 34L, 15L, 16L, 25L, 31L, 34L, 15L, 18L, 31L, 12L, 21L, 26L, 33L, 8L, 20L, 34L, 24L, 25L, 33L, 16L, 21L, 15L, 10L, 12L, 10L, 18L, 8L, 13L, 12L, 33L, 28L, 34L, 26L, 25L, 9L, 23L, 22L, 10L, 10L, 9L, 33L, 34L, 26L, 31L, 23L, 13L, 25L, 30L), .Label = c("C01", "C02", "C03", "C04", "C05", "C06", "C08", "C10", "C11", "C12", "C13", "C14", "C15", "C16", "C17", "C19", "C20", "C21", "C22", "C24", "C25", "C26", "C28", "C29", "C30", "C31", "C32", "C34", "C35", "C36", "C37", "C38", "C39", "C40", "C41"), class = "factor"), prev.wins.1 = c(0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 0L), prev.wins.2 = c(0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 2L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 2L, 0L, 2L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 1L, 2L, 1L, 0L, 2L, 1L, 1L, 0L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 0L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 0L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 0L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 0L), prev.wins.all = c(0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 2L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 2L, 0L, 2L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 1L, 2L, 1L, 0L, 2L, 1L, 1L, 0L, 2L, 2L, 1L, 2L, 3L, 3L, 3L, 3L, 0L, 3L, 2L, 2L, 4L, 2L, 1L, 2L, 3L, 1L, 3L, 2L, 3L, 4L, 4L, 5L, 1L, 4L, 2L, 4L, 1L, 3L, 3L, 2L, 3L, 4L, 2L, 4L, 2L, 6L, 3L, 5L, 5L, 0L, 7L, 4L, 4L, 4L, 5L, 3L, 1L, 2L, 3L, 4L, 5L, 2L, 5L, 5L, 6L, 5L, 3L, 1L, 4L, 1L)), .Names = c("ID", "prev.wins.1", "prev.wins.2", "prev.wins.all"), row.names = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L, 37L, 38L, 39L, 40L, 41L, 42L, 43L, 44L, 45L, 46L, 47L, 48L, 49L, 50L, 51L, 52L, 53L, 54L, 55L, 56L, 57L, 58L, 59L, 60L, 61L, 62L, 63L, 64L, 65L, 66L, 67L, 68L, 69L, 70L, 71L, 72L, 73L, 74L, 75L, 76L, 77L, 78L, 79L, 80L, 81L, 82L, 83L, 84L, 85L, 86L, 87L, 88L, 89L, 90L, 91L, 92L, 93L, 94L, 95L, 96L, 97L, 98L, 99L, 100L, 101L, 102L, 103L, 104L, 105L, 106L, 107L), class = "data.frame"), loser = structure(list(ID = structure(c(2L, 7L, 23L, 35L, 29L, 34L, 6L, 30L, 35L, 13L, 25L, 2L, 20L, 34L, 29L, 35L, 30L, 9L, 27L, 3L, 1L, 32L, 11L, 11L, 21L, 2L, 17L, 1L, 3L, 10L, 14L, 16L, 13L, 27L, 9L, 11L, 32L, 25L, 13L, 27L, 31L, 14L, 5L, 32L, 15L, 33L, 14L, 10L, 27L, 3L, 20L, 4L, 6L, 23L, 8L, 27L, 28L, 22L, 7L, 30L, 9L, 27L, 3L, 4L, 14L, 5L, 1L, 20L, 29L, 15L, 31L, 2L, 7L, 35L, 19L, 3L, 4L, 29L, 14L, 17L, 30L, 21L, 17L, 24L, 4L, 27L, 17L, 20L, 30L, 11L, 19L, 6L, 7L, 5L, 5L, 15L, 12L, 14L, 28L, 20L, 24L, 11L, 15L, 3L, 29L, 14L), .Label = c("C01", "C02", "C03", "C04", "C05", "C06", "C08", "C10", "C11", "C12", "C13", "C14", "C15", "C16", "C17", "C19", "C20", "C21", "C22", "C24", "C25", "C26", "C28", "C29", "C30", "C31", "C32", "C34", "C35", "C36", "C37", "C38", "C39", "C40", "C41"), class = "factor"), prev.wins.1 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), prev.wins.2 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 2L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 2L, 0L, 1L, 0L, 2L, 2L, 0L, 1L, 0L, 0L, 1L, 2L, 0L, 2L, 2L, 0L, 2L, 2L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 1L, 0L, 2L, 2L, 0L, 0L, 0L, 2L, 0L, 0L, 0L, 0L, 1L, 0L, 2L, 1L, 2L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 2L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L), prev.wins.all = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 3L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 2L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 2L, 0L, 1L, 0L, 2L, 2L, 0L, 2L, 0L, 0L, 1L, 3L, 0L, 2L, 4L, 0L, 4L, 3L, 0L, 1L, 1L, 0L, 0L, 3L, 0L, 2L, 0L, 1L, 0L, 4L, 5L, 0L, 0L, 0L, 3L, 0L, 3L, 0L, 0L, 3L, 1L, 3L, 3L, 4L, 3L, 0L, 3L, 2L, 1L, 1L, 3L, 0L, 0L, 2L, 2L, 5L, 8L, 0L, 5L, 2L, 4L, 1L, 5L, 0L, 0L, 0L)), .Names = c("ID", "prev.wins.1", "prev.wins.2", "prev.wins.all"), row.names = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L, 37L, 38L, 39L, 40L, 41L, 42L, 43L, 44L, 45L, 46L, 47L, 48L, 49L, 50L, 51L, 52L, 53L, 54L, 55L, 56L, 57L, 58L, 59L, 60L, 61L, 62L, 63L, 64L, 65L, 66L, 67L, 68L, 69L, 70L, 71L, 72L, 73L, 74L, 75L, 76L, 77L, 78L, 79L, 80L, 81L, 82L, 83L, 84L, 85L, 86L, 87L, 88L, 89L, 90L, 91L, 92L, 93L, 94L, 95L, 96L, 97L, 98L, 99L, 100L, 101L, 102L, 103L, 104L, 105L, 106L, 107L), class = "data.frame"), predictors = structure(list(ch.res = c(0.94, 0.85, 0.02, -1.76, 0.02, -0.37, -0.87, -0.01, 0.63, 1.63, -0.58, 1.46, -0.54, -0.11, 0.11, -0.21, 0.49, 0.24, 0.02, 0.18, 0.45, -0.59, 0.25, 0.24, 0.18, 0.39, -1.11, -0.4, -1.02, 1.63, -0.47, -0.3, 0.26, -1.88, -1.08), jl.res = c(0.52, 1.41, -0.16, -0.79, 0.25, -0.24, -0.72, -0.21, -0.04, -1.2, -0.19, -1.43, -0.1, 0.33, -0.44, -0.23, 0.73, -0.12, 0.65, 0.72, 0.19, 0.22, -0.03, 0.12, 0.58, 0.22, -0.45, -0.4, -0.83, 1.3, 0.02, -0.26, 0.24, -0.48, -0.14), tl.res = c(-1.58, 5.04, 3.43, -5.63, 2.97, -4.1, 1.83, -3.8, -0.15, -8.68, -1.22, -1.85, 1.59, 3.24, -2.48, 0.74, 4.73, 3.07, -6.44, -0.41, -7.03, -0.84, 8.08, -9.25, -1.48, -1.66, 1.22, -1.84, -2, 11.13, 1.25, 11.95, 5.39, 1.09, 4.31), mass.res = c(-0.45, 0.81, 0.31, -1.4, -0.19, -1.21, -1.46, 0.28, 0.9, 0.52, -0.42, 0.09, 0.24, 0.77, -0.86, -0.16, 0.29, -0.04, 1.19, 0.01, 0.79, -0.11, 0.21, 0.2, 0.24, 0.91, 0.19, 1.29, -0.42, 2.39, 0.07, -0.45, 0.19, -0.91, -0.62), SVL = c(72.8, 74.57, 74.66, 79.17, 89.52, 80.26, 80.31, 72.22, 70.62, 85.91, 81.84, 73.74, 69.32, 90.43, 81.29, 68.47, 77.04, 81.99, 88.7, 76.03, 76.48, 77.83, 82.73, 85.96, 70.12, 78.45, 72.2, 73.74, 77.95, 82.7, 70.32, 81.34, 75.8, 79, 83.3), prop.main = c(35.66853756, 21.13429221, 25.84193276, 33.21091076, 25.10409025, 25.10409025, 27.97210113, 24.3500636, 24.3500636, 31.30644625, 30.65729899, 31.94805943, 34.44990199, 31.30644625, 33.21091076, 22.786498, 31.30644625, 33.21091076, 27.27472868, 27.97210113, 37.46496893, 31.94805943, 27.97210113, 22.786498, 25.10409025, 45, 35.66853756, 33.21091076, 21.13429221, 26.56505118, 27.27472868, 25.10409025, 32.58270626, 31.30644625, 31.30644625), prop.patch = c(50.1848799, 30, 28.65818058, 41.55394871, 38.05672982, 39.8151201, 43.28009362, 34.44990199, 36.86989765, 31.30644625, 39.23152048, 48.44605129, 42.13041476, 38.6454835, 49.02392312, 29.33387425, 38.6454835, 39.8151201, 33.83315867, 30, 49.60344811, 43.85377861, 42.70571713, 27.27472868, 40.97607688, 50.76847952, 47.29428287, 42.13041476, 21.97275978, 44.427004, 37.46496893, 34.44990199, 45, 53.72880156, 43.28009362)), .Names = c("ch.res", "jl.res", "tl.res", "mass.res", "SVL", "prop.main", "prop.patch"), row.names = c("C01", "C02", "C03", "C04", "C05", "C06", "C08", "C10", "C11", "C12", "C13", "C14", "C15", "C16", "C17", "C19", "C20", "C21", "C22", "C24", "C25", "C26", "C28", "C29", "C30", "C31", "C32", "C34", "C35", "C36", "C37", "C38", "C39", "C40", "C41"), class = "data.frame")), .Names = c("winner", "loser", "predictors")) BradleyTerry2/data/icehockey.R0000744000176200001440000015317613152515662015767 0ustar liggesusersicehockey <- structure(list(date = c(20091008L, 20091008L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091009L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091010L, 20091011L, 20091011L, 20091015L, 20091015L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091016L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091017L, 20091018L, 20091020L, 20091022L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091023L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091024L, 20091025L, 20091025L, 20091025L, 20091027L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091030L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091031L, 20091101L, 20091101L, 20091101L, 20091101L, 20091105L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091106L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091107L, 20091108L, 20091108L, 20091108L, 20091108L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091113L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091114L, 20091115L, 20091115L, 20091115L, 20091115L, 20091117L, 20091118L, 20091119L, 20091119L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091120L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091121L, 20091122L, 20091124L, 20091124L, 20091124L, 20091124L, 20091124L, 20091124L, 20091124L, 20091125L, 20091125L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091127L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091128L, 20091129L, 20091130L, 20091201L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091204L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091205L, 20091206L, 20091208L, 20091209L, 20091209L, 20091210L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091211L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091212L, 20091213L, 20091213L, 20091215L, 20091218L, 20091218L, 20091219L, 20091219L, 20091220L, 20091229L, 20091229L, 20091229L, 20091229L, 20091229L, 20091229L, 20091229L, 20091230L, 20091230L, 20091230L, 20091230L, 20091230L, 20091230L, 20100101L, 20100101L, 20100101L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100102L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100103L, 20100104L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100108L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100109L, 20100110L, 20100110L, 20100110L, 20100110L, 20100110L, 20100112L, 20100112L, 20100112L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100115L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100116L, 20100118L, 20100119L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100122L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100123L, 20100124L, 20100125L, 20100126L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100129L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100130L, 20100201L, 20100201L, 20100203L, 20100204L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100205L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100206L, 20100208L, 20100208L, 20100209L, 20100209L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100212L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100213L, 20100216L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100219L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100220L, 20100221L, 20100221L, 20100221L, 20100223L, 20100225L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100226L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100227L, 20100228L, 20100228L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100305L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100306L, 20100307L, 20100307L, 20100307L, 20100307L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100312L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100313L, 20100314L, 20100314L, 20100314L, 20100314L, 20100314L, 20100314L, 20100314L, 20100314L, 20100314L, 20100318L, 20100319L, 20100319L, 20100319L, 20100319L, 20100319L, 20100319L, 20100319L, 20100319L, 20100319L, 20100319L, 20100320L, 20100320L, 20100320L, 20100320L, 20100320L, 20100320L, 20100320L, 20100320L), visitor = structure(c(47L, 49L, 2L, 3L, 6L, 8L, 16L, 15L, 21L, 24L, 27L, 28L, 34L, 29L, 38L, 41L, 47L, 51L, 31L, 55L, 2L, 3L, 8L, 16L, 15L, 21L, 23L, 27L, 28L, 34L, 29L, 39L, 38L, 51L, 53L, 31L, 55L, 24L, 41L, 20L, 46L, 3L, 12L, 11L, 13L, 14L, 18L, 20L, 23L, 28L, 32L, 35L, 33L, 30L, 42L, 40L, 46L, 49L, 50L, 48L, 54L, 3L, 6L, 12L, 13L, 14L, 18L, 28L, 32L, 35L, 33L, 30L, 42L, 40L, 49L, 50L, 48L, 54L, 7L, 43L, 40L, 4L, 10L, 7L, 15L, 18L, 20L, 27L, 35L, 36L, 26L, 37L, 38L, 44L, 46L, 50L, 48L, 52L, 55L, 57L, 56L, 4L, 10L, 20L, 23L, 27L, 28L, 29L, 35L, 36L, 26L, 37L, 38L, 44L, 46L, 50L, 48L, 52L, 53L, 57L, 56L, 18L, 23L, 28L, 4L, 1L, 6L, 11L, 8L, 9L, 16L, 18L, 21L, 22L, 23L, 25L, 28L, 34L, 32L, 29L, 30L, 39L, 40L, 44L, 47L, 49L, 52L, 51L, 55L, 57L, 2L, 4L, 6L, 10L, 8L, 16L, 18L, 21L, 28L, 32L, 29L, 35L, 30L, 39L, 40L, 44L, 46L, 47L, 52L, 51L, 31L, 58L, 2L, 1L, 10L, 7L, 42L, 4L, 6L, 12L, 7L, 11L, 8L, 9L, 13L, 16L, 19L, 20L, 22L, 23L, 24L, 32L, 33L, 26L, 42L, 40L, 37L, 41L, 43L, 48L, 53L, 31L, 58L, 4L, 6L, 12L, 8L, 9L, 16L, 19L, 20L, 22L, 24L, 28L, 32L, 33L, 26L, 37L, 38L, 41L, 43L, 48L, 52L, 53L, 58L, 11L, 25L, 39L, 46L, 2L, 5L, 1L, 4L, 11L, 15L, 14L, 17L, 19L, 21L, 22L, 24L, 27L, 35L, 36L, 39L, 49L, 50L, 48L, 52L, 51L, 31L, 54L, 57L, 2L, 5L, 1L, 6L, 10L, 15L, 14L, 18L, 17L, 19L, 21L, 22L, 24L, 27L, 34L, 29L, 36L, 39L, 40L, 41L, 49L, 50L, 52L, 51L, 31L, 54L, 55L, 57L, 10L, 18L, 41L, 55L, 9L, 49L, 25L, 43L, 2L, 3L, 5L, 6L, 7L, 11L, 8L, 13L, 16L, 23L, 32L, 26L, 30L, 42L, 37L, 44L, 45L, 46L, 47L, 48L, 52L, 53L, 56L, 2L, 3L, 5L, 6L, 7L, 13L, 16L, 23L, 34L, 32L, 29L, 26L, 30L, 42L, 39L, 37L, 38L, 44L, 45L, 47L, 48L, 52L, 53L, 56L, 58L, 35L, 12L, 11L, 15L, 18L, 25L, 52L, 31L, 40L, 45L, 1L, 12L, 8L, 13L, 16L, 24L, 27L, 28L, 32L, 36L, 33L, 41L, 44L, 46L, 51L, 55L, 57L, 56L, 1L, 11L, 8L, 13L, 18L, 24L, 25L, 28L, 32L, 36L, 33L, 39L, 41L, 44L, 51L, 53L, 54L, 57L, 56L, 58L, 19L, 58L, 9L, 3L, 5L, 4L, 12L, 7L, 16L, 15L, 18L, 17L, 20L, 24L, 29L, 35L, 36L, 30L, 42L, 40L, 37L, 38L, 43L, 45L, 46L, 47L, 50L, 52L, 51L, 31L, 55L, 3L, 5L, 4L, 12L, 7L, 15L, 14L, 18L, 17L, 24L, 25L, 34L, 29L, 35L, 36L, 33L, 42L, 40L, 37L, 43L, 45L, 46L, 47L, 50L, 52L, 51L, 53L, 39L, 4L, 7L, 54L, 47L, 14L, 20L, 21L, 28L, 32L, 35L, 33L, 30L, 42L, 39L, 41L, 43L, 45L, 49L, 56L, 4L, 7L, 8L, 14L, 20L, 28L, 34L, 33L, 30L, 42L, 40L, 38L, 41L, 44L, 45L, 53L, 57L, 56L, 29L, 52L, 48L, 36L, 42L, 19L, 30L, 15L, 12L, 14L, 35L, 45L, 47L, 49L, 54L, 17L, 27L, 25L, 29L, 49L, 54L, 42L, 50L, 53L, 2L, 3L, 10L, 7L, 8L, 9L, 16L, 15L, 21L, 23L, 24L, 25L, 28L, 34L, 42L, 40L, 38L, 50L, 54L, 2L, 10L, 9L, 16L, 15L, 17L, 23L, 24L, 28L, 34L, 26L, 37L, 38L, 41L, 58L, 47L, 3L, 5L, 1L, 4L, 6L, 11L, 8L, 9L, 13L, 14L, 18L, 22L, 23L, 24L, 32L, 29L, 36L, 30L, 49L, 53L, 31L, 54L, 3L, 5L, 1L, 4L, 6L, 8L, 13L, 14L, 18L, 21L, 22L, 23L, 24L, 25L, 34L, 36L, 30L, 53L, 31L, 54L, 57L, 21L, 25L, 49L, 48L, 58L, 41L, 46L, 58L, 2L, 3L, 5L, 1L, 4L, 6L, 10L, 11L, 8L, 15L, 17L, 27L, 28L, 32L, 35L, 26L, 42L, 37L, 38L, 50L, 51L, 31L, 56L, 2L, 3L, 5L, 1L, 6L, 10L, 8L, 9L, 15L, 17L, 19L, 23L, 27L, 25L, 28L, 34L, 32L, 26L, 42L, 37L, 38L, 43L, 49L, 50L, 51L, 56L, 19L, 46L, 12L, 11L, 9L, 13L, 14L, 18L, 19L, 20L, 21L, 22L, 23L, 34L, 32L, 33L, 26L, 39L, 40L, 37L, 41L, 43L, 44L, 50L, 48L, 57L, 58L, 10L, 12L, 7L, 9L, 13L, 14L, 18L, 19L, 20L, 22L, 23L, 32L, 29L, 39L, 40L, 37L, 41L, 43L, 44L, 46L, 50L, 48L, 51L, 55L, 57L, 58L, 55L, 18L, 52L, 2L, 5L, 12L, 16L, 20L, 21L, 27L, 25L, 29L, 33L, 30L, 42L, 39L, 40L, 45L, 46L, 47L, 49L, 48L, 52L, 51L, 53L, 31L, 54L, 57L, 56L, 2L, 5L, 12L, 16L, 20L, 21L, 27L, 25L, 34L, 35L, 33L, 30L, 42L, 40L, 45L, 49L, 48L, 52L, 51L, 53L, 54L, 57L, 56L, 22L, 38L, 48L, 29L, 2L, 1L, 6L, 12L, 7L, 11L, 13L, 16L, 15L, 17L, 19L, 22L, 24L, 28L, 35L, 26L, 39L, 38L, 41L, 43L, 44L, 50L, 53L, 55L, 1L, 6L, 12L, 13L, 16L, 15L, 17L, 19L, 20L, 22L, 24L, 28L, 29L, 35L, 26L, 39L, 41L, 43L, 44L, 50L, 53L, 31L, 55L, 7L, 22L, 8L, 18L, 3L, 5L, 10L, 7L, 9L, 21L, 23L, 27L, 28L, 34L, 32L, 29L, 33L, 30L, 37L, 38L, 45L, 46L, 47L, 49L, 50L, 52L, 54L, 57L, 58L, 5L, 4L, 10L, 12L, 9L, 13L, 21L, 27L, 25L, 34L, 32L, 29L, 33L, 30L, 37L, 45L, 47L, 49L, 50L, 31L, 54L, 57L, 58L, 17L, 2L, 1L, 6L, 12L, 7L, 11L, 16L, 15L, 14L, 17L, 23L, 24L, 25L, 35L, 36L, 26L, 42L, 40L, 41L, 43L, 44L, 45L, 47L, 51L, 53L, 31L, 55L, 2L, 1L, 4L, 6L, 16L, 15L, 14L, 18L, 17L, 21L, 24L, 34L, 36L, 26L, 42L, 40L, 41L, 43L, 44L, 45L, 46L, 47L, 48L, 52L, 51L, 53L, 31L, 55L, 4L, 38L, 48L, 34L, 43L, 3L, 5L, 6L, 10L, 12L, 11L, 8L, 9L, 13L, 19L, 20L, 22L, 24L, 27L, 25L, 28L, 33L, 39L, 37L, 44L, 46L, 49L, 52L, 54L, 57L, 56L, 58L, 3L, 1L, 4L, 10L, 9L, 13L, 19L, 20L, 21L, 22L, 23L, 24L, 27L, 28L, 32L, 29L, 35L, 33L, 37L, 38L, 49L, 52L, 31L, 54L, 56L, 58L, 3L, 11L, 4L, 10L, 7L, 8L, 9L, 16L, 14L, 19L, 22L, 24L, 25L, 34L, 36L, 26L, 40L, 38L, 43L, 51L, 55L, 57L, 56L, 10L, 11L, 8L, 9L, 16L, 18L, 19L, 20L, 22L, 24L, 25L, 36L, 26L, 30L, 39L, 43L, 46L, 48L, 55L, 57L, 9L, 16L, 19L, 56L, 3L, 5L, 1L, 6L, 9L, 13L, 14L, 18L, 22L, 23L, 25L, 34L, 29L, 36L, 33L, 30L, 42L, 40L, 44L, 47L, 53L, 31L, 55L, 5L, 1L, 6L, 9L, 13L, 14L, 18L, 22L, 23L, 25L, 34L, 29L, 36L, 33L, 30L, 42L, 40L, 44L, 47L, 48L, 53L, 31L, 55L, 9L, 14L, 34L, 33L, 30L, 44L, 47L, 31L, 55L, 26L, 2L, 9L, 13L, 27L, 29L, 37L, 41L, 51L, 53L, 55L, 9L, 21L, 27L, 29L, 37L, 52L, 54L, 56L), .Label = c("Alaska Anchorage", "Air Force", "Alab-Huntsville", "American Int'l", "Alaska", "Army", "Boston College", "Bowling Green", "Brown", "Bemidji State", "Boston University", "Bentley", "Canisius", "Colorado College", "Colgate", "Clarkson", "Cornell", "Connecticut", "Dartmouth", "Denver", "Ferris State", "Harvard", "Holy Cross", "Lake Superior", "Massachusetts", "Minnesota Duluth", "Maine", "Mercyhurst", "Michigan", "Minnesota State", "UMass Lowell", "Miami", "Minnesota", "Merrimack", "Michigan State", "Michigan Tech", "North Dakota", "Northeastern", "New Hampshire", "Niagara", "Northern Michigan", "Nebraska-Omaha", "Notre Dame", "Ohio State", "Princeton", "Providence", "Quinnipiac", "Robert Morris", "Rensselaer", "RIT", "St. Cloud State", "Sacred Heart", "St. Lawrence", "Union", "Vermont", "Wisconsin", "Western Michigan", "Yale"), class = "factor"), v_goals = c(4L, 2L, 1L, 3L, 4L, 2L, 1L, 4L, 5L, 3L, 1L, 3L, 2L, 0L, 2L, 3L, 3L, 2L, 3L, 4L, 3L, 1L, 1L, 3L, 3L, 0L, 1L, 3L, 1L, 2L, 6L, 1L, 4L, 0L, 5L, 3L, 6L, 3L, 3L, 2L, 3L, 4L, 2L, 2L, 4L, 3L, 1L, 0L, 1L, 1L, 6L, 5L, 0L, 2L, 1L, 3L, 0L, 0L, 1L, 2L, 3L, 4L, 3L, 2L, 0L, 1L, 1L, 3L, 5L, 3L, 3L, 2L, 4L, 3L, 1L, 3L, 5L, 1L, 1L, 3L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 3L, 5L, 2L, 2L, 8L, 4L, 4L, 3L, 2L, 2L, 6L, 2L, 5L, 3L, 3L, 2L, 3L, 2L, 3L, 5L, 2L, 1L, 1L, 4L, 3L, 2L, 1L, 6L, 1L, 2L, 2L, 1L, 3L, 4L, 3L, 1L, 4L, 5L, 3L, 0L, 1L, 2L, 1L, 5L, 0L, 5L, 2L, 3L, 4L, 5L, 3L, 1L, 2L, 1L, 5L, 4L, 3L, 1L, 1L, 1L, 1L, 4L, 2L, 2L, 1L, 2L, 0L, 2L, 6L, 2L, 6L, 5L, 4L, 1L, 2L, 2L, 2L, 5L, 3L, 3L, 3L, 5L, 4L, 1L, 2L, 3L, 0L, 1L, 1L, 3L, 4L, 0L, 1L, 3L, 6L, 2L, 1L, 3L, 4L, 4L, 3L, 3L, 2L, 4L, 3L, 1L, 4L, 1L, 3L, 0L, 5L, 3L, 2L, 1L, 1L, 0L, 2L, 1L, 1L, 2L, 3L, 3L, 1L, 3L, 5L, 5L, 2L, 4L, 1L, 4L, 1L, 1L, 5L, 2L, 3L, 2L, 4L, 3L, 0L, 1L, 1L, 1L, 1L, 3L, 5L, 1L, 2L, 1L, 2L, 1L, 0L, 6L, 3L, 3L, 3L, 5L, 2L, 5L, 3L, 2L, 3L, 4L, 2L, 2L, 5L, 2L, 0L, 1L, 3L, 3L, 4L, 6L, 2L, 1L, 3L, 3L, 2L, 4L, 0L, 1L, 4L, 3L, 3L, 1L, 3L, 1L, 3L, 4L, 5L, 1L, 2L, 6L, 0L, 2L, 3L, 3L, 1L, 4L, 1L, 2L, 1L, 3L, 5L, 4L, 2L, 4L, 7L, 3L, 1L, 3L, 4L, 8L, 1L, 0L, 1L, 2L, 1L, 5L, 3L, 2L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 4L, 2L, 1L, 3L, 1L, 4L, 3L, 1L, 3L, 3L, 2L, 3L, 5L, 4L, 3L, 1L, 3L, 2L, 4L, 6L, 1L, 3L, 6L, 2L, 0L, 6L, 2L, 2L, 5L, 1L, 0L, 2L, 1L, 1L, 5L, 0L, 3L, 8L, 2L, 2L, 0L, 3L, 1L, 2L, 3L, 3L, 3L, 7L, 3L, 3L, 4L, 2L, 1L, 2L, 4L, 2L, 5L, 2L, 2L, 5L, 4L, 2L, 3L, 1L, 5L, 2L, 2L, 7L, 6L, 0L, 4L, 2L, 1L, 3L, 5L, 3L, 3L, 1L, 0L, 2L, 2L, 2L, 3L, 1L, 2L, 3L, 0L, 2L, 4L, 1L, 0L, 1L, 1L, 4L, 1L, 1L, 3L, 1L, 3L, 4L, 4L, 2L, 3L, 4L, 5L, 4L, 1L, 2L, 5L, 3L, 1L, 2L, 2L, 0L, 6L, 1L, 2L, 2L, 0L, 1L, 1L, 1L, 3L, 2L, 3L, 4L, 5L, 2L, 3L, 5L, 1L, 4L, 3L, 3L, 3L, 6L, 3L, 3L, 5L, 3L, 4L, 1L, 1L, 1L, 5L, 3L, 2L, 3L, 1L, 4L, 3L, 3L, 1L, 3L, 3L, 5L, 3L, 1L, 3L, 2L, 3L, 0L, 3L, 4L, 0L, 4L, 5L, 1L, 1L, 3L, 1L, 6L, 4L, 4L, 10L, 6L, 1L, 4L, 3L, 2L, 3L, 1L, 5L, 1L, 7L, 0L, 1L, 5L, 1L, 3L, 0L, 3L, 1L, 1L, 0L, 2L, 1L, 0L, 7L, 3L, 0L, 4L, 2L, 1L, 7L, 0L, 2L, 1L, 3L, 3L, 3L, 1L, 5L, 4L, 2L, 1L, 1L, 2L, 3L, 2L, 4L, 2L, 3L, 1L, 3L, 1L, 2L, 3L, 3L, 2L, 4L, 4L, 2L, 1L, 2L, 2L, 2L, 1L, 4L, 2L, 1L, 4L, 7L, 2L, 7L, 3L, 1L, 2L, 3L, 0L, 3L, 3L, 1L, 2L, 5L, 1L, 0L, 3L, 2L, 2L, 1L, 2L, 3L, 6L, 2L, 1L, 4L, 4L, 4L, 2L, 4L, 1L, 1L, 2L, 6L, 2L, 0L, 6L, 1L, 2L, 4L, 1L, 0L, 0L, 3L, 1L, 3L, 4L, 2L, 2L, 1L, 3L, 3L, 4L, 3L, 2L, 4L, 3L, 2L, 3L, 5L, 4L, 4L, 3L, 5L, 6L, 1L, 2L, 0L, 2L, 2L, 2L, 4L, 5L, 4L, 4L, 1L, 2L, 4L, 1L, 1L, 3L, 5L, 1L, 1L, 4L, 5L, 2L, 2L, 6L, 0L, 1L, 3L, 0L, 4L, 6L, 4L, 1L, 3L, 1L, 3L, 6L, 0L, 2L, 6L, 2L, 5L, 1L, 4L, 3L, 5L, 3L, 1L, 3L, 5L, 1L, 2L, 3L, 3L, 3L, 3L, 5L, 2L, 4L, 3L, 3L, 3L, 1L, 4L, 1L, 4L, 3L, 4L, 1L, 2L, 2L, 3L, 1L, 6L, 2L, 2L, 1L, 1L, 2L, 3L, 6L, 2L, 2L, 7L, 1L, 5L, 2L, 2L, 2L, 2L, 3L, 3L, 1L, 4L, 5L, 1L, 1L, 3L, 1L, 5L, 3L, 2L, 5L, 3L, 4L, 3L, 4L, 4L, 3L, 4L, 1L, 3L, 2L, 2L, 5L, 4L, 5L, 3L, 4L, 3L, 2L, 1L, 0L, 0L, 1L, 4L, 2L, 0L, 4L, 3L, 5L, 7L, 3L, 3L, 2L, 3L, 2L, 2L, 5L, 0L, 1L, 0L, 1L, 2L, 5L, 3L, 2L, 5L, 5L, 3L, 5L, 0L, 4L, 2L, 5L, 1L, 0L, 3L, 5L, 2L, 3L, 4L, 3L, 2L, 1L, 3L, 3L, 4L, 1L, 3L, 4L, 2L, 1L, 1L, 4L, 1L, 0L, 2L, 3L, 4L, 2L, 1L, 3L, 3L, 6L, 4L, 3L, 2L, 3L, 3L, 1L, 3L, 3L, 2L, 3L, 2L, 5L, 5L, 3L, 4L, 5L, 2L, 6L, 3L, 3L, 5L, 0L, 2L, 4L, 2L, 2L, 3L, 3L, 10L, 1L, 1L, 4L, 8L, 2L, 1L, 3L, 5L, 1L, 3L, 1L, 2L, 6L, 2L, 2L, 4L, 1L, 2L, 4L, 2L, 6L, 0L, 3L, 1L, 0L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 1L, 3L, 5L, 5L, 4L, 4L, 0L, 3L, 2L, 2L, 4L, 4L, 4L, 2L, 4L, 2L, 1L, 3L, 3L, 1L, 4L, 1L, 4L, 1L, 3L, 7L, 1L, 3L, 3L, 5L, 4L, 2L, 4L, 3L, 3L, 1L, 4L, 0L, 0L, 3L, 7L, 8L, 3L, 1L, 3L, 2L, 3L, 0L, 3L, 3L, 1L, 1L, 3L, 1L, 5L, 3L, 4L, 3L, 2L, 0L, 2L, 3L, 1L, 2L, 5L, 7L, 1L, 2L, 2L, 2L, 3L, 4L, 3L, 4L, 1L, 3L, 1L, 0L, 5L, 2L, 0L, 3L, 2L, 0L, 3L, 1L, 1L, 1L, 2L, 2L, 5L, 3L, 3L, 2L, 2L, 3L, 3L, 1L, 3L, 2L, 2L, 2L, 4L, 2L, 5L, 5L, 1L, 2L, 3L, 4L, 1L, 2L, 2L, 0L, 3L, 2L, 4L, 1L, 1L, 4L, 2L, 6L, 7L, 3L, 0L, 4L, 2L, 6L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 3L, 2L, 1L, 1L, 1L, 3L, 1L, 0L, 3L, 5L, 2L, 3L, 1L, 0L, 5L, 2L, 5L, 1L, 0L, 5L, 1L, 5L, 2L, 3L, 2L, 2L, 4L, 1L, 2L, 2L, 3L, 5L, 5L, 1L, 0L, 3L, 2L, 3L, 5L, 2L, 4L, 2L, 2L, 2L, 5L, 1L, 3L, 4L, 0L, 1L, 1L, 0L, 0L, 1L, 2L, 1L, 1L, 2L, 1L, 0L, 1L, 0L, 0L, 5L, 5L, 4L, 5L, 2L, 1L, 0L, 3L, 1L, 6L, 2L, 5L, 1L, 0L, 6L), opponent = structure(c(44L, 25L, 10L, 43L, 42L, 30L, 35L, 40L, 13L, 26L, 54L, 1L, 37L, 5L, 14L, 36L, 44L, 32L, 53L, 20L, 10L, 43L, 30L, 35L, 50L, 13L, 46L, 54L, 5L, 37L, 1L, 49L, 14L, 32L, 6L, 42L, 20L, 36L, 26L, 44L, 43L, 2L, 38L, 25L, 24L, 56L, 21L, 44L, 34L, 57L, 39L, 27L, 37L, 26L, 15L, 16L, 43L, 1L, 53L, 5L, 51L, 2L, 34L, 47L, 24L, 56L, 21L, 57L, 39L, 27L, 37L, 26L, 15L, 53L, 5L, 16L, 1L, 51L, 55L, 11L, 29L, 54L, 41L, 43L, 31L, 12L, 33L, 25L, 32L, 14L, 51L, 1L, 39L, 24L, 8L, 2L, 21L, 49L, 34L, 3L, 30L, 49L, 41L, 33L, 13L, 39L, 6L, 11L, 32L, 14L, 51L, 1L, 31L, 24L, 8L, 2L, 21L, 54L, 16L, 3L, 30L, 34L, 13L, 6L, 52L, 33L, 15L, 31L, 42L, 45L, 26L, 50L, 5L, 19L, 4L, 46L, 12L, 7L, 41L, 24L, 20L, 56L, 17L, 43L, 48L, 54L, 53L, 36L, 27L, 35L, 13L, 23L, 49L, 3L, 42L, 26L, 50L, 5L, 12L, 41L, 24L, 57L, 20L, 56L, 15L, 43L, 25L, 48L, 53L, 36L, 11L, 45L, 13L, 33L, 3L, 34L, 35L, 18L, 50L, 2L, 39L, 38L, 21L, 54L, 28L, 47L, 17L, 1L, 15L, 52L, 57L, 29L, 56L, 14L, 35L, 25L, 36L, 44L, 5L, 10L, 45L, 55L, 49L, 18L, 50L, 2L, 21L, 49L, 45L, 15L, 1L, 17L, 57L, 13L, 29L, 56L, 14L, 36L, 7L, 44L, 5L, 10L, 23L, 47L, 54L, 27L, 40L, 31L, 55L, 23L, 8L, 56L, 6L, 34L, 9L, 30L, 58L, 45L, 32L, 47L, 42L, 38L, 29L, 26L, 25L, 16L, 28L, 40L, 12L, 37L, 3L, 53L, 44L, 23L, 8L, 56L, 4L, 33L, 58L, 30L, 13L, 9L, 47L, 32L, 45L, 42L, 38L, 11L, 35L, 26L, 25L, 48L, 43L, 53L, 28L, 12L, 37L, 3L, 16L, 7L, 44L, 33L, 13L, 43L, 7L, 46L, 40L, 58L, 35L, 18L, 10L, 24L, 12L, 27L, 39L, 29L, 4L, 19L, 28L, 57L, 33L, 1L, 41L, 20L, 21L, 17L, 38L, 15L, 14L, 50L, 22L, 51L, 18L, 10L, 24L, 12L, 27L, 4L, 22L, 28L, 31L, 57L, 8L, 33L, 1L, 41L, 11L, 20L, 46L, 21L, 15L, 17L, 14L, 50L, 19L, 51L, 9L, 43L, 9L, 22L, 17L, 23L, 55L, 58L, 46L, 48L, 47L, 14L, 49L, 43L, 6L, 7L, 54L, 31L, 4L, 10L, 30L, 29L, 5L, 37L, 19L, 20L, 38L, 42L, 35L, 14L, 17L, 43L, 6L, 9L, 12L, 47L, 4L, 37L, 30L, 35L, 34L, 5L, 10L, 20L, 27L, 49L, 42L, 29L, 23L, 22L, 55L, 22L, 48L, 57L, 2L, 13L, 25L, 53L, 54L, 28L, 49L, 14L, 8L, 44L, 41L, 56L, 33L, 21L, 10L, 26L, 34L, 32L, 9L, 27L, 58L, 23L, 6L, 1L, 39L, 11L, 48L, 57L, 2L, 13L, 11L, 49L, 20L, 28L, 54L, 8L, 31L, 38L, 44L, 41L, 56L, 30L, 21L, 10L, 26L, 32L, 58L, 27L, 9L, 23L, 6L, 1L, 16L, 55L, 9L, 22L, 49L, 23L, 51L, 26L, 57L, 2L, 44L, 8L, 36L, 10L, 5L, 46L, 24L, 29L, 31L, 11L, 37L, 47L, 46L, 35L, 51L, 26L, 2L, 25L, 36L, 10L, 5L, 50L, 27L, 24L, 32L, 31L, 55L, 21L, 37L, 43L, 19L, 13L, 41L, 30L, 55L, 42L, 13L, 18L, 17L, 36L, 27L, 22L, 29L, 25L, 45L, 14L, 12L, 36L, 35L, 18L, 20L, 30L, 7L, 52L, 55L, 57L, 20L, 33L, 51L, 41L, 43L, 58L, 31L, 48L, 11L, 26L, 56L, 53L, 37L, 19L, 30L, 6L, 52L, 57L, 51L, 8L, 40L, 39L, 19L, 48L, 3L, 21L, 55L, 43L, 31L, 33L, 56L, 45L, 16L, 42L, 20L, 28L, 52L, 7L, 44L, 19L, 12L, 26L, 2L, 33L, 50L, 35L, 48L, 57L, 51L, 37L, 47L, 40L, 39L, 45L, 16L, 42L, 20L, 28L, 52L, 44L, 12L, 26L, 2L, 43L, 33L, 50L, 35L, 39L, 27L, 51L, 37L, 40L, 38L, 47L, 29L, 43L, 38L, 45L, 32L, 19L, 36L, 7L, 22L, 12L, 48L, 29L, 36L, 23L, 13L, 40L, 46L, 41L, 53L, 16L, 7L, 52L, 21L, 43L, 30L, 24L, 33L, 55L, 18L, 47L, 25L, 14L, 12L, 48L, 29L, 36L, 13L, 40L, 41L, 58L, 16L, 53L, 39L, 4L, 46L, 31L, 52L, 11L, 21L, 30L, 24L, 33L, 55L, 35L, 54L, 18L, 47L, 14L, 22L, 38L, 28L, 7L, 53L, 52L, 1L, 6L, 49L, 56L, 29L, 54L, 2L, 31L, 5L, 51L, 10L, 46L, 47L, 17L, 42L, 24L, 35L, 4L, 15L, 8L, 16L, 26L, 28L, 31L, 16L, 52L, 1L, 6L, 54L, 56L, 49L, 2L, 5L, 21L, 11L, 47L, 17L, 42L, 24L, 35L, 34L, 4L, 15L, 33L, 25L, 8L, 53L, 25L, 45L, 4L, 6L, 41L, 50L, 15L, 37L, 44L, 55L, 11L, 35L, 1L, 36L, 43L, 34L, 3L, 22L, 7L, 19L, 9L, 10L, 18L, 14L, 17L, 38L, 58L, 32L, 26L, 6L, 41L, 50L, 17L, 37L, 44L, 55L, 46L, 39L, 29L, 1L, 36L, 43L, 3L, 19L, 58L, 10L, 18L, 14L, 15L, 9L, 32L, 26L, 7L, 11L, 40L, 8L, 14L, 51L, 23L, 4L, 25L, 31L, 18L, 54L, 45L, 47L, 58L, 9L, 32L, 20L, 5L, 36L, 27L, 34L, 21L, 57L, 42L, 52L, 49L, 46L, 51L, 23L, 4L, 18L, 49L, 47L, 45L, 9L, 2L, 58L, 32L, 14L, 56L, 5L, 36L, 27L, 21L, 57L, 42L, 52L, 54L, 34L, 46L, 11L, 38L, 29L, 23L, 40L, 44L, 48L, 31L, 17L, 24L, 12L, 11L, 13L, 55L, 8L, 42L, 20L, 56L, 51L, 25L, 16L, 39L, 53L, 22L, 6L, 4L, 19L, 41L, 15L, 44L, 52L, 48L, 23L, 15L, 28L, 24L, 11L, 38L, 55L, 8L, 42L, 20L, 56L, 51L, 53L, 16L, 19L, 6L, 7L, 22L, 41L, 17L, 15L, 50L, 30L, 28L, 18L, 38L, 46L, 9L, 19L, 33L, 22L, 52L, 5L, 34L, 21L, 20L, 37L, 32L, 10L, 29L, 8L, 57L, 54L, 49L, 56L, 58L, 27L, 39L, 50L, 30L, 13L, 28L, 58L, 22L, 33L, 12L, 19L, 35L, 5L, 25L, 20L, 37L, 32L, 10L, 29L, 8L, 57L, 49L, 11L, 54L, 3L, 23L, 56L, 9L, 27L, 39L, 13L, 7L, 3L, 7L, 29L, 40L, 1L, 4L, 42L, 23L, 55L, 35L, 47L, 50L, 53L, 30L, 16L, 41L, 34L, 7L, 18L, 26L, 38L, 14L, 32L, 31L, 15L, 2L, 17L, 21L, 36L, 45L, 40L, 5L, 6L, 42L, 45L, 50L, 16L, 30L, 57L, 53L, 12L, 41L, 34L, 18L, 44L, 43L, 8L, 26L, 14L, 39L, 17L, 2L, 46L, 15L, 36L, 47L, 40L, 55L, 23L, 3L, 39L, 42L, 49L, 53L, 20L, 47L, 45L, 29L, 27L, 46L, 37L, 1L, 48L, 11L, 44L, 30L, 31L, 5L, 33L, 3L, 38L, 42L, 49L, 53L, 12L, 47L, 14L, 45L, 29L, 27L, 37L, 1L, 51L, 7L, 44L, 34L, 40L, 31L, 5L, 49L, 53L, 47L, 33L, 48L, 41L, 56L, 2L, 58L, 28L, 26L, 50L, 17L, 52L, 7L, 11L, 35L, 20L, 37L, 51L, 21L, 10L, 32L, 54L, 15L, 27L, 39L, 41L, 56L, 2L, 58L, 28L, 26L, 50L, 17L, 52L, 7L, 11L, 35L, 20L, 37L, 51L, 21L, 3L, 32L, 54L, 10L, 15L, 27L, 39L, 58L, 26L, 11L, 37L, 51L, 32L, 54L, 27L, 39L, 37L, 52L, 17L, 50L, 11L, 32L, 20L, 21L, 56L, 54L, 7L, 53L, 32L, 7L, 41L, 51L, 50L, 17L, 20L), .Label = c("Alaska Anchorage", "Air Force", "Alab-Huntsville", "American Int'l", "Alaska", "Army", "Boston College", "Bowling Green", "Brown", "Bemidji State", "Boston University", "Bentley", "Canisius", "Colorado College", "Colgate", "Clarkson", "Cornell", "Connecticut", "Dartmouth", "Denver", "Ferris State", "Harvard", "Holy Cross", "Lake Superior", "Massachusetts", "Minnesota Duluth", "Maine", "Mercyhurst", "Michigan", "Minnesota State", "UMass Lowell", "Miami", "Minnesota", "Merrimack", "Michigan State", "Michigan Tech", "North Dakota", "Northeastern", "New Hampshire", "Niagara", "Northern Michigan", "Nebraska-Omaha", "Notre Dame", "Ohio State", "Princeton", "Providence", "Quinnipiac", "Robert Morris", "Rensselaer", "RIT", "St. Cloud State", "Sacred Heart", "St. Lawrence", "Union", "Vermont", "Wisconsin", "Western Michigan", "Yale"), class = "factor"), o_goals = c(2L, 5L, 3L, 2L, 6L, 3L, 6L, 4L, 1L, 4L, 4L, 5L, 5L, 2L, 4L, 5L, 1L, 3L, 0L, 5L, 7L, 3L, 4L, 4L, 2L, 1L, 2L, 6L, 5L, 3L, 1L, 3L, 3L, 2L, 2L, 4L, 4L, 1L, 1L, 0L, 2L, 2L, 3L, 3L, 5L, 2L, 2L, 4L, 3L, 5L, 3L, 3L, 4L, 5L, 1L, 4L, 2L, 3L, 3L, 4L, 3L, 2L, 6L, 3L, 3L, 1L, 6L, 4L, 5L, 5L, 3L, 3L, 1L, 4L, 1L, 5L, 3L, 3L, 4L, 0L, 3L, 4L, 3L, 2L, 5L, 4L, 0L, 5L, 2L, 4L, 3L, 0L, 4L, 3L, 2L, 5L, 5L, 4L, 5L, 1L, 0L, 3L, 0L, 0L, 2L, 5L, 3L, 3L, 2L, 8L, 4L, 2L, 3L, 3L, 1L, 3L, 4L, 5L, 4L, 1L, 3L, 4L, 2L, 1L, 5L, 5L, 4L, 4L, 3L, 1L, 4L, 6L, 3L, 3L, 3L, 3L, 4L, 4L, 3L, 1L, 4L, 4L, 3L, 3L, 0L, 3L, 6L, 3L, 4L, 2L, 3L, 4L, 1L, 1L, 3L, 4L, 7L, 3L, 2L, 1L, 3L, 3L, 4L, 6L, 4L, 2L, 1L, 8L, 3L, 1L, 2L, 2L, 1L, 4L, 1L, 5L, 3L, 3L, 6L, 3L, 4L, 1L, 5L, 3L, 4L, 4L, 5L, 2L, 5L, 2L, 3L, 1L, 4L, 3L, 3L, 4L, 2L, 0L, 2L, 2L, 2L, 3L, 5L, 4L, 2L, 3L, 3L, 3L, 4L, 3L, 7L, 6L, 2L, 4L, 1L, 2L, 6L, 1L, 5L, 5L, 3L, 6L, 4L, 5L, 3L, 3L, 2L, 6L, 3L, 0L, 3L, 5L, 2L, 6L, 3L, 0L, 4L, 2L, 2L, 5L, 4L, 2L, 2L, 2L, 4L, 2L, 1L, 3L, 6L, 4L, 1L, 3L, 4L, 2L, 3L, 6L, 2L, 4L, 3L, 2L, 4L, 0L, 4L, 1L, 3L, 1L, 5L, 6L, 2L, 8L, 2L, 3L, 2L, 2L, 2L, 1L, 2L, 0L, 1L, 7L, 4L, 2L, 1L, 2L, 2L, 4L, 4L, 3L, 1L, 2L, 6L, 4L, 1L, 3L, 4L, 2L, 0L, 2L, 5L, 0L, 3L, 2L, 5L, 1L, 8L, 5L, 4L, 3L, 4L, 3L, 2L, 4L, 1L, 6L, 2L, 4L, 3L, 0L, 2L, 4L, 6L, 1L, 1L, 2L, 4L, 6L, 3L, 3L, 3L, 6L, 5L, 2L, 4L, 5L, 3L, 1L, 5L, 4L, 2L, 5L, 4L, 5L, 2L, 8L, 4L, 0L, 5L, 5L, 5L, 2L, 4L, 6L, 0L, 1L, 2L, 3L, 5L, 6L, 3L, 4L, 4L, 5L, 2L, 4L, 3L, 2L, 3L, 4L, 7L, 8L, 2L, 5L, 0L, 5L, 3L, 1L, 4L, 4L, 1L, 2L, 10L, 4L, 4L, 3L, 3L, 2L, 1L, 1L, 2L, 3L, 5L, 4L, 1L, 3L, 3L, 5L, 1L, 1L, 1L, 5L, 0L, 8L, 1L, 2L, 8L, 2L, 3L, 1L, 3L, 3L, 7L, 1L, 2L, 1L, 2L, 3L, 2L, 4L, 6L, 9L, 1L, 3L, 4L, 3L, 2L, 2L, 5L, 2L, 1L, 3L, 6L, 2L, 3L, 3L, 3L, 4L, 4L, 3L, 2L, 2L, 4L, 2L, 3L, 2L, 5L, 2L, 4L, 1L, 4L, 2L, 0L, 3L, 0L, 2L, 2L, 1L, 4L, 3L, 5L, 4L, 4L, 3L, 3L, 5L, 1L, 4L, 1L, 6L, 1L, 4L, 2L, 2L, 3L, 2L, 5L, 6L, 4L, 2L, 3L, 4L, 3L, 2L, 1L, 5L, 2L, 1L, 4L, 2L, 7L, 1L, 2L, 1L, 6L, 3L, 3L, 4L, 3L, 2L, 4L, 3L, 6L, 0L, 7L, 6L, 2L, 4L, 4L, 0L, 4L, 3L, 5L, 4L, 5L, 6L, 3L, 6L, 7L, 6L, 5L, 2L, 3L, 0L, 3L, 1L, 5L, 0L, 5L, 4L, 5L, 2L, 2L, 0L, 4L, 5L, 5L, 3L, 1L, 2L, 2L, 4L, 3L, 1L, 1L, 7L, 3L, 2L, 7L, 9L, 5L, 5L, 2L, 5L, 5L, 4L, 3L, 3L, 4L, 4L, 1L, 2L, 2L, 3L, 3L, 2L, 3L, 5L, 5L, 4L, 2L, 4L, 2L, 2L, 3L, 2L, 3L, 7L, 4L, 5L, 3L, 4L, 5L, 1L, 6L, 2L, 1L, 1L, 1L, 2L, 1L, 4L, 3L, 3L, 3L, 6L, 2L, 1L, 5L, 6L, 3L, 6L, 4L, 1L, 6L, 6L, 0L, 5L, 1L, 2L, 3L, 1L, 0L, 1L, 3L, 0L, 0L, 4L, 3L, 4L, 4L, 3L, 3L, 8L, 2L, 1L, 5L, 1L, 3L, 1L, 3L, 6L, 4L, 3L, 5L, 5L, 9L, 4L, 3L, 2L, 2L, 6L, 4L, 3L, 2L, 4L, 5L, 4L, 1L, 5L, 2L, 3L, 2L, 1L, 4L, 5L, 1L, 4L, 4L, 2L, 2L, 1L, 7L, 1L, 5L, 3L, 4L, 4L, 2L, 4L, 6L, 3L, 2L, 5L, 2L, 1L, 1L, 4L, 3L, 4L, 0L, 3L, 1L, 3L, 1L, 5L, 1L, 2L, 2L, 0L, 3L, 1L, 3L, 3L, 4L, 0L, 8L, 1L, 4L, 2L, 4L, 6L, 0L, 5L, 3L, 6L, 3L, 4L, 4L, 3L, 3L, 3L, 1L, 5L, 4L, 4L, 5L, 3L, 4L, 2L, 2L, 4L, 4L, 2L, 3L, 3L, 4L, 5L, 2L, 3L, 4L, 1L, 4L, 5L, 2L, 2L, 3L, 3L, 3L, 0L, 2L, 2L, 6L, 2L, 2L, 2L, 4L, 6L, 2L, 4L, 1L, 2L, 4L, 5L, 6L, 1L, 2L, 1L, 11L, 1L, 0L, 4L, 2L, 2L, 4L, 2L, 3L, 3L, 1L, 3L, 7L, 5L, 2L, 3L, 1L, 6L, 1L, 3L, 2L, 2L, 2L, 5L, 5L, 1L, 6L, 10L, 5L, 3L, 1L, 2L, 6L, 0L, 4L, 4L, 5L, 4L, 3L, 1L, 3L, 4L, 4L, 3L, 2L, 1L, 5L, 4L, 5L, 4L, 4L, 7L, 1L, 2L, 2L, 4L, 3L, 4L, 4L, 0L, 4L, 5L, 0L, 4L, 0L, 1L, 4L, 7L, 3L, 3L, 4L, 2L, 4L, 6L, 2L, 1L, 5L, 6L, 2L, 2L, 4L, 5L, 8L, 1L, 2L, 4L, 4L, 2L, 2L, 4L, 2L, 1L, 2L, 3L, 5L, 4L, 4L, 3L, 5L, 3L, 6L, 3L, 0L, 3L, 4L, 4L, 4L, 5L, 5L, 6L, 3L, 1L, 4L, 1L, 4L, 5L, 1L, 7L, 8L, 5L, 3L, 4L, 2L, 4L, 5L, 2L, 7L, 4L, 5L, 3L, 5L, 0L, 5L, 5L, 2L, 6L, 5L, 1L, 3L, 0L, 2L, 7L, 4L, 3L, 7L, 2L, 2L, 3L, 5L, 7L, 2L, 7L, 4L, 4L, 4L, 6L, 1L, 4L, 7L, 5L, 8L, 8L, 2L, 1L, 2L, 2L, 6L, 2L, 1L, 2L, 4L, 2L, 6L, 4L, 3L, 6L, 4L, 2L, 2L, 4L, 3L, 3L, 2L, 3L, 7L, 5L, 3L, 3L, 1L, 4L, 1L, 3L, 1L, 3L, 0L, 5L, 2L, 3L, 2L, 3L, 1L, 8L, 2L, 5L, 2L, 4L, 2L, 3L, 4L, 2L, 3L, 6L, 1L, 3L, 1L, 3L, 2L, 5L, 2L, 0L, 5L, 3L, 1L, 5L, 3L, 4L, 5L, 4L, 2L, 2L, 3L, 6L, 4L, 3L, 1L, 3L, 3L, 0L, 6L, 3L, 3L, 3L, 2L, 3L, 8L, 1L, 4L, 1L, 4L, 2L, 3L, 2L, 6L, 0L, 4L, 4L, 3L, 2L, 1L, 3L, 4L, 5L, 4L, 6L, 3L, 1L, 2L, 6L, 4L, 3L, 4L, 6L, 2L, 1L, 1L, 7L, 5L, 7L, 4L, 6L, 4L, 3L, 4L, 3L, 4L, 5L, 2L, 3L, 4L, 2L, 3L, 3L, 3L, 4L, 3L, 3L, 3L, 2L, 0L, 0L, 4L, 3L, 4L, 3L, 2L, 2L, 3L, 0L, 2L, 2L, 3L, 4L, 2L, 2L, 3L, 4L, 0L, 3L, 3L, 0L, 2L, 7L, 1L, 3L, 6L, 3L, 3L), conference = structure(c(6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 5L, 6L, 7L, 6L, 6L, 6L, 6L, 6L, 6L, 7L, 7L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 7L, 6L, 6L, 6L, 6L, 7L, 7L, 6L, 6L, 6L, 6L, 6L, 6L, 5L, 6L, 6L, 6L, 6L, 6L, 6L, 1L, 7L, 5L, 2L, 7L, 7L, 7L, 5L, 2L, 6L, 1L, 6L, 6L, 5L, 6L, 7L, 6L, 6L, 7L, 1L, 5L, 1L, 6L, 2L, 7L, 7L, 7L, 5L, 2L, 6L, 1L, 6L, 6L, 6L, 6L, 7L, 6L, 1L, 1L, 1L, 7L, 6L, 5L, 2L, 6L, 6L, 1L, 2L, 4L, 1L, 5L, 1L, 5L, 2L, 2L, 7L, 6L, 6L, 2L, 6L, 6L, 6L, 7L, 5L, 2L, 1L, 1L, 6L, 3L, 2L, 6L, 1L, 2L, 1L, 2L, 2L, 2L, 7L, 6L, 6L, 2L, 5L, 6L, 6L, 7L, 5L, 6L, 1L, 7L, 3L, 5L, 2L, 1L, 1L, 1L, 5L, 5L, 2L, 4L, 1L, 4L, 4L, 7L, 4L, 1L, 2L, 2L, 7L, 7L, 2L, 6L, 7L, 2L, 2L, 3L, 4L, 5L, 4L, 1L, 1L, 1L, 2L, 4L, 4L, 4L, 7L, 4L, 2L, 1L, 2L, 7L, 7L, 7L, 5L, 2L, 2L, 3L, 1L, 4L, 4L, 5L, 6L, 5L, 5L, 1L, 2L, 7L, 1L, 5L, 4L, 7L, 4L, 4L, 2L, 4L, 2L, 5L, 2L, 7L, 5L, 4L, 1L, 3L, 1L, 7L, 6L, 4L, 2L, 1L, 2L, 7L, 1L, 6L, 4L, 7L, 1L, 4L, 4L, 2L, 4L, 2L, 5L, 5L, 2L, 7L, 5L, 3L, 2L, 4L, 1L, 1L, 7L, 6L, 4L, 5L, 2L, 6L, 1L, 2L, 5L, 6L, 6L, 6L, 2L, 1L, 3L, 2L, 1L, 5L, 5L, 2L, 1L, 4L, 1L, 2L, 7L, 7L, 2L, 7L, 2L, 4L, 5L, 4L, 6L, 1L, 4L, 7L, 1L, 3L, 2L, 1L, 5L, 1L, 4L, 1L, 5L, 2L, 2L, 7L, 7L, 2L, 5L, 7L, 5L, 2L, 4L, 4L, 6L, 1L, 4L, 7L, 4L, 2L, 6L, 6L, 4L, 1L, 5L, 6L, 5L, 3L, 4L, 7L, 6L, 2L, 1L, 6L, 6L, 5L, 1L, 6L, 7L, 6L, 2L, 6L, 6L, 7L, 5L, 2L, 6L, 7L, 6L, 2L, 1L, 6L, 6L, 6L, 1L, 6L, 7L, 6L, 5L, 2L, 6L, 7L, 6L, 6L, 2L, 6L, 6L, 6L, 6L, 4L, 3L, 2L, 1L, 1L, 5L, 4L, 4L, 1L, 4L, 7L, 2L, 2L, 2L, 7L, 7L, 2L, 3L, 7L, 5L, 2L, 4L, 5L, 4L, 1L, 1L, 7L, 5L, 5L, 3L, 2L, 1L, 1L, 5L, 4L, 7L, 1L, 4L, 2L, 5L, 5L, 2L, 2L, 7L, 7L, 2L, 3L, 7L, 2L, 4L, 5L, 4L, 1L, 1L, 7L, 4L, 5L, 6L, 6L, 4L, 6L, 7L, 7L, 2L, 1L, 2L, 2L, 7L, 6L, 2L, 5L, 2L, 2L, 6L, 6L, 7L, 6L, 5L, 2L, 7L, 7L, 1L, 5L, 7L, 6L, 2L, 6L, 5L, 2L, 2L, 6L, 6L, 2L, 7L, 2L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 1L, 6L, 6L, 6L, 4L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 1L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 5L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 1L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 4L, 6L, 2L, 7L, 1L, 1L, 5L, 2L, 4L, 1L, 7L, 1L, 6L, 1L, 2L, 6L, 2L, 7L, 7L, 4L, 6L, 5L, 4L, 6L, 2L, 7L, 1L, 1L, 2L, 1L, 7L, 1L, 2L, 6L, 1L, 2L, 5L, 5L, 7L, 7L, 6L, 5L, 4L, 2L, 2L, 5L, 4L, 6L, 4L, 6L, 5L, 4L, 1L, 3L, 2L, 7L, 1L, 1L, 3L, 5L, 2L, 4L, 4L, 5L, 1L, 2L, 2L, 7L, 2L, 7L, 5L, 1L, 6L, 5L, 7L, 1L, 3L, 2L, 7L, 1L, 3L, 2L, 4L, 4L, 4L, 6L, 1L, 5L, 5L, 1L, 5L, 2L, 7L, 2L, 7L, 5L, 2L, 4L, 1L, 6L, 7L, 4L, 5L, 1L, 5L, 4L, 1L, 7L, 1L, 4L, 7L, 2L, 4L, 1L, 5L, 2L, 7L, 6L, 5L, 6L, 6L, 2L, 2L, 2L, 1L, 6L, 2L, 4L, 6L, 1L, 5L, 4L, 1L, 7L, 1L, 4L, 7L, 4L, 1L, 2L, 2L, 5L, 6L, 6L, 2L, 2L, 2L, 5L, 1L, 6L, 7L, 5L, 2L, 4L, 5L, 6L, 1L, 1L, 2L, 1L, 4L, 7L, 2L, 5L, 5L, 2L, 7L, 7L, 2L, 5L, 3L, 4L, 5L, 4L, 4L, 3L, 1L, 7L, 4L, 5L, 4L, 2L, 7L, 1L, 2L, 1L, 4L, 7L, 2L, 5L, 5L, 5L, 2L, 7L, 7L, 2L, 3L, 4L, 4L, 3L, 1L, 7L, 4L, 4L, 2L, 7L, 6L, 6L, 3L, 2L, 6L, 7L, 1L, 1L, 5L, 5L, 1L, 4L, 4L, 4L, 4L, 4L, 2L, 6L, 2L, 7L, 5L, 5L, 2L, 2L, 2L, 1L, 4L, 5L, 7L, 1L, 1L, 1L, 4L, 4L, 4L, 4L, 6L, 4L, 2L, 6L, 6L, 2L, 7L, 5L, 2L, 2L, 2L, 1L, 4L, 5L, 5L, 6L, 6L, 2L, 1L, 3L, 2L, 3L, 5L, 4L, 2L, 1L, 5L, 1L, 5L, 2L, 2L, 7L, 7L, 7L, 5L, 4L, 5L, 4L, 4L, 1L, 1L, 4L, 2L, 4L, 2L, 1L, 3L, 1L, 4L, 1L, 2L, 5L, 5L, 5L, 2L, 2L, 7L, 7L, 7L, 4L, 4L, 4L, 1L, 5L, 4L, 2L, 4L, 4L, 1L, 7L, 1L, 1L, 5L, 5L, 4L, 4L, 7L, 4L, 1L, 2L, 5L, 2L, 7L, 7L, 2L, 3L, 2L, 2L, 2L, 4L, 4L, 7L, 4L, 5L, 5L, 1L, 7L, 1L, 1L, 4L, 4L, 7L, 1L, 4L, 2L, 2L, 5L, 7L, 7L, 2L, 3L, 2L, 2L, 2L, 4L, 5L, 4L, 3L, 1L, 7L, 4L, 5L, 5L, 1L, 5L, 3L, 5L, 2L, 3L, 6L, 1L, 6L, 1L, 5L, 2L, 4L, 1L, 4L, 7L, 4L, 2L, 5L, 5L, 1L, 7L, 5L, 7L, 2L, 5L, 4L, 1L, 4L, 2L, 7L, 4L, 3L, 6L, 1L, 6L, 4L, 1L, 4L, 7L, 2L, 4L, 1L, 2L, 5L, 1L, 2L, 2L, 2L, 7L, 7L, 5L, 4L, 1L, 5L, 4L, 7L, 4L, 3L, 5L, 6L, 3L, 5L, 6L, 6L, 6L, 7L, 6L, 6L, 6L, 5L, 5L, 7L, 7L, 3L, 5L, 6L, 7L, 5L, 6L, 7L, 3L, 5L, 6L, 6L, 6L, 6L, 6L, 7L, 6L, 6L, 5L, 7L, 7L, 7L, 5L, 6L, 5L, 3L, 5L, 6L, 6L, 6L, 6L, 7L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L), .Label = c("AH", "CC", "CH", "EC", "HE", "NC", "WC"), class = "factor"), result = c(1, 0, 0, 1, 0, 0, 0, 0.5, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0.5, 0, 0, 0, 0, 0, 0.5, 1, 0, 0, 0, 0.5, 0, 0, 0.5, 0, 0.5, 0, 1, 0, 0.5, 0, 1, 0, 0, 1, 0, 0, 0.5, 1, 0, 0, 1, 0, 0, 0, 0.5, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0.5, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0.5, 1, 0.5, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0.5, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0.5, 0, 0, 0.5, 1, 0, 0.5, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0.5, 0.5, 0, 0, 0.5, 1, 0, 0, 1, 0, 1, 0.5, 1, 0, 1, 0.5, 0, 1, 1, 1, 0, 1, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0.5, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0.5, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0.5, 1, 0, 0, 0, 0.5, 1, 0.5, 1, 0, 0.5, 0.5, 1, 0, 0, 0, 0, 1, 0.5, 1, 0, 1, 0.5, 1, 1, 1, 0, 0, 1, 0, 0.5, 1, 0, 0, 1, 0.5, 0.5, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0.5, 1, 0.5, 0, 0, 0.5, 1, 1, 0, 0, 0.5, 0, 0.5, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 1, 0, 1, 1, 0.5, 0.5, 0, 0, 0.5, 0, 1, 0.5, 0, 1, 1, 0.5, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0.5, 0, 0, 1, 1, 0.5, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0.5, 0, 1, 0, 0.5, 1, 0.5, 0, 0, 1, 1, 0.5, 0, 0.5, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0.5, 0.5, 1, 1, 0.5, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0.5, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0.5, 0, 0.5, 0, 0, 0, 1, 1, 1, 0.5, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0.5, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0.5, 1, 1, 0.5, 0, 0, 1, 0.5, 0, 0.5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0.5, 1, 0.5, 0, 0, 0, 0, 0, 1, 0, 0.5, 1, 0, 0, 0.5, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0.5, 0, 0, 1, 0, 0, 1, 0.5, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0.5, 1, 1, 1, 0, 1, 1, 0, 0.5, 1, 0.5, 1, 0.5, 0, 1, 0.5, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0.5, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0.5, 0, 1, 1, 0, 0.5, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0.5, 1, 1, 0, 0, 1, 0.5, 0, 1, 1, 0, 0.5, 0, 1, 0, 1, 0.5, 1, 0, 0.5, 1, 0, 1, 0.5, 1, 0, 0, 0, 1, 0, 1, 0, 0.5, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0.5, 0, 1, 0, 1, 0.5, 0.5, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0.5, 0, 0, 0, 0, 0.5, 1, 0, 0.5, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0.5, 0, 0.5, 1, 0.5, 1, 0, 1, 0, 1, 0, 0, 0, 0.5, 1, 0, 0, 0, 0, 0.5, 1, 0, 1, 0, 0, 0, 0, 0, 0.5, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0.5, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0.5, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0.5, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0.5, 0, 0, 0, 0, 0.5, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0.5, 0.5, 1, 0, 0, 0, 1, 0, 0.5, 1, 0.5, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0.5, 1, 0, 0, 0, 0, 0, 0.5, 1, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 1, 0.5, 0, 0.5, 0, 1, 0, 0.5, 0, 0.5, 0, 1, 0, 0.5, 0, 0.5, 0, 1, 0, 1, 0, 0, 1, 0.5, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0.5, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0.5, 0, 0, 0.5, 0, 0.5, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0.5, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1), home.ice = c(TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE)), .Names = c("date", "visitor", "v_goals", "opponent", "o_goals", "conference", "result", "home.ice" ), row.names = c(1L, 2L, 3L, 4L, 5L, 6L, 8L, 7L, 9L, 10L, 12L, 13L, 15L, 14L, 16L, 17L, 18L, 19L, 11L, 20L, 21L, 22L, 23L, 25L, 24L, 26L, 27L, 29L, 30L, 32L, 31L, 34L, 33L, 35L, 36L, 28L, 37L, 38L, 39L, 40L, 41L, 42L, 44L, 43L, 46L, 45L, 47L, 48L, 49L, 51L, 53L, 50L, 54L, 52L, 55L, 56L, 57L, 59L, 60L, 58L, 61L, 62L, 63L, 64L, 66L, 65L, 67L, 69L, 71L, 68L, 72L, 70L, 73L, 74L, 76L, 77L, 75L, 78L, 79L, 80L, 81L, 82L, 84L, 83L, 85L, 86L, 87L, 91L, 89L, 90L, 88L, 92L, 93L, 94L, 95L, 97L, 96L, 98L, 99L, 100L, 101L, 102L, 103L, 104L, 105L, 109L, 110L, 111L, 107L, 108L, 106L, 112L, 113L, 114L, 115L, 117L, 116L, 118L, 119L, 120L, 121L, 122L, 123L, 124L, 125L, 126L, 127L, 129L, 128L, 130L, 131L, 132L, 133L, 135L, 134L, 136L, 137L, 141L, 140L, 138L, 139L, 142L, 143L, 144L, 145L, 146L, 148L, 147L, 149L, 150L, 151L, 152L, 153L, 155L, 154L, 156L, 157L, 158L, 161L, 164L, 162L, 160L, 163L, 165L, 166L, 167L, 168L, 169L, 171L, 170L, 159L, 172L, 174L, 173L, 176L, 175L, 177L, 178L, 179L, 184L, 180L, 182L, 181L, 183L, 185L, 186L, 188L, 187L, 190L, 189L, 191L, 194L, 195L, 192L, 198L, 199L, 196L, 197L, 200L, 201L, 202L, 193L, 203L, 204L, 205L, 208L, 206L, 207L, 209L, 211L, 210L, 212L, 213L, 215L, 216L, 217L, 214L, 218L, 219L, 220L, 221L, 222L, 223L, 224L, 225L, 226L, 227L, 228L, 229L, 231L, 233L, 230L, 232L, 234L, 236L, 235L, 237L, 238L, 239L, 240L, 241L, 245L, 243L, 244L, 246L, 248L, 249L, 247L, 251L, 250L, 242L, 252L, 253L, 255L, 256L, 254L, 257L, 258L, 260L, 259L, 262L, 261L, 263L, 264L, 265L, 266L, 269L, 271L, 270L, 268L, 272L, 274L, 273L, 275L, 276L, 278L, 277L, 267L, 279L, 280L, 281L, 282L, 283L, 284L, 285L, 286L, 287L, 288L, 289L, 290L, 291L, 292L, 293L, 294L, 296L, 295L, 297L, 298L, 299L, 302L, 300L, 301L, 304L, 303L, 305L, 306L, 307L, 308L, 309L, 310L, 311L, 312L, 313L, 314L, 315L, 316L, 317L, 318L, 319L, 320L, 325L, 324L, 322L, 321L, 323L, 329L, 328L, 326L, 327L, 330L, 331L, 332L, 333L, 334L, 335L, 336L, 337L, 338L, 340L, 339L, 341L, 342L, 343L, 345L, 344L, 346L, 347L, 348L, 350L, 349L, 351L, 352L, 353L, 355L, 356L, 357L, 354L, 358L, 359L, 360L, 361L, 362L, 363L, 364L, 365L, 366L, 368L, 367L, 369L, 370L, 371L, 372L, 374L, 375L, 373L, 376L, 377L, 378L, 379L, 380L, 381L, 382L, 383L, 384L, 385L, 386L, 387L, 388L, 389L, 391L, 390L, 393L, 392L, 395L, 394L, 397L, 396L, 398L, 399L, 403L, 401L, 402L, 404L, 407L, 408L, 405L, 406L, 409L, 410L, 411L, 412L, 413L, 415L, 414L, 400L, 416L, 417L, 419L, 418L, 421L, 420L, 423L, 422L, 425L, 424L, 426L, 427L, 432L, 430L, 428L, 429L, 431L, 434L, 435L, 433L, 436L, 437L, 438L, 439L, 440L, 442L, 441L, 443L, 444L, 445L, 446L, 447L, 448L, 449L, 450L, 451L, 453L, 455L, 452L, 456L, 454L, 459L, 457L, 458L, 460L, 461L, 462L, 463L, 464L, 465L, 466L, 467L, 468L, 469L, 472L, 471L, 470L, 475L, 476L, 473L, 474L, 477L, 478L, 479L, 480L, 481L, 482L, 483L, 484L, 485L, 486L, 487L, 488L, 489L, 490L, 491L, 492L, 493L, 494L, 495L, 496L, 497L, 499L, 498L, 500L, 501L, 502L, 503L, 504L, 505L, 506L, 507L, 510L, 508L, 509L, 511L, 513L, 512L, 514L, 515L, 516L, 517L, 518L, 519L, 521L, 522L, 520L, 523L, 524L, 525L, 526L, 527L, 529L, 528L, 530L, 531L, 532L, 534L, 535L, 533L, 536L, 537L, 538L, 539L, 540L, 542L, 544L, 541L, 543L, 545L, 547L, 546L, 548L, 550L, 549L, 551L, 553L, 552L, 554L, 559L, 557L, 556L, 558L, 560L, 561L, 555L, 562L, 564L, 566L, 563L, 565L, 567L, 568L, 570L, 569L, 571L, 572L, 574L, 573L, 575L, 576L, 580L, 578L, 579L, 581L, 577L, 582L, 583L, 584L, 585L, 587L, 586L, 588L, 589L, 590L, 591L, 593L, 594L, 596L, 592L, 595L, 597L, 599L, 600L, 598L, 601L, 602L, 606L, 607L, 608L, 605L, 603L, 611L, 609L, 610L, 612L, 613L, 604L, 614L, 616L, 617L, 618L, 615L, 619L, 621L, 620L, 622L, 623L, 624L, 625L, 626L, 629L, 627L, 630L, 632L, 631L, 628L, 635L, 633L, 634L, 636L, 637L, 638L, 639L, 640L, 641L, 642L, 645L, 643L, 644L, 647L, 646L, 648L, 650L, 649L, 651L, 653L, 652L, 657L, 655L, 656L, 654L, 659L, 661L, 658L, 660L, 662L, 663L, 665L, 664L, 666L, 667L, 669L, 671L, 668L, 670L, 673L, 672L, 674L, 676L, 675L, 678L, 677L, 680L, 679L, 682L, 684L, 681L, 683L, 685L, 686L, 687L, 689L, 688L, 690L, 691L, 692L, 693L, 694L, 695L, 696L, 697L, 698L, 699L, 700L, 701L, 702L, 705L, 703L, 706L, 708L, 707L, 710L, 709L, 711L, 712L, 713L, 714L, 716L, 715L, 718L, 717L, 719L, 704L, 720L, 721L, 722L, 723L, 724L, 725L, 726L, 727L, 728L, 731L, 729L, 734L, 730L, 733L, 732L, 735L, 736L, 737L, 739L, 738L, 741L, 740L, 742L, 743L, 744L, 745L, 746L, 747L, 748L, 749L, 751L, 750L, 752L, 755L, 753L, 754L, 756L, 758L, 757L, 759L, 760L, 761L, 762L, 765L, 764L, 763L, 767L, 766L, 768L, 769L, 770L, 771L, 772L, 773L, 774L, 775L, 776L, 777L, 779L, 778L, 780L, 782L, 781L, 783L, 784L, 788L, 789L, 787L, 785L, 790L, 791L, 792L, 793L, 794L, 795L, 786L, 796L, 797L, 798L, 799L, 800L, 801L, 802L, 804L, 803L, 805L, 806L, 807L, 808L, 809L, 814L, 812L, 810L, 813L, 811L, 815L, 816L, 817L, 818L, 819L, 820L, 821L, 822L, 823L, 824L, 825L, 827L, 826L, 828L, 830L, 829L, 831L, 832L, 835L, 833L, 840L, 838L, 836L, 839L, 837L, 841L, 842L, 843L, 844L, 845L, 834L, 846L, 847L, 848L, 849L, 851L, 850L, 852L, 855L, 853L, 854L, 858L, 857L, 856L, 859L, 860L, 861L, 862L, 865L, 866L, 863L, 868L, 869L, 867L, 870L, 871L, 872L, 873L, 874L, 875L, 864L, 876L, 878L, 877L, 879L, 880L, 883L, 882L, 881L, 885L, 884L, 886L, 887L, 891L, 890L, 888L, 893L, 894L, 892L, 895L, 896L, 897L, 898L, 899L, 900L, 902L, 901L, 903L, 889L, 904L, 905L, 906L, 907L, 908L, 909L, 910L, 911L, 912L, 914L, 917L, 915L, 913L, 916L, 918L, 920L, 919L, 921L, 922L, 924L, 923L, 925L, 926L, 928L, 927L, 929L, 930L, 931L, 932L, 933L, 934L, 935L, 936L, 938L, 937L, 939L, 940L, 941L, 942L, 944L, 943L, 945L, 947L, 946L, 948L, 951L, 952L, 954L, 953L, 950L, 955L, 956L, 957L, 958L, 959L, 949L, 960L, 961L, 962L, 963L, 964L, 965L, 968L, 966L, 967L, 969L, 971L, 970L, 972L, 973L, 974L, 975L, 978L, 977L, 976L, 980L, 979L, 981L, 982L, 983L, 984L, 985L, 987L, 988L, 986L, 989L, 990L, 991L, 993L, 992L, 994L, 995L, 996L, 998L, 997L, 999L, 1000L, 1001L, 1002L, 1003L, 1004L, 1005L, 1006L, 1007L, 1008L, 1009L, 1011L, 1012L, 1010L, 1013L, 1014L, 1016L, 1015L, 1017L, 1019L, 1018L, 1020L, 1026L, 1023L, 1022L, 1025L, 1024L, 1027L, 1028L, 1029L, 1030L, 1031L, 1021L, 1032L, 1034L, 1033L, 1035L, 1036L, 1038L, 1037L, 1039L, 1041L, 1040L, 1042L, 1048L, 1045L, 1044L, 1047L, 1046L, 1049L, 1050L, 1051L, 1052L, 1053L, 1054L, 1043L, 1055L, 1056L, 1057L, 1061L, 1060L, 1059L, 1062L, 1063L, 1058L, 1064L, 1065L, 1066L, 1067L, 1068L, 1069L, 1070L, 1071L, 1072L, 1073L, 1074L, 1075L, 1076L, 1077L, 1078L, 1079L, 1080L, 1081L, 1082L, 1083L ), class = "data.frame") BradleyTerry2/data/springall.R0000744000176200001440000000367013152515662016010 0ustar liggesusersspringall <- structure(list(contests = structure(list(row = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 7L, 7L, 8L), .Label = c("1", "2", "3", "4", "5", "6", "7", "8", "9"), class = "factor"), col = structure(c(2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 4L, 5L, 6L, 7L, 8L, 9L, 5L, 6L, 7L, 8L, 9L, 6L, 7L, 8L, 9L, 7L, 8L, 9L, 8L, 9L, 9L), .Label = c("1", "2", "3", "4", "5", "6", "7", "8", "9"), class = "factor"), win = c(2L, 0L, 5L, 2L, 0L, 12L, 13L, 10L, 6L, 17L, 9L, 8L, 24L, 16L, 15L, 22L, 14L, 11L, 24L, 22L, 17L, 3L, 2L, 12L, 10L, 8L, 5L, 20L, 15L, 13L, 27L, 19L, 18L, 2L, 2L, 6L), loss = c(16L, 21L, 10L, 15L, 22L, 3L, 9L, 12L, 11L, 2L, 6L, 12L, 2L, 3L, 2L, 2L, 4L, 4L, 1L, 0L, 2L, 14L, 18L, 6L, 6L, 9L, 11L, 0L, 2L, 5L, 1L, 1L, 2L, 13L, 21L, 8L), tie = c(7L, 1L, 10L, 7L, 2L, 9L, 5L, 3L, 5L, 7L, 8L, 6L, 0L, 5L, 7L, 4L, 5L, 7L, 3L, 3L, 6L, 9L, 4L, 5L, 6L, 11L, 9L, 2L, 5L, 9L, 0L, 5L, 2L, 8L, 5L, 8L), win.adj = c(5.5, 0.5, 10, 5.5, 1, 16.5, 15.5, 11.5, 8.5, 20.5, 13, 11, 24, 18.5, 18.5, 24, 16.5, 14.5, 25.5, 23.5, 20, 7.5, 4, 14.5, 13, 13.5, 9.5, 21, 17.5, 17.5, 27, 21.5, 19, 6, 4.5, 10), loss.adj = c(19.5, 21.5, 15, 18.5, 23, 7.5, 11.5, 13.5, 13.5, 5.5, 10, 15, 2, 5.5, 5.5, 4, 6.5, 7.5, 2.5, 1.5, 5, 18.5, 20, 8.5, 9, 14.5, 15.5, 1, 4.5, 9.5, 1, 3.5, 3, 17, 23.5, 12)), .Names = c("row", "col", "win", "loss", "tie", "win.adj", "loss.adj"), row.names = c(NA, -36L), class = "data.frame"), predictors = structure(list(flav = c(0.6, 4.8, 9, 0.6, 4.8, 9, 0.6, 4.8, 9), gel = c(0, 0, 0, 2.4, 2.4, 2.4, 4.8, 4.8, 4.8), flav.2 = c(0.36, 23.04, 81, 0.36, 23.04, 81, 0.36, 23.04, 81), gel.2 = c(0, 0, 0, 5.76, 5.76, 5.76, 23.04, 23.04, 23.04), flav.gel = c(0, 0, 0, 1.44, 11.52, 21.6, 2.88, 23.04, 43.2)), .Names = c("flav", "gel", "flav.2", "gel.2", "flav.gel" ), row.names = c(NA, -9L), class = "data.frame")), .Names = c("contests", "predictors")) BradleyTerry2/data/football.RData0000744000176200001440000001420313152515662016403 0ustar liggesusersyW$%TdY@a}ԿII( 쿴Mh1ABη+pN=[J"@q:=SX˿nɵ'o\|Ƶqr׷zo?|xrro>}×˱˱P˱˱kQ˱˱kQn˱˱kQywgQ{Gp:7M{.:o'ev>z|/'?X~2|65\y`39坁eW˭݁7qsqu^9|>G#F#Gwu~|ӁÁptGwa2c??X^2>y56??[ٽOvO.?nowsn_v_.w|ߗ~nj:_}~v>>f{_g_~v꼾T//e]&2 ȴ,v-K/qY's}68[s18s\o\7xpsn\ I#Ϛ['WUW\o \՚u|\|\7xps\uspzoplo^ƺ|}d7~MZ~Kc\=l}8$WWU㪹ukpۃs}:8s78 eкB\[^-Xk?Z- }8$WWU㪹>\=8s}08{,[|tr|?*ׇs譹~<8gs]6_%ךN38׵Wuwp[s>qxE|?z˶s_{[skxx~:-g{غn~{-}꫎}k/zgk[zxlV_og<˶Cz~u<懵7Uk۱uqvX;[yoU/y8gΧ~:t^:r/_}^}ggm{?t>i׫-|~=?lW:ʿ~׏Z緎mzkT?|[{Wo~5_+oe]7O^lX;o_C}뺇Sz5Gk~/Zש?Wwh>_ovϫʷv>߻Ρmz_[[om[Uo/7?/mm{^?:NӲ׏~(˶[/O׎}nm{_uY;>zlϭ^?y~՛޽lq?>{\븕gxq?me[;mqozWxz]{~oԼkC,[[ڛgWǷ޶v8[߫׶w}~zv|y^νquxW+ޛڛ:_mזmwh]oZ;|W=ڿV|kۣmߺ~ӛ/[νX>Źc鍯ZmmkwZ;?m[mZ;?z<9yg7?}G˟ã?.|wk9'>miz?O_]G}q7q~ccWǡ:qsu\?ξuvw:@:P&UML5j0`Ts u&uM5kRפICMjP,` 5XB kX,UkXƚ4֤&5iISMjT,`K5Xr k\,Uk\4פ&-5iIKMZjR`+5\5\5\u5\5鼐ڧL#dD " @8p wz9yfY`fY`B 4 B 4 BV f999999999000000000V#333333333 Q3g A8p `+V[ lӚY/_[ l%J`+V[ l%J`+V/_ l%J`+V[ l%J`+V  %K`+V[ l%J`+V[ lfK/_[ l%J`+V[ l%J`+` %J`+V [2leʰa+V [2 2e˰a+V [2leʰa+V [2le>>>>>>>>555555555#3Q3Q3Q3Q3Q3Q3Q3Q%BBBBBBBBBB3QgD:u<P(CQ E2[[B(Q(CQ E2e(P(CQ E2e(po5o ~(Q(CQ E2e(P(CQ E2eHO?~(P(CQ E2e(P(CQ E2[; яE?~e(P(CQ E2e(P(CQ E-!ӏE?2e(P)CS M24ehД)CS ͇EӏM?~4ehД)CS M24ehД)CS MM?~4hД)CS M24ehД)CS M24M?~4hє)CS M24ehД)CS M24eh>,~4h)CS M24ehД)CS M24eh|X4hjtojˀ@:@u~[{"F$ IdD "`fP BP  BШU`6 f`6####j59999999999000000000V33333    g A8pZ,~4h)CS M24ehД)CS M24eh|X4hGS M24ehД)CS M24ehДhGӏ M24ehД)CS M24ehД)CaGӏy7yo5eu ԁXRuԁ: 潊L#$D2"0 @( @( AhԪl0l0@@@@@AAAAQ3g A8pF`>>>>>>>>555555555#3Q3Q3Q3Q3Q3Q3Q3Q%BBBBBBBBBB3QgD:u<P(CQ E2_; ^Gя E2e(P(CQ E2e(P(ռGяE2e(P(CQ E2e(P(CQyg!ӏE?2e(P(CQ E2e(P(CQ W^GяE?2e(P(CQ E2e(P(CQ WBE?~e(P(CS M24ehД)CS MM?~4hД)CS M24ehД)CS M24M?~4hє)CS M24ehД)CS M24eh>,~4h)CS M24ehД)CS M24eh|X4hGS M24ehД)CS M24ehДhy''yQz(bb"BradleyTerry2/data/CEMS.R0000744000176200001440000044206513152515662014551 0ustar liggesusersCEMS <- structure(list(preferences = structure(list(student = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 46, 46, 46, 46, 46, 46, 46, 46, 46, 46, 46, 46, 46, 46, 46, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 60, 60, 60, 60, 60, 60, 60, 60, 60, 60, 60, 60, 60, 60, 60, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 70, 70, 70, 70, 70, 70, 70, 70, 70, 70, 70, 70, 70, 70, 70, 71, 71, 71, 71, 71, 71, 71, 71, 71, 71, 71, 71, 71, 71, 71, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 73, 73, 73, 73, 73, 73, 73, 73, 73, 73, 73, 73, 73, 73, 73, 74, 74, 74, 74, 74, 74, 74, 74, 74, 74, 74, 74, 74, 74, 74, 75, 75, 75, 75, 75, 75, 75, 75, 75, 75, 75, 75, 75, 75, 75, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 77, 77, 77, 77, 77, 77, 77, 77, 77, 77, 77, 77, 77, 77, 77, 78, 78, 78, 78, 78, 78, 78, 78, 78, 78, 78, 78, 78, 78, 78, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 81, 81, 81, 81, 81, 81, 81, 81, 81, 81, 81, 81, 81, 81, 81, 82, 82, 82, 82, 82, 82, 82, 82, 82, 82, 82, 82, 82, 82, 82, 83, 83, 83, 83, 83, 83, 83, 83, 83, 83, 83, 83, 83, 83, 83, 84, 84, 84, 84, 84, 84, 84, 84, 84, 84, 84, 84, 84, 84, 84, 85, 85, 85, 85, 85, 85, 85, 85, 85, 85, 85, 85, 85, 85, 85, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 87, 87, 87, 87, 87, 87, 87, 87, 87, 87, 87, 87, 87, 87, 87, 88, 88, 88, 88, 88, 88, 88, 88, 88, 88, 88, 88, 88, 88, 88, 89, 89, 89, 89, 89, 89, 89, 89, 89, 89, 89, 89, 89, 89, 89, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 91, 91, 91, 91, 91, 91, 91, 91, 91, 91, 91, 91, 91, 91, 91, 92, 92, 92, 92, 92, 92, 92, 92, 92, 92, 92, 92, 92, 92, 92, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93, 94, 94, 94, 94, 94, 94, 94, 94, 94, 94, 94, 94, 94, 94, 94, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 101, 101, 101, 101, 101, 101, 101, 101, 101, 101, 101, 101, 101, 101, 101, 102, 102, 102, 102, 102, 102, 102, 102, 102, 102, 102, 102, 102, 102, 102, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 104, 104, 104, 104, 104, 104, 104, 104, 104, 104, 104, 104, 104, 104, 104, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 106, 106, 106, 106, 106, 106, 106, 106, 106, 106, 106, 106, 106, 106, 106, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 108, 108, 108, 108, 108, 108, 108, 108, 108, 108, 108, 108, 108, 108, 108, 109, 109, 109, 109, 109, 109, 109, 109, 109, 109, 109, 109, 109, 109, 109, 110, 110, 110, 110, 110, 110, 110, 110, 110, 110, 110, 110, 110, 110, 110, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 113, 113, 113, 113, 113, 113, 113, 113, 113, 113, 113, 113, 113, 113, 113, 114, 114, 114, 114, 114, 114, 114, 114, 114, 114, 114, 114, 114, 114, 114, 115, 115, 115, 115, 115, 115, 115, 115, 115, 115, 115, 115, 115, 115, 115, 116, 116, 116, 116, 116, 116, 116, 116, 116, 116, 116, 116, 116, 116, 116, 117, 117, 117, 117, 117, 117, 117, 117, 117, 117, 117, 117, 117, 117, 117, 118, 118, 118, 118, 118, 118, 118, 118, 118, 118, 118, 118, 118, 118, 118, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 120, 120, 120, 120, 120, 120, 120, 120, 120, 120, 120, 120, 120, 120, 120, 121, 121, 121, 121, 121, 121, 121, 121, 121, 121, 121, 121, 121, 121, 121, 122, 122, 122, 122, 122, 122, 122, 122, 122, 122, 122, 122, 122, 122, 122, 123, 123, 123, 123, 123, 123, 123, 123, 123, 123, 123, 123, 123, 123, 123, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 126, 126, 126, 126, 126, 126, 126, 126, 126, 126, 126, 126, 126, 126, 126, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 129, 129, 129, 129, 129, 129, 129, 129, 129, 129, 129, 129, 129, 129, 129, 130, 130, 130, 130, 130, 130, 130, 130, 130, 130, 130, 130, 130, 130, 130, 131, 131, 131, 131, 131, 131, 131, 131, 131, 131, 131, 131, 131, 131, 131, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 133, 133, 133, 133, 133, 133, 133, 133, 133, 133, 133, 133, 133, 133, 133, 134, 134, 134, 134, 134, 134, 134, 134, 134, 134, 134, 134, 134, 134, 134, 135, 135, 135, 135, 135, 135, 135, 135, 135, 135, 135, 135, 135, 135, 135, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 137, 137, 137, 137, 137, 137, 137, 137, 137, 137, 137, 137, 137, 137, 137, 138, 138, 138, 138, 138, 138, 138, 138, 138, 138, 138, 138, 138, 138, 138, 139, 139, 139, 139, 139, 139, 139, 139, 139, 139, 139, 139, 139, 139, 139, 140, 140, 140, 140, 140, 140, 140, 140, 140, 140, 140, 140, 140, 140, 140, 141, 141, 141, 141, 141, 141, 141, 141, 141, 141, 141, 141, 141, 141, 141, 142, 142, 142, 142, 142, 142, 142, 142, 142, 142, 142, 142, 142, 142, 142, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 144, 144, 144, 144, 144, 144, 144, 144, 144, 144, 144, 144, 144, 144, 144, 145, 145, 145, 145, 145, 145, 145, 145, 145, 145, 145, 145, 145, 145, 145, 146, 146, 146, 146, 146, 146, 146, 146, 146, 146, 146, 146, 146, 146, 146, 147, 147, 147, 147, 147, 147, 147, 147, 147, 147, 147, 147, 147, 147, 147, 148, 148, 148, 148, 148, 148, 148, 148, 148, 148, 148, 148, 148, 148, 148, 149, 149, 149, 149, 149, 149, 149, 149, 149, 149, 149, 149, 149, 149, 149, 150, 150, 150, 150, 150, 150, 150, 150, 150, 150, 150, 150, 150, 150, 150, 151, 151, 151, 151, 151, 151, 151, 151, 151, 151, 151, 151, 151, 151, 151, 152, 152, 152, 152, 152, 152, 152, 152, 152, 152, 152, 152, 152, 152, 152, 153, 153, 153, 153, 153, 153, 153, 153, 153, 153, 153, 153, 153, 153, 153, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154, 155, 155, 155, 155, 155, 155, 155, 155, 155, 155, 155, 155, 155, 155, 155, 156, 156, 156, 156, 156, 156, 156, 156, 156, 156, 156, 156, 156, 156, 156, 157, 157, 157, 157, 157, 157, 157, 157, 157, 157, 157, 157, 157, 157, 157, 158, 158, 158, 158, 158, 158, 158, 158, 158, 158, 158, 158, 158, 158, 158, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 161, 161, 161, 161, 161, 161, 161, 161, 161, 161, 161, 161, 161, 161, 161, 162, 162, 162, 162, 162, 162, 162, 162, 162, 162, 162, 162, 162, 162, 162, 163, 163, 163, 163, 163, 163, 163, 163, 163, 163, 163, 163, 163, 163, 163, 164, 164, 164, 164, 164, 164, 164, 164, 164, 164, 164, 164, 164, 164, 164, 165, 165, 165, 165, 165, 165, 165, 165, 165, 165, 165, 165, 165, 165, 165, 166, 166, 166, 166, 166, 166, 166, 166, 166, 166, 166, 166, 166, 166, 166, 167, 167, 167, 167, 167, 167, 167, 167, 167, 167, 167, 167, 167, 167, 167, 168, 168, 168, 168, 168, 168, 168, 168, 168, 168, 168, 168, 168, 168, 168, 169, 169, 169, 169, 169, 169, 169, 169, 169, 169, 169, 169, 169, 169, 169, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 171, 171, 171, 171, 171, 171, 171, 171, 171, 171, 171, 171, 171, 171, 171, 172, 172, 172, 172, 172, 172, 172, 172, 172, 172, 172, 172, 172, 172, 172, 173, 173, 173, 173, 173, 173, 173, 173, 173, 173, 173, 173, 173, 173, 173, 174, 174, 174, 174, 174, 174, 174, 174, 174, 174, 174, 174, 174, 174, 174, 175, 175, 175, 175, 175, 175, 175, 175, 175, 175, 175, 175, 175, 175, 175, 176, 176, 176, 176, 176, 176, 176, 176, 176, 176, 176, 176, 176, 176, 176, 177, 177, 177, 177, 177, 177, 177, 177, 177, 177, 177, 177, 177, 177, 177, 178, 178, 178, 178, 178, 178, 178, 178, 178, 178, 178, 178, 178, 178, 178, 179, 179, 179, 179, 179, 179, 179, 179, 179, 179, 179, 179, 179, 179, 179, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 181, 181, 181, 181, 181, 181, 181, 181, 181, 181, 181, 181, 181, 181, 181, 182, 182, 182, 182, 182, 182, 182, 182, 182, 182, 182, 182, 182, 182, 182, 183, 183, 183, 183, 183, 183, 183, 183, 183, 183, 183, 183, 183, 183, 183, 184, 184, 184, 184, 184, 184, 184, 184, 184, 184, 184, 184, 184, 184, 184, 185, 185, 185, 185, 185, 185, 185, 185, 185, 185, 185, 185, 185, 185, 185, 186, 186, 186, 186, 186, 186, 186, 186, 186, 186, 186, 186, 186, 186, 186, 187, 187, 187, 187, 187, 187, 187, 187, 187, 187, 187, 187, 187, 187, 187, 188, 188, 188, 188, 188, 188, 188, 188, 188, 188, 188, 188, 188, 188, 188, 189, 189, 189, 189, 189, 189, 189, 189, 189, 189, 189, 189, 189, 189, 189, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 191, 191, 191, 191, 191, 191, 191, 191, 191, 191, 191, 191, 191, 191, 191, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 193, 193, 193, 193, 193, 193, 193, 193, 193, 193, 193, 193, 193, 193, 193, 194, 194, 194, 194, 194, 194, 194, 194, 194, 194, 194, 194, 194, 194, 194, 195, 195, 195, 195, 195, 195, 195, 195, 195, 195, 195, 195, 195, 195, 195, 196, 196, 196, 196, 196, 196, 196, 196, 196, 196, 196, 196, 196, 196, 196, 197, 197, 197, 197, 197, 197, 197, 197, 197, 197, 197, 197, 197, 197, 197, 198, 198, 198, 198, 198, 198, 198, 198, 198, 198, 198, 198, 198, 198, 198, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 201, 201, 201, 201, 201, 201, 201, 201, 201, 201, 201, 201, 201, 201, 201, 202, 202, 202, 202, 202, 202, 202, 202, 202, 202, 202, 202, 202, 202, 202, 203, 203, 203, 203, 203, 203, 203, 203, 203, 203, 203, 203, 203, 203, 203, 204, 204, 204, 204, 204, 204, 204, 204, 204, 204, 204, 204, 204, 204, 204, 205, 205, 205, 205, 205, 205, 205, 205, 205, 205, 205, 205, 205, 205, 205, 206, 206, 206, 206, 206, 206, 206, 206, 206, 206, 206, 206, 206, 206, 206, 207, 207, 207, 207, 207, 207, 207, 207, 207, 207, 207, 207, 207, 207, 207, 208, 208, 208, 208, 208, 208, 208, 208, 208, 208, 208, 208, 208, 208, 208, 209, 209, 209, 209, 209, 209, 209, 209, 209, 209, 209, 209, 209, 209, 209, 210, 210, 210, 210, 210, 210, 210, 210, 210, 210, 210, 210, 210, 210, 210, 211, 211, 211, 211, 211, 211, 211, 211, 211, 211, 211, 211, 211, 211, 211, 212, 212, 212, 212, 212, 212, 212, 212, 212, 212, 212, 212, 212, 212, 212, 213, 213, 213, 213, 213, 213, 213, 213, 213, 213, 213, 213, 213, 213, 213, 214, 214, 214, 214, 214, 214, 214, 214, 214, 214, 214, 214, 214, 214, 214, 215, 215, 215, 215, 215, 215, 215, 215, 215, 215, 215, 215, 215, 215, 215, 216, 216, 216, 216, 216, 216, 216, 216, 216, 216, 216, 216, 216, 216, 216, 217, 217, 217, 217, 217, 217, 217, 217, 217, 217, 217, 217, 217, 217, 217, 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, 219, 219, 219, 219, 219, 219, 219, 219, 219, 219, 219, 219, 219, 219, 219, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 221, 221, 221, 221, 221, 221, 221, 221, 221, 221, 221, 221, 221, 221, 221, 222, 222, 222, 222, 222, 222, 222, 222, 222, 222, 222, 222, 222, 222, 222, 223, 223, 223, 223, 223, 223, 223, 223, 223, 223, 223, 223, 223, 223, 223, 224, 224, 224, 224, 224, 224, 224, 224, 224, 224, 224, 224, 224, 224, 224, 225, 225, 225, 225, 225, 225, 225, 225, 225, 225, 225, 225, 225, 225, 225, 226, 226, 226, 226, 226, 226, 226, 226, 226, 226, 226, 226, 226, 226, 226, 227, 227, 227, 227, 227, 227, 227, 227, 227, 227, 227, 227, 227, 227, 227, 228, 228, 228, 228, 228, 228, 228, 228, 228, 228, 228, 228, 228, 228, 228, 229, 229, 229, 229, 229, 229, 229, 229, 229, 229, 229, 229, 229, 229, 229, 230, 230, 230, 230, 230, 230, 230, 230, 230, 230, 230, 230, 230, 230, 230, 231, 231, 231, 231, 231, 231, 231, 231, 231, 231, 231, 231, 231, 231, 231, 232, 232, 232, 232, 232, 232, 232, 232, 232, 232, 232, 232, 232, 232, 232, 233, 233, 233, 233, 233, 233, 233, 233, 233, 233, 233, 233, 233, 233, 233, 234, 234, 234, 234, 234, 234, 234, 234, 234, 234, 234, 234, 234, 234, 234, 235, 235, 235, 235, 235, 235, 235, 235, 235, 235, 235, 235, 235, 235, 235, 236, 236, 236, 236, 236, 236, 236, 236, 236, 236, 236, 236, 236, 236, 236, 237, 237, 237, 237, 237, 237, 237, 237, 237, 237, 237, 237, 237, 237, 237, 238, 238, 238, 238, 238, 238, 238, 238, 238, 238, 238, 238, 238, 238, 238, 239, 239, 239, 239, 239, 239, 239, 239, 239, 239, 239, 239, 239, 239, 239, 240, 240, 240, 240, 240, 240, 240, 240, 240, 240, 240, 240, 240, 240, 240, 241, 241, 241, 241, 241, 241, 241, 241, 241, 241, 241, 241, 241, 241, 241, 242, 242, 242, 242, 242, 242, 242, 242, 242, 242, 242, 242, 242, 242, 242, 243, 243, 243, 243, 243, 243, 243, 243, 243, 243, 243, 243, 243, 243, 243, 244, 244, 244, 244, 244, 244, 244, 244, 244, 244, 244, 244, 244, 244, 244, 245, 245, 245, 245, 245, 245, 245, 245, 245, 245, 245, 245, 245, 245, 245, 246, 246, 246, 246, 246, 246, 246, 246, 246, 246, 246, 246, 246, 246, 246, 247, 247, 247, 247, 247, 247, 247, 247, 247, 247, 247, 247, 247, 247, 247, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 249, 249, 249, 249, 249, 249, 249, 249, 249, 249, 249, 249, 249, 249, 249, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 251, 251, 251, 251, 251, 251, 251, 251, 251, 251, 251, 251, 251, 251, 251, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 252, 253, 253, 253, 253, 253, 253, 253, 253, 253, 253, 253, 253, 253, 253, 253, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 257, 257, 257, 257, 257, 257, 257, 257, 257, 257, 257, 257, 257, 257, 257, 258, 258, 258, 258, 258, 258, 258, 258, 258, 258, 258, 258, 258, 258, 258, 259, 259, 259, 259, 259, 259, 259, 259, 259, 259, 259, 259, 259, 259, 259, 260, 260, 260, 260, 260, 260, 260, 260, 260, 260, 260, 260, 260, 260, 260, 261, 261, 261, 261, 261, 261, 261, 261, 261, 261, 261, 261, 261, 261, 261, 262, 262, 262, 262, 262, 262, 262, 262, 262, 262, 262, 262, 262, 262, 262, 263, 263, 263, 263, 263, 263, 263, 263, 263, 263, 263, 263, 263, 263, 263, 264, 264, 264, 264, 264, 264, 264, 264, 264, 264, 264, 264, 264, 264, 264, 265, 265, 265, 265, 265, 265, 265, 265, 265, 265, 265, 265, 265, 265, 265, 266, 266, 266, 266, 266, 266, 266, 266, 266, 266, 266, 266, 266, 266, 266, 267, 267, 267, 267, 267, 267, 267, 267, 267, 267, 267, 267, 267, 267, 267, 268, 268, 268, 268, 268, 268, 268, 268, 268, 268, 268, 268, 268, 268, 268, 269, 269, 269, 269, 269, 269, 269, 269, 269, 269, 269, 269, 269, 269, 269, 270, 270, 270, 270, 270, 270, 270, 270, 270, 270, 270, 270, 270, 270, 270, 271, 271, 271, 271, 271, 271, 271, 271, 271, 271, 271, 271, 271, 271, 271, 272, 272, 272, 272, 272, 272, 272, 272, 272, 272, 272, 272, 272, 272, 272, 273, 273, 273, 273, 273, 273, 273, 273, 273, 273, 273, 273, 273, 273, 273, 274, 274, 274, 274, 274, 274, 274, 274, 274, 274, 274, 274, 274, 274, 274, 275, 275, 275, 275, 275, 275, 275, 275, 275, 275, 275, 275, 275, 275, 275, 276, 276, 276, 276, 276, 276, 276, 276, 276, 276, 276, 276, 276, 276, 276, 277, 277, 277, 277, 277, 277, 277, 277, 277, 277, 277, 277, 277, 277, 277, 278, 278, 278, 278, 278, 278, 278, 278, 278, 278, 278, 278, 278, 278, 278, 279, 279, 279, 279, 279, 279, 279, 279, 279, 279, 279, 279, 279, 279, 279, 280, 280, 280, 280, 280, 280, 280, 280, 280, 280, 280, 280, 280, 280, 280, 281, 281, 281, 281, 281, 281, 281, 281, 281, 281, 281, 281, 281, 281, 281, 282, 282, 282, 282, 282, 282, 282, 282, 282, 282, 282, 282, 282, 282, 282, 283, 283, 283, 283, 283, 283, 283, 283, 283, 283, 283, 283, 283, 283, 283, 284, 284, 284, 284, 284, 284, 284, 284, 284, 284, 284, 284, 284, 284, 284, 285, 285, 285, 285, 285, 285, 285, 285, 285, 285, 285, 285, 285, 285, 285, 286, 286, 286, 286, 286, 286, 286, 286, 286, 286, 286, 286, 286, 286, 286, 287, 287, 287, 287, 287, 287, 287, 287, 287, 287, 287, 287, 287, 287, 287, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 289, 289, 289, 289, 289, 289, 289, 289, 289, 289, 289, 289, 289, 289, 289, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 291, 291, 291, 291, 291, 291, 291, 291, 291, 291, 291, 291, 291, 291, 291, 292, 292, 292, 292, 292, 292, 292, 292, 292, 292, 292, 292, 292, 292, 292, 293, 293, 293, 293, 293, 293, 293, 293, 293, 293, 293, 293, 293, 293, 293, 294, 294, 294, 294, 294, 294, 294, 294, 294, 294, 294, 294, 294, 294, 294, 295, 295, 295, 295, 295, 295, 295, 295, 295, 295, 295, 295, 295, 295, 295, 296, 296, 296, 296, 296, 296, 296, 296, 296, 296, 296, 296, 296, 296, 296, 297, 297, 297, 297, 297, 297, 297, 297, 297, 297, 297, 297, 297, 297, 297, 298, 298, 298, 298, 298, 298, 298, 298, 298, 298, 298, 298, 298, 298, 298, 299, 299, 299, 299, 299, 299, 299, 299, 299, 299, 299, 299, 299, 299, 299, 300, 300, 300, 300, 300, 300, 300, 300, 300, 300, 300, 300, 300, 300, 300, 301, 301, 301, 301, 301, 301, 301, 301, 301, 301, 301, 301, 301, 301, 301, 302, 302, 302, 302, 302, 302, 302, 302, 302, 302, 302, 302, 302, 302, 302, 303, 303, 303, 303, 303, 303, 303, 303, 303, 303, 303, 303, 303, 303, 303), school1 = structure(c(2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L, 2L, 2L, 4L, 2L, 4L, 3L, 2L, 4L, 3L, 5L, 2L, 4L, 3L, 5L, 1L), .Label = c("Barcelona", "London", "Milano", "Paris", "St.Gallen", "Stockholm"), class = "factor"), school2 = structure(c(4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L, 4L, 3L, 3L, 5L, 5L, 5L, 1L, 1L, 1L, 1L, 6L, 6L, 6L, 6L, 6L), .Label = c("Barcelona", "London", "Milano", "Paris", "St.Gallen", "Stockholm"), class = "factor"), win1 = c(1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, NA, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, NA, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, NA, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, NA, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, NA, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, NA, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, NA, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, NA, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, NA, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, NA, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, NA, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, NA, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, NA, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, NA, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, NA, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, NA, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, NA, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, NA, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, NA, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, NA, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, NA, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, NA, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, NA, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, NA, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, NA, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, NA, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, NA, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, NA, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, NA, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, NA, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, NA, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, NA, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, NA, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, NA, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, NA, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, NA, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, NA, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1), win2 = c(0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, NA, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, NA, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, NA, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, NA, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, NA, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, NA, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, NA, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, NA, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, NA, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, NA, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, NA, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, NA, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, NA, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, NA, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, NA, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, NA, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, NA, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, NA, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, NA, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, NA, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, NA, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, NA, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, NA, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, NA, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, NA, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0), tied = c(0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, NA, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, NA, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, NA, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, NA, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, NA, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, NA, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, NA, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, NA, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, NA, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0), win1.adj = c(1, 1, NA, 0, 0, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 1, 1, NA, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0.5, 1, NA, 1, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 1, 1, NA, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, NA, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 1, 0, 0, NA, 1, 1, 1, 0, 1, 0.5, 0, 0.5, 1, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 1, 0.5, 1, 1, 1, 0, 1, 0.5, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0.5, NA, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 0.5, 0.5, 1, 1, NA, 0, 0, 0, 1, 0.5, 0.5, 1, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, NA, 1, 0, 1, 0, 0, 0, 0, 1, 0.5, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 0.5, 0.5, 1, 1, 0, 0, 1, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0.5, 0, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, NA, 0, 0, 0, 1, 1, 1, 1, 0, 0.5, 0, 0.5, 0, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0.5, 1, 1, NA, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, NA, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 0, 0, 1, 0.5, 0.5, 1, 0, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0.5, 0, 1, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0.5, 0, 0, 1, 0.5, NA, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, NA, 1, 1, 0, 0, 0, 0, 0, 1, 0.5, 0, 0, 1, 0, 0, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0.5, NA, 0, 0, 0, 0.5, 1, 0, 1, 0.5, 1, 0.5, 1, 1, 1, 1, NA, 1, 1, 0.5, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, NA, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, NA, 1, 0, 1, 1, 0, 0.5, 0, 0, 0, 1, 0, 0.5, 1, 0.5, NA, 1, 1, 1, 0.5, 0.5, 0.5, 0, 1, 0.5, 0.5, 0, 1, 1, 1, NA, 1, 1, 0.5, 1, 1, 0.5, 0, 0, 1, 0, 0, 0, 0.5, 1, NA, 0, 0.5, 1, 0, 0, 1, 0, 1, 1, 1, 0.5, 1, 0, 0, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, NA, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0, 0, 1, 0.5, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, NA, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0.5, 1, NA, 1, 1, 1, 0, 0.5, 0, 0, 1, 1, 1, 0, 1, 0.5, 1, NA, 1, 1, 0.5, 0, 0.5, 0, 0, 1, 1, 0, 0, 1, 1, 1, NA, 1, 0, 0, 0.5, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, NA, 0, 1, 0, 1, 1, 0.5, 1, 1, 1, 0, 1, 0, 1, 1, NA, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0.5, 1, 0, 0, 1, NA, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, NA, 0, 0, 0.5, 0.5, 0, 0.5, 0.5, 1, 0.5, 1, 1, 1, 0, 1, NA, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, NA, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, NA, 0, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 0, 1, NA, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0.5, 1, 1, 1, 1, 1, 1, 0, 0, NA, 0, 1, 1, 0, 0, 0.5, 0, 0.5, 1, 1, 0.5, 1, 1, 1, NA, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, NA, 1, 0, 1, 1, 0, 1, 0.5, 1, 0, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0.5, 0.5, NA, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 1, NA, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, NA, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0.5, 0, 0.5, 0, 0, NA, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, NA, 1, 0, 1, 1, 0, 1, 0, 1, 0.5, 1, 1, 1, 1, 1, NA, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, NA, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 0.5, 0, 0, 1, 1, 0.5, 1, 0.5, 0, 0, 0.5, 0, 1, 1, NA, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0.5, 1, 0, 0.5, NA, 0.5, 0.5, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, NA, 0, 0, 0, 1, 1, 0.5, 1, 0, 0, 0, 1, 0, 1, 1, NA, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, NA, 1, 1, 0.5, 1, 0, 0, 0, 1, 0.5, 0.5, 1, 1, 1, 1, NA, 1, 1, 0, 1, 0, 0.5, 0, 1, 1, 0.5, 0, 1, 1, 0.5, NA, 1, 0, 1, 1, 0, 1, 0, 0.5, 0, 0.5, 0, 0.5, 0, 1, NA, 0.5, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0, 0, 1, NA, 1, 1, 0.5, 1, 1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 1, 0.5, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0.5, 1, 0.5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0.5, 0, 0.5, 1, 0, 0.5, 0.5, 1, 0, 0, 0.5, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0.5, 0, 0, 1, 1, 0.5, 0, 1, 0.5, 1, 1, 0.5, 1, 0.5, 1, 0.5, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 0, 0, 1, 0.5, 0.5, 1, 1, 0, 0.5, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0.5, 1, 1, 0.5, 0.5, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0.5, 0, 1, 0, 0, 0.5, 0, 0.5, 1, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0.5, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 1, 0.5, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0.5, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0.5, 0.5, 0, 0, 1, 0.5, 0.5, 1, 1, 0.5, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0.5, 0.5, 1, 1, 0, 0, 1, 0, 0.5, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0.5, 0, 0.5, 0, 1, 1, 1, 1, 0, 0, 1, 0.5, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0.5, 1, 1, 0.5, 1, 0, 1, 1, 0.5, 1, 0.5, 0.5, 0, 0.5, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0.5, 0.5, 0, 0, 0, 0.5, 1, 0.5, 1, 0, 1, 0.5, 1, 0.5, 0, 1, 1, 0, 1, 1, 0.5, 0.5, 0.5, 0, 1, 1, 1, 0.5, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 0.5, 0.5, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0.5, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0.5, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 0, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 0, 0.5, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 1, 1, 1, 1, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 0.5, 0, 0, 0.5, 0.5, 0.5, 0, 0, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0.5, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0.5, 0, 0, 1, 1, 0.5, 1, 1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0.5, 0, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0, 0.5, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0.5, 0, 1, 1, 0.5, 1, 0, 0, 1, 0.5, 0.5, 1, 1, 0.5, 0, 1, 1, 1, 1, 0.5, 1, 0.5, 0.5, 1, 0, 0, 0, 1, 0.5, 0.5, 0.5, 1, 1, 0.5, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0.5, 0, 1, 0, 0.5, 0.5, 0.5, 0, 1, 1, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 1, 1, 1, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0, 0.5, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0.5, 1, 1, 0.5, 0.5, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0.5, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0.5, 1, 1, 0, 0, 0, 0.5, 0, 1, 1, 0, 0, 0, 1, 1, 0.5, 1, 1, 1, 1, 1, 0.5, 0, 1, 1, 0.5, 1, 0.5, 1, 1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 1, 0.5, 0, 0, 0, 0.5, 1, 1, 0, 0, 1, 0, 0.5, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0.5, 1, 1, 1, 0.5, 1, 1, 0.5, 1, 0.5, 0, 0, 1, 1, 0.5, 0, 1, 1, 1, 0.5, 0, 0, 0, 1, 0.5, 0.5, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 0.5, 0, 0, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 1, 1, 0.5, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0.5, 1, 0.5, 0.5, 1, 0, 1, 1, 0.5, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0.5, 1, 0.5, 1, 1, 0.5, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 0, 0, 0, 0, 0, 0, 1, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0.5, 1, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 1, 0.5, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0.5, 1, 0, 0.5, 0, 1, 1, 1, 0.5, 0.5, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0.5, 1, 1, 0, 0.5, 0, 1, 1, 0.5, 1, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 0.5, 1, 0, 1, 1, 1, 1, 1, 1, 0.5, 1, 0.5, 0, 1, 1, 0, 0, 0, 1, 1, 0.5, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 0.5, 1, 0.5, 0.5, 1, 0.5, 0.5, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0.5, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0.5, 0.5, 0.5, 1, 1, 1, 0.5, 0.5, 0.5, 0, 1, 1, 1, 0.5, 1, 0.5, 1, 1, 1, 1, 1, 0, 0.5, 0, 0, 0.5, 1, 1, 0, 1, 1, 1, 0.5, 1, 0.5, 1, 1, 0.5, 0.5, 1, 1, 0.5, 1, 0.5, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0.5, 1, 1, 1, 0, 1, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 0.5, 0.5, 0, 1, 1, 1, 1, 0.5, 0, 1, 0, 0, 1, 1, 0.5, 0.5, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0.5, 1, 1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 1, 1, 0, 1, 0.5, 0.5, 1, 0, 0, 0, 0.5, 0, 0, 0, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0.5, 0.5, 1, 1, 0.5, 1, 0.5, 1, 1, 1, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0.5, 1, 0.5, 0.5, 0.5, 1, 1, 1, 1, 0, 0.5, 0.5, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 0.5, 0.5, 1, 1, 0, 0.5, 0.5, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0.5, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0.5, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0.5, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 0, 1, 1, 0, 1, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 0.5, 1, 1, 1, 1, 0, 1, 1, 0.5, 1, 1, 1, 0, 0.5, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0.5, 1, 1, 0.5, 0.5, 0, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 0.5, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0.5, 1, 1, 0, 0, 1, 1, 0.5, 0.5, 1, 0.5, 0.5, 1, 0, 0, 0, 0.5, 1, 0.5, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0.5, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0.5, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0.5, 1, 0, 0, 1, 0.5, 0.5, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0.5, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0.5, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0.5, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0.5, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0.5, 0.5, 1, 0, 1, 0.5, 0.5, 0.5, 0, 0.5, 0, 0.5, 0, 0.5, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0.5, 0, 0, 0.5, 0.5, 0.5, 0, 0, 0.5, 0, 0.5, 1, 1, 1, 1), win2.adj = c(0, 0, NA, 1, 1, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 0, 0, NA, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0.5, 0, NA, 0, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 0, 0, NA, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, NA, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, NA, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 1, 0.5, 1, 1, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0.5, 1, 0.5, 0, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0.5, 0, 0, 0, 1, 0, 0.5, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0.5, NA, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0.5, 0.5, 0, 0, NA, 1, 1, 1, 0, 0.5, 0.5, 0, 1, 1, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, NA, 0, 1, 0, 1, 1, 1, 1, 0, 0.5, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 0.5, 0.5, 0, 0, 1, 1, 0, 1, 1, 1, NA, 1, 1, 0, 1, 1, 0.5, 1, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, NA, 1, 1, 1, 0, 0, 0, 0, 1, 0.5, 1, 0.5, 1, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0.5, 0, 0, NA, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 1, 1, 0, 0.5, 0.5, 0, 1, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, NA, 0, 0, 0, 1, 1, 0.5, 1, 0, 0, 0, 1, 0, 1, 1, NA, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0.5, 1, 1, 0, 0.5, NA, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 0, 1, 1, 1, 1, 1, 0, 0.5, 1, 1, 0, 1, 1, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0.5, NA, 1, 1, 1, 0.5, 0, 1, 0, 0.5, 0, 0.5, 0, 0, 0, 0, NA, 0, 0, 0.5, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, NA, 0, 1, 0, 0, 1, 0.5, 1, 1, 1, 0, 1, 0.5, 0, 0.5, NA, 0, 0, 0, 0.5, 0.5, 0.5, 1, 0, 0.5, 0.5, 1, 0, 0, 0, NA, 0, 0, 0.5, 0, 0, 0.5, 1, 1, 0, 1, 1, 1, 0.5, 0, NA, 1, 0.5, 0, 1, 1, 0, 1, 0, 0, 0, 0.5, 0, 1, 1, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, NA, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 0, 0.5, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, NA, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0.5, 0, NA, 0, 0, 0, 1, 0.5, 1, 1, 0, 0, 0, 1, 0, 0.5, 0, NA, 0, 0, 0.5, 1, 0.5, 1, 1, 0, 0, 1, 1, 0, 0, 0, NA, 0, 1, 1, 0.5, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, NA, 1, 0, 1, 0, 0, 0.5, 0, 0, 0, 1, 0, 1, 0, 0, NA, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0.5, 0, 1, 1, 0, NA, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, NA, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, NA, 1, 1, 0.5, 0.5, 1, 0.5, 0.5, 0, 0.5, 0, 0, 0, 1, 0, NA, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, NA, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, NA, 1, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 1, 0, NA, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, NA, 1, 1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 1, 1, NA, 1, 0, 0, 1, 1, 0.5, 1, 0.5, 0, 0, 0.5, 0, 0, 0, NA, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, NA, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, NA, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, NA, 0, 1, 0, 0, 1, 0, 0.5, 0, 1, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0.5, 0.5, NA, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0, NA, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, NA, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0.5, 1, 0.5, 1, 1, NA, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, NA, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, NA, 0, 1, 0, 0, 1, 0, 1, 0, 0.5, 0, 0, 0, 0, 0, NA, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, NA, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, NA, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0.5, 1, 1, 0, 0, 0.5, 0, 0.5, 1, 1, 0.5, 1, 0, 0, NA, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0.5, 0, 1, 0.5, NA, 0.5, 0.5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, NA, 1, 1, 1, 0, 0, 0.5, 0, 1, 1, 1, 0, 1, 0, 0, NA, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, NA, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, NA, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA, 0, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0.5, 0, 0, 0, 0, NA, 0, 0, 1, 0, 1, 0.5, 1, 0, 0, 0.5, 1, 0, 0, 0.5, NA, 0, 1, 0, 0, 1, 0, 1, 0.5, 1, 0.5, 1, 0.5, 1, 0, NA, 0.5, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 1, 1, 0, NA, 0, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0.5, 0, 0.5, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0.5, 0, 0.5, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0.5, 1, 0.5, 0, 1, 0.5, 0.5, 0, 1, 1, 0.5, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0.5, 1, 1, 0, 0, 0.5, 1, 0, 0.5, 0, 0, 0.5, 0, 0.5, 0, 0.5, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 1, 1, 0, 0.5, 0.5, 0, 0, 1, 0.5, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 0.5, 0.5, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, 0.5, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0.5, 1, 0, 1, 1, 0.5, 1, 0.5, 0, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0.5, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0, 0.5, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0.5, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0.5, 0.5, 1, 1, 0, 0.5, 0.5, 0, 0, 0.5, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0.5, 0.5, 0, 0, 1, 1, 0, 1, 0.5, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0.5, 1, 0.5, 1, 0, 0, 0, 0, 1, 1, 0, 0.5, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 0.5, 0, 1, 0, 0, 0.5, 0, 0.5, 0.5, 1, 0.5, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0.5, 0.5, 1, 1, 1, 0.5, 0, 0.5, 0, 1, 0, 0.5, 0, 0.5, 1, 0, 0, 1, 0, 0, 0.5, 0.5, 0.5, 1, 0, 0, 0, 0.5, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0.5, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 1, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 1, 0.5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0, 0, 0, 0, 0.5, 0.5, 0.5, 0, 0, 0, 0, 0, 0.5, 1, 1, 0.5, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0.5, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0.5, 1, 1, 0, 0, 0.5, 0, 0, 0, 0, 0, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0.5, 1, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 1, 0.5, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0.5, 1, 0, 0, 0.5, 0, 1, 1, 0, 0.5, 0.5, 0, 0, 0.5, 1, 0, 0, 0, 0, 0.5, 0, 0.5, 0.5, 0, 1, 1, 1, 0, 0.5, 0.5, 0.5, 0, 0, 0.5, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0.5, 1, 0, 1, 0.5, 0.5, 0.5, 1, 0, 0, 0.5, 0.5, 0.5, 0.5, 0, 0, 0, 0, 0, 0, 0.5, 0.5, 0, 0, 0, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 1, 0.5, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0.5, 0, 0, 0.5, 0.5, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0.5, 0, 0, 1, 1, 1, 0.5, 1, 0, 0, 1, 1, 1, 0, 0, 0.5, 0, 0, 0, 0, 0, 0.5, 1, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0.5, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0, 0.5, 1, 1, 1, 0.5, 0, 0, 1, 1, 0, 1, 0.5, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0.5, 0, 0, 0, 0.5, 0, 0, 0.5, 0, 0.5, 1, 1, 0, 0, 0.5, 1, 0, 0, 0, 0.5, 1, 1, 1, 0, 0.5, 0.5, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0.5, 1, 1, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0.5, 1, 1, 1, 1, 0.5, 0, 0.5, 0.5, 0, 1, 0, 0, 0.5, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0.5, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 1, 1, 1, 1, 1, 1, 0, 0.5, 0.5, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0, 0.5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0.5, 0, 1, 0.5, 1, 0, 0, 0, 0.5, 0.5, 0.5, 1, 0, 0, 0, 1, 0.5, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0.5, 0, 0, 1, 0.5, 1, 0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 0, 0.5, 0, 1, 0, 0, 0, 0, 0, 0, 0.5, 0, 0.5, 1, 0, 0, 1, 1, 1, 0, 0, 0.5, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5, 0.5, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0.5, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0.5, 0.5, 0.5, 0, 0, 0, 0.5, 0.5, 0.5, 1, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0, 0, 0, 1, 0.5, 1, 1, 0.5, 0, 0, 1, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0, 0.5, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0.5, 0, 0, 0, 1, 0, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0.5, 0.5, 1, 0, 0, 0, 0, 0.5, 1, 0, 1, 1, 0, 0, 0.5, 0.5, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0.5, 0, 0, 1, 0, 0.5, 0.5, 0, 1, 1, 1, 0.5, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0.5, 0.5, 0, 0, 0.5, 0, 0.5, 0, 0, 0, 0.5, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0.5, 0, 0.5, 0.5, 0.5, 0, 0, 0, 0, 1, 0.5, 0.5, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 1, 0.5, 0.5, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0.5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0.5, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0.5, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 1, 0, 0, 1, 0, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 0.5, 0, 0, 0, 0, 1, 0, 0, 0.5, 0, 0, 0, 1, 0.5, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0.5, 0, 0, 0.5, 0.5, 1, 0, 0, 0.5, 0, 0, 0, 0.5, 0, 0.5, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0.5, 0, 0, 1, 1, 0, 0, 0.5, 0.5, 0, 0.5, 0.5, 0, 1, 1, 1, 0.5, 0, 0.5, 0, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0.5, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0.5, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0.5, 0, 1, 1, 0, 0.5, 0.5, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0.5, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0.5, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0.5, 0.5, 0, 1, 0, 0.5, 0.5, 0.5, 1, 0.5, 1, 0.5, 1, 0.5, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0.5, 1, 1, 0.5, 0.5, 0.5, 1, 1, 0.5, 1, 0.5, 0, 0, 0, 0)), .Names = c("student", "school1", "school2", "win1", "win2", "tied", "win1.adj", "win2.adj" ), row.names = c(NA, -4545L), class = "data.frame"), students = structure(list( STUD = structure(c(1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L), .Label = c("other", "commerce" ), class = "factor"), ENG = structure(c(1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L), .Label = c("good", "poor"), class = "factor"), FRA = structure(c(1L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L ), .Label = c("good", "poor"), class = "factor"), SPA = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L), .Label = c("good", "poor"), class = "factor"), ITA = structure(c(2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L), .Label = c("good", "poor"), class = "factor"), WOR = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L), .Label = c("no", "yes"), class = "factor"), DEG = structure(c(2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L), .Label = c("no", "yes"), class = "factor"), SEX = structure(c(2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L), .Label = c("female", "male"), class = "factor")), .Names = c("STUD", "ENG", "FRA", "SPA", "ITA", "WOR", "DEG", "SEX"), row.names = c(NA, -303L), class = "data.frame"), schools = structure(list(Barcelona = c(1, 0, 0, 0, 0, 0), London = c(0, 1, 0, 0, 0, 0), Milano = c(0, 0, 1, 0, 0, 0), Paris = c(0, 0, 0, 1, 0, 0), St.Gallen = c(0, 0, 0, 0, 1, 0), Stockholm = c(0, 0, 0, 0, 0, 1), LAT = c(1, 0, 1, 1, 0, 0)), .Names = c("Barcelona", "London", "Milano", "Paris", "St.Gallen", "Stockholm", "LAT"), row.names = c("Barcelona", "London", "Milano", "Paris", "St.Gallen", "Stockholm"), class = "data.frame")), .Names = c("preferences", "students", "schools")) BradleyTerry2/data/flatlizards.rda0000744000176200001440000002067613152515662016706 0ustar liggesusers{wTIs@@DIb@S슂a ( (M19ŀšA*# ( "I$D~]5 ۺws9=5Uw[{3u @V "WNr+/+GF Ж <}܂}C݃$Srҫ..1Dt$b#kF0V"$b$CX2jCt $I"=$W"]U S+T߾Ł[>ց9(%%lZmmCҚJk~E<Ze5?P@Ye @Y;u0quLTRa*f%VF< H a**S b"s*TK׿SI@EKŶQ "*L5b|TTP*aR}!]S{՘dF<uS1jpPahYza@oأxoZ5X`9Y>ԩ- CzR)jǏYm4sr˛ufJ7߃ zsVw1ֺcqoTcJna __(@ zDϠbQja|wXcwn@h&sVnJ1 B37]ey鉗lǞ^,$`vjOxF(<& OA>nm^ MJ>ZΙuZMJۉM @4@ާ~0Tzq!!4}cD}dٰKZA,嵻^'yP6nmַ%d};_b|A%*NFʥe`nhx4k9t<9$^֞c_Xe2l}bԀkUC+&/ Cgz2l5]`=5=*ۺ(& 5ƲD~ͳ@.}f xhj64m`0Y?v G3:e\9 ׾Wi{T}C+3@?dHEI[MwJ&vى*J_lV¶z7/c3M>}%)/Vb2;Cѓ|Q[mS|3ǝRzʕrKUx1)l|cԔ>Zn<±"vfNU:- +UVVw_ qzSl>mwq Szop060Tu0ۯ 4xagOD֖eu,O`~w8-\5=3@\P͈S{P_C &LGmKb6^q 5vy1_l9)nӋ 㳴CH׫Cֆ_S 3 ,BNbɗ{e,%a@i͛%\# *Y5^N5T͏0d *~y rKs/^UIQn$#k.Wጪz3ȉE5Vu[|3 Lz9{&z TlQ 1Wڸ_"qWðfľL(׫lu__Q~.8sȷ@U+'&zm=X ُ>޶_az o['1up*&,CE_~ x+XXV.gzzM%o .ls㷲x1Adɘ~ gq`͗(6vԙ'Η^`מ4M=wQ8V;_ܐZ%ߚ&c$KR_Y-"oxp4 x]36YC#,+X([]<M>+48AXE|bz-XQ{"g?{-g_V Uk]23t3(fvMt2ϰdwW@MC0io̲O[6I-{dF; ֯9t/%9~ʺk-eݮI. j&džV:~+YI`:eX;\0'M9XgsYS2^d?ªn7j0j}vB!oY󖢖o9nP'\S5?(?^H/X5Hms6^P uiaE Ɨ]sb2{z$PG4ugetjԾNɒ̛x£`%ܩ؄藗8AbVc6H,_ Q͞W-c\ mHuHhՇ}aag6v!Elm$H^*w( z'2T;$|N3% 9Va;ШM ,c: 3^ͪHe8A#9r:kna׃׬[C%dK[8=Ь+S_+s`yl칆OTOsL}=dH;†w("=/ oyj_;$/Q)79oX-?i؈ĝdXO؅}CṈ A}IT$:6?L I:n_K˜Hj#W,Kx ES~_H@E=Ԧ~Ы"9 :T`R_XR>-Q%ƳSQ7u~VȦy?Ah:qEX8(V~zC-wM|?=oPFPI8~Dϳ&PO+;l='-^a!'@w&>f0B=634wuAC_*cq4M_lk ֡uY'ބiޫ^18i"S]?nGZձ$ kczlꔒ>Ao[P8Z񇚄w%«ŤBYt*[K&",砂~ vRM`m2zu@PGтMI+S. y!]eqW7Ii=@ zUȾ1=&IM}؈>7 L@!.?-ZIc9hPM.9JyNڧ[Y hA?8Fb V?3M؝1?} H /Bs P#T|L @>mLҷj tm5hӵrKoBU,oroh~@5U򭕥Ѩ:$_W o@]+6O?MdLI(R|(Crh@h֝AI< s3hvԱ}5_eHfROzE *"(5{| *(,d%yWBr1:mZ߀,.$0J؍J󚉐|SA99Eʹ%˯a7Qj?} 69/V'Z|J |~lx2]V'~:ili@RE]'D׿< M|_\Zʮ2I L̒l?}>6wapy. Lyw/Vr~9+&W:}# \OېYG|AߕtR9^C"X13Ȁc*54 _&$MDbUP8#9O7H#~˵(9?othR!{>` /lv3HpxG}nK ^w? JqW$}?vpYMl?J>Fg \W%ww!=*F>0*?) D_h+H(~xyl3ȗkD9'HT5ѻGgsjֹ4AIιg u{1T}657O~,~/:7ONHܕ2T,^u+VęW=Rx~=$qqn sgG3\\sHKۼN+ ([Jv'UgH8i:>[2XߜDc~k&M +ƭ $ (E4ON?n[şOPVRf;\yh޸ayQAy)G@nf\P(=;.bJgty%@DxyPZ[4Dׂ) vK6_wI5zrŻ8f oٝ: =5˙ɡ}q~4^sn/^_vg [R{dM3=@Nܵط) lktq3dJI?t-Ԏ|l0 (ݼraˢj@V3Arv[Z?DúħkV7-iChn^0#]w=me_*`SWw#-uY7ߵ! m(|28C8?24&ob9*e~ } uPccv'r&Gw71r@Jc>色`OS!,|uy0:z%t8PS4닦|3BUWF^7EmǽA5S Ji$GW3kp%dЁz٭F;ՠTݔg7CwϽJR&i4Ag֢|}r)K (s`PzV!H[$&$] :mtz9+^(ʮM*nٻ$hiP(t|opϖa @bڷ4M{]G%3m IOY|x h<Pڃw2>VeE$ eԜ+&ҜE=ÐW%|ugQ r_@QKQo ?T[K ^-55+4A]v9 G>8=&a%ţFGOYI AC9egG%#y O+O8q˃ʖu=+A~IBy*E7=#A9m3R t$nOn= ޯ 6:a>x2 ciUU 5iYt+"eAz13@LFF6@o܇n;p擒n1:63ȯ{7{9m|u:ǧO| =Wr+/Nl^e̳ J,9ٖύj9c~vlG9 *vzvpqS_+Tm9x{P-}[s3nQ՚&wS?j.cz{|Z(/PX9xhUP6̦ @@a# [Y9 / Wp]zznFI__*V>?g֢)7k[Ğ@Br,ƀ"f=5tTչxmz9ވ>~y?y4 dH1]ҧvYF'Oy7Qj*7LrN#b& 8*Fgߔn<+t‰iP6g=qʗUTZo=ddo{T?r>PEM_u%VP֧mO%XJ óʶQکWɫPIPpoI7n#^1{X( cA~~:tU{@RZ¤hfgWU:+$/u--Pָ( d;o.D vjԬdV#]oԏ rŜR\P[gْ9ĩfܐA3lq >d-TTbƃqg!_r6WqjWrz6"g. 2$l `dUL+wa7.]?Qbv=2<]2:ˬawrGo0?FdfKgҟQ{Jt2F?P Xa]1q?YȣBn? ]JC<=$>9Lj^n}<|{IuC+ݹ2{Ա* |ݽ|8~%^HO?''?wf>H?ݛ)jAd .jvFBradleyTerry2/R/0000755000176200001440000000000013160030525013140 5ustar liggesusersBradleyTerry2/R/vcov.BTglmmPQL.R0000744000176200001440000000021113152515662016004 0ustar liggesusers#' @export vcov.BTglmmPQL <- function (object, ...) { so <- summary(object, corr = FALSE, ...) so$dispersion * so$cov.unscaled } BradleyTerry2/R/baseball.R0000744000176200001440000000350713152515662015051 0ustar liggesusers#' Baseball Data from Agresti (2002) #' #' Baseball results for games in the 1987 season between 7 teams in the Eastern #' Division of the American League. #' #' #' @name baseball #' @docType data #' @format A data frame with 42 observations on the following 4 variables. #' \describe{ #' \item{home.team}{a factor with levels `Baltimore`, #' `Boston`, `Cleveland`, `Detroit`, `Milwaukee`, `New York`, `Toronto`.} #' \item{away.team}{a factor with levels #' `Baltimore`, `Boston`, `Cleveland`, `Detroit`, #' `Milwaukee`, `New York`, `Toronto`.} #' \item{home.wins}{a numeric vector.} #' \item{away.wins}{a numeric vector.} } #' @note This dataset is in a simpler format than the one described in Firth #' (2005). #' @seealso [BTm()] #' @references Firth, D. (2005) Bradley-Terry models in R. *Journal of #' Statistical Software*, **12**(1), 1--12. #' #' Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 #' package. *Journal of Statistical Software*, **48**(9), 1--21. #' @source Page 438 of Agresti, A. (2002) *Categorical Data Analysis* (2nd #' Edn.). New York: Wiley. #' @keywords datasets #' @examples #' #' ## This reproduces the analysis in Sec 10.6 of Agresti (2002). #' data(baseball) # start with baseball data as provided by package #' #' ## Simple Bradley-Terry model, ignoring home advantage: #' baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, #' data = baseball, id = "team") #' #' ## Now incorporate the "home advantage" effect #' baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) #' baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) #' baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) #' #' ## Compare the fit of these two models: #' anova(baseballModel1, baseballModel2) #' #' "baseball" BradleyTerry2/R/print.BTm.R0000744000176200001440000000017613152515662015120 0ustar liggesusers#' @export print.BTm <- function (x, ...) { cat("Bradley Terry model fit by ") cat(x$method, "\n") NextMethod() } BradleyTerry2/R/predict.BTm.R0000744000176200001440000002654013152515662015421 0ustar liggesusers#' Predict Method for Bradley-Terry Models #' #' Obtain predictions and optionally standard errors of those predictions from #' a fitted Bradley-Terry model. #' #' If `newdata` is omitted the predictions are based on the data used for #' the fit. In that case how cases with missing values in the original fit are #' treated is determined by the `na.action` argument of that fit. If #' `na.action = na.omit` omitted cases will not appear in the residuals, #' whereas if `na.action = na.exclude` they will appear (in predictions #' and standard errors), with residual value `NA`. See also #' `napredict`. #' #' @param object a fitted object of class `"BTm"` #' @param newdata (optional) a data frame in which to look for variables with #' which to predict. If omitted, the fitted linear predictors are used. #' @param level for models with random effects: an integer vector giving the #' level(s) at which predictions are required. Level zero corresponds to #' population-level predictions (fixed effects only), whilst level one #' corresponds to the player-level predictions (full model) which are NA for #' contests involving players not in the original data. By default, `level = 0` #' for a fixed effects model, `1` otherwise. #' @param type the type of prediction required. The default is on the scale of #' the linear predictors; the alternative `"response"` is on the scale of #' the response variable. Thus for a default Bradley-Terry model the default #' predictions are of log-odds (probabilities on logit scale) and `type = #' "response"`` gives the predicted probabilities. The `"terms"` option #' returns a matrix giving the fitted values of each term in the model formula #' on the linear predictor scale (fixed effects only). #' @param se.fit logical switch indicating if standard errors are required. #' @param dispersion a value for the dispersion, not used for models with #' random effects. If omitted, that returned by `summary` applied to the #' object is used, where applicable. #' @param terms with `type ="terms"` by default all terms are returned. A #' character vector specifies which terms are to be returned. #' @param na.action function determining what should be done with missing #' values in `newdata`. The default is to predict `NA`. #' @param \dots further arguments passed to or from other methods. #' @return If `se.fit = FALSE`, a vector or matrix of predictions. If #' `se = TRUE`, a list with components \item{fit }{Predictions} #' \item{se.fit }{Estimated standard errors} #' @author Heather Turner #' @seealso [predict.glm()], [predict.glmmPQL()] #' @keywords models #' @examples #' #' ## The final model in example(flatlizards) #' result <- rep(1, nrow(flatlizards$contests)) #' Whiting.model3 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + SVL[..] + (1|..), #' family = binomial(link = "probit"), #' data = flatlizards, trace = TRUE) #' #' ## `new' data for contests between four of the original lizards #' ## factor levels must correspond to original levels, but unused levels #' ## can be dropped - levels must match rows of predictors #' newdata <- list(contests = data.frame( #' winner = factor(c("lizard048", "lizard060"), #' levels = c("lizard006", "lizard011", #' "lizard048", "lizard060")), #' loser = factor(c("lizard006", "lizard011"), #' levels = c("lizard006", "lizard011", #' "lizard048", "lizard060")) #' ), #' predictors = flatlizards$predictors[c(3, 6, 27, 33), ]) #' #' predict(Whiting.model3, level = 1, newdata = newdata) #' #' ## same as #' predict(Whiting.model3, level = 1)[1:2] #' #' ## introducing a new lizard #' newpred <- rbind(flatlizards$predictors[c(3, 6, 27), #' c("throat.PC1","throat.PC3", "SVL", "head.length")], #' c(-5, 1.5, 1, 0.1)) #' rownames(newpred)[4] <- "lizard059" #' #' newdata <- list(contests = data.frame( #' winner = factor(c("lizard048", "lizard059"), #' levels = c("lizard006", "lizard011", #' "lizard048", "lizard059")), #' loser = factor(c("lizard006", "lizard011"), #' levels = c("lizard006", "lizard011", #' "lizard048", "lizard059")) #' ), #' predictors = newpred) #' #' ## can only predict at population level for contest with new lizard #' predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata) #' #' ## predicting at specific levels of covariates #' #' ## consider a model from example(CEMS) #' table6.model <- BTm(outcome = cbind(win1.adj, win2.adj), #' player1 = school1, player2 = school2, #' formula = ~ .. + #' WOR[student] * Paris[..] + #' WOR[student] * Milano[..] + #' WOR[student] * Barcelona[..] + #' DEG[student] * St.Gallen[..] + #' STUD[student] * Paris[..] + #' STUD[student] * St.Gallen[..] + #' ENG[student] * St.Gallen[..] + #' FRA[student] * London[..] + #' FRA[student] * Paris[..] + #' SPA[student] * Barcelona[..] + #' ITA[student] * London[..] + #' ITA[student] * Milano[..] + #' SEX[student] * Milano[..], #' refcat = "Stockholm", #' data = CEMS) #' #' ## estimate abilities for a combination not seen in the original data #' #' ## same schools #' schools <- levels(CEMS$preferences$school1) #' ## new student data #' students <- data.frame(STUD = "other", ENG = "good", FRA = "good", #' SPA = "good", ITA = "good", WOR = "yes", DEG = "no", #' SEX = "female", stringsAsFactors = FALSE) #' ## set levels to be the same as original data #' for (i in seq_len(ncol(students))){ #' students[,i] <- factor(students[,i], levels(CEMS$students[,i])) #' } #' newdata <- list(preferences = #' data.frame(student = factor(500), # new id matching with `students[1,]` #' school1 = factor("London", levels = schools), #' school2 = factor("Paris", levels = schools)), #' students = students, #' schools = CEMS$schools) #' #' ## warning can be ignored as model specification was over-parameterized #' predict(table6.model, newdata = newdata) #' #' ## if treatment contrasts are use (i.e. one player is set as the reference #' ## category), then predicting the outcome of contests against the reference #' ## is equivalent to estimating abilities with specific covariate values #' #' ## add student with all values at reference levels #' students <- rbind(students, #' data.frame(STUD = "other", ENG = "good", FRA = "good", #' SPA = "good", ITA = "good", WOR = "no", DEG = "no", #' SEX = "female", stringsAsFactors = FALSE)) #' ## set levels to be the same as original data #' for (i in seq_len(ncol(students))){ #' students[,i] <- factor(students[,i], levels(CEMS$students[,i])) #' } #' newdata <- list(preferences = #' data.frame(student = factor(rep(c(500, 502), each = 6)), #' school1 = factor(schools, levels = schools), #' school2 = factor("Stockholm", levels = schools)), #' students = students, #' schools = CEMS$schools) #' #' predict(table6.model, newdata = newdata, se.fit = TRUE) #' #' ## the second set of predictions (elements 7-12) are equivalent to the output #' ## of BTabilities; the first set are adjust for `WOR` being equal to "yes" #' BTabilities(table6.model) #' #' @importFrom stats model.matrix na.pass reformulate #' @export predict.BTm <- function (object, newdata = NULL, level = ifelse(is.null(object$random), 0, 1), type = c("link", "response", "terms"), se.fit = FALSE, dispersion = NULL, terms = NULL, na.action = na.pass, ...) { type <- match.arg(type) if (!is.null(newdata)) { ## need to define X so will work with model terms setup <- match(c("player1", "player2", "formula", "id", "separate.ability", "refcat", "weights", "subset", "offset", "contrasts"), names(object$call), 0L) setup <- do.call(BTm.setup, c(as.list(object$call)[setup], list(data = newdata)), envir = environment(object$formula)) nfix <- length(object$coefficients) newdata <- data.frame(matrix(, nrow(setup$X), 0)) keep <- match(names(object$coefficients), colnames(setup$X), nomatch = 0) if (0 %in% keep){ ## new players with missing data - set to NA missing <- rowSums(setup$X[,-keep, drop = FALSE]) != 0 setup$X <- setup$X[, keep] setup$X[missing,] <- NA } if (ncol(setup$X) != nfix) { ## newdata does not include original players with missing data X <- matrix(0, nrow(setup$X), nfix, dimnames = list(rownames(setup$X), names(object$coefficients))) X[, colnames(setup$X)] <- setup$X newdata$X <- X } else newdata$X <- setup$X nran <- length(attr(object$coefficients, "random")) if (1 %in% level && !is.null(object$random) && type != "terms"){ if (ncol(setup$random) != nran) { ## expand to give col for every random effect Z <- matrix(0, nrow(setup$random), nran, dimnames = list(rownames(setup$random), colnames(object$random))) #ranef need names!! ## set to NA for contests with new players ## (with predictors present) miss <- !colnames(setup$random) %in% colnames(Z) Z[, colnames(setup$random)[!miss]] <- setup$random[,!miss] if (any(miss)) { miss <- rowSums(setup$random[, miss, drop = FALSE] != 0) > 0 Z[miss,] <- NA } newrandom <- Z } else newrandom <- setup$random return(NextMethod(newrandom = newrandom)) } } if (type == "terms") { object$x <- model.matrix(object) attr(object$x, "assign") <- object$assign id <- unique(object$assign) terms <- paste("X", id, sep = "") object$terms <- terms(reformulate(c(0, terms))) splitX <- function(X) { newdata <- data.frame(matrix(, nrow(X), 0)) for (i in seq(id)) newdata[terms[i]] <- X[,object$assign == id[i]] newdata } if (is.null(newdata)) newdata <- splitX(object$x) else newdata <- splitX(newdata$X) tmp <- NextMethod(newdata = newdata) #tmp$fit[tmp$se.fit == 0] <- NA tmp$se.fit[tmp$se.fit == 0] <- NA colnames(tmp$fit) <- colnames(tmp$se.fit) <- c("(separate)"[0 %in% id], object$term.labels) return(tmp) } else NextMethod() } BradleyTerry2/R/seeds.R0000744000176200001440000000207213152515662014403 0ustar liggesusers#' Seed Germination Data from Crowder (1978) #' #' Data from Crowder(1978) giving the proportion of seeds germinated for 21 #' plates that were arranged according to a 2x2 factorial layout by seed #' variety and type of root extract. #' #' #' @name seeds #' @docType data #' @format A data frame with 21 observations on the following 4 variables. #' \describe{ #' \item{r}{the number of germinated seeds.} #' \item{n}{the total number of seeds.} #' \item{seed}{the seed #' variety.} #' \item{extract}{the type of root extract.} } #' @seealso [glmmPQL()] #' @references Breslow, N. E. and Clayton, D. G. (1993) Approximate inference #' in Generalized Linear Mixed Models. *Journal of the American #' Statistical Association*, **88**(421), 9--25. #' @source Crowder, M. (1978) Beta-Binomial ANOVA for proportions. #' *Applied Statistics*, **27**, 34--37. #' @keywords datasets #' @examples #' #' summary(glmmPQL(cbind(r, n - r) ~ seed + extract, #' random = diag(nrow(seeds)), #' family = binomial, #' data = seeds)) #' "seeds" BradleyTerry2/R/citations.R0000744000176200001440000000310213157761356015300 0ustar liggesusers#' Statistics Journal Citation Data from Stigler (1994) #' #' Extracted from a larger table in Stigler (1994). Inter-journal citation #' counts for four journals, \dQuote{Biometrika}, \dQuote{Comm Statist.}, #' \dQuote{JASA} and \dQuote{JRSS-B}, as used on p448 of Agresti (2002). #' #' In the context of paired comparisons, the \sQuote{winner} is the cited #' journal and the \sQuote{loser} is the one doing the citing. #' #' @name citations #' @docType data #' @format A 4 by 4 contingency table of citations, cross-classified by the #' factors `cited` and `citing` each with levels `Biometrika`, #' `Comm Statist`, `JASA`, and `JRSS-B`. #' @seealso [BTm()] #' @references Firth, D. (2005) Bradley-Terry models in R. *Journal of #' Statistical Software* **12**(1), 1--12. #' #' Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 #' package. *Journal of Statistical Software*, **48**(9), 1--21. #' #' Stigler, S. (1994) Citation patterns in the journals of statistics and #' probability. *Statistical Science* **9**, 94--108. #' @source Agresti, A. (2002) *Categorical Data Analysis* (2nd ed). New #' York: Wiley. #' @keywords datasets #' @examples #' #' ## Data as a square table, as in Agresti p448 #' citations #' #' ## #' ## Convert frequencies to success/failure data: #' ## #' citations.sf <- countsToBinomial(citations) #' names(citations.sf)[1:2] <- c("journal1", "journal2") #' #' ## Standard Bradley-Terry model fitted to these data #' citeModel <- BTm(cbind(win1, win2), journal1, journal2, #' data = citations.sf) #' "citations" BradleyTerry2/R/qvcalc.BTabilities.R0000744000176200001440000000701513160030525016732 0ustar liggesusers#' Quasi Variances for Estimated Abilities #' #' A method for [qvcalc::qvcalc()] to compute a set of quasi variances (and #' corresponding quasi standard errors) for estimated abilities from a #' Bradley-Terry model as returned by [BTabilities()]. #' #' For details of the method see Firth (2000), Firth (2003) or Firth and de #' Menezes (2004). Quasi variances generalize and improve the accuracy of #' \dQuote{floating absolute risk} (Easton et al., 1991). This device for #' economical model summary was first suggested by Ridout (1989). #' #' Ordinarily the quasi variances are positive and so their square roots #' (the quasi standard errors) exist and can be used in plots, etc. #' #' @param object a `"BTabilities"` object as returned by [BTabilities()]. #' @param ... additional arguments, currently ignored. #' @return A list of class `"qv"`, with components #' \item{covmat}{The full variance-covariance matrix for the estimated #' abilities.} #' \item{qvframe}{A data frame with variables `estimate`, `SE`, `quasiSE` and #' `quasiVar`, the last two being a quasi standard error and quasi-variance #' for each ability.} #' \item{dispersion}{`NULL` (dispersion is fixed to 1).} #' \item{relerrs}{Relative errors for approximating the standard errors of all #' simple contrasts.} #' \item{factorname}{The name of the ID factor identifying players in the `BTm` #' formula.} #' \item{coef.indices}{`NULL` (no required for this method).} #' \item{modelcall}{The call to `BTm` to fit the Bradley-Terry model from which #' the abilities were estimated.} #' @references #' Easton, D. F, Peto, J. and Babiker, A. G. A. G. (1991) Floating absolute #' risk: an alternative to relative risk in survival and case-control analysis #' avoiding an arbitrary reference group. *Statistics in Medicine* **10**, #' 1025--1035. #' #' Firth, D. (2000) Quasi-variances in Xlisp-Stat and on the web. #' *Journal of Statistical Software* **5.4**, 1--13. #' At http://www.jstatsoft.org #' #' Firth, D. (2003) Overcoming the reference category problem in the #' presentation of statistical models. *Sociological Methodology* #' **33**, 1--18. #' #' Firth, D. and de Menezes, R. X. (2004) Quasi-variances. #' *Biometrika* **91**, 65--80. #' #' Menezes, R. X. de (1999) More useful standard errors for group and factor #' effects in generalized linear models. *D.Phil. Thesis*, #' Department of Statistics, University of Oxford. #' #' Ridout, M.S. (1989). Summarizing the results of fitting generalized #' linear models to data from designed experiments. In: *Statistical #' Modelling: Proceedings of GLIM89 and the 4th International #' Workshop on Statistical Modelling held in Trento, Italy, July 17--21, #' 1989* (A. Decarli et al., eds.), pp 262--269. New York: Springer. #' @author David Firth #' @seealso [qvcalc::worstErrors()], [qvcalc::plot.qv()]. #' @examples #' example(baseball) #' baseball.qv <- qvcalc(BTabilities(baseballModel2)) #' print(baseball.qv) #' plot(baseball.qv, xlab = "team", #' levelNames = c("Bal", "Bos", "Cle", "Det", "Mil", "NY", "Tor")) #' @method qvcalc BTabilities #' @importFrom qvcalc qvcalc.default #' @importFrom stats coef vcov #' @export qvcalc.BTabilities <- function(object, ...){ vc <- vcov(object) cf <- coef(object) factorname <- attr(object, "factorname") modelcall <- attr(object, "modelcall") qvcalc.default(vc, factorname = factorname, estimates = cf, modelcall = modelcall) } #' @importFrom qvcalc qvcalc #' @export qvcalc::qvcalc BradleyTerry2/R/glmmPQL.control.R0000744000176200001440000000507013152515662016271 0ustar liggesusers#' Control Aspects of the glmmPQL Algorithm #' #' Set control variables for the glmmPQL algorithm. #' #' This function provides an interface to control the PQL algorithm used by #' [BTm()] for fitting Bradley Terry models with random effects. #' #' The algorithm iterates between a series of iterated weighted least squares #' iterations to update the fixed effects and a single Fisher scoring iteration #' to update the standard deviation of the random effects. #' #' Convergence of both the inner and outer iterations are judged by comparing #' the squared components of the relevant score vector with corresponding #' elements of the diagonal of the Fisher information matrix. If, for all #' components of the relevant score vector, the ratio is less than #' `tolerance^2`, or the corresponding diagonal element of the Fisher #' information matrix is less than 1e-20, iterations cease. #' #' @param maxiter the maximum number of outer iterations. #' @param IWLSiter the maximum number of iterated weighted least squares #' iterations used to estimate the fixed effects, given the standard deviation #' of the random effects. #' @param tol the tolerance used to determine convergence in the IWLS #' iterations and over all (see details). #' @param trace logical: whether or not to print the score for the random #' effects variance at the end of each iteration. #' @return A list with the arguments as components. #' @author Heather Turner #' @seealso [glmmPQL()], [BTm()] #' @references Breslow, N. E. and Clayton, D. G. (1993), Approximate inference #' in Generalized Linear Mixed Models. *Journal of the American #' Statistical Association* **88**(421), 9--25. #' @keywords models #' @examples #' #' ## Variation on example(flatlizards) #' result <- rep(1, nrow(flatlizards$contests)) #' #' ## BTm passes arguments on to glmmPQL.control() #' args(BTm) #' BTmodel <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + SVL[..] + (1|..), #' data = flatlizards, tol = 1e-3, trace = TRUE) #' summary(BTmodel) #' #' @export glmmPQL.control <- function (maxiter = 50, IWLSiter = 10, tol = 1e-6, trace = FALSE) { call <- as.list(match.call()) if (length(call) > 1) { argPos <- match(c("maxiter", "IWLSiter", "tol"), names(call)) for (n in argPos[!is.na(argPos)]) { if (!is.numeric(call[[n]]) || call[[n]] <= 0) stop("value of '", names(call)[n], "' must be > 0") } } list(maxiter = maxiter, IWLSiter = IWLSiter, tol = tol, trace = trace) } BradleyTerry2/R/missToZero.R0000744000176200001440000000015413152515662015415 0ustar liggesusersmissToZero <- function(x, miss, dim = 1) { if (dim == 1) x[miss, ] <- 0 else x[, miss] <- 0 x } BradleyTerry2/R/add1.BTm.R0000744000176200001440000002200113152515662014564 0ustar liggesusers#' Add or Drop Single Terms to/from a Bradley Terry Model #' #' Add or drop single terms within the limit specified by the `scope` #' argument. For models with no random effects, compute an analysis of deviance #' table, otherwise compute the Wald statistic of the parameters that have been #' added to or dropped from the model. #' #' The hierarchy is respected when considering terms to be added or dropped: #' all main effects contained in a second-order interaction must remain, and so #' on. #' #' In a scope formula \samp{.} means \sQuote{what is already there}. #' #' For `drop1`, a missing `scope` is taken to mean that all terms in #' the model may be considered for dropping. #' #' If `scope` includes player covariates and there are players with #' missing values over these covariates, then a separate ability will be #' estimated for these players in *all* fitted models. Similarly if there #' are missing values in any contest-level variables in `scope`, the #' corresponding contests will be omitted from all models. #' #' If `formula` includes random effects, the same random effects structure #' will apply to all models. #' #' @aliases add1.BTm drop1.BTm #' @param object a fitted object of class inheriting from `"BTm"`. #' @param scope a formula specifying the model including all terms to be #' considered for adding or dropping. #' @param scale an estimate of the dispersion. Not implemented for models with #' random effects. #' @param test should a p-value be returned? The F test is only appropriate for #' models with no random effects for which the dispersion has been estimated. #' The Chisq test is a likelihood ratio test for models with no random effects, #' otherwise a Wald test. #' @param x a model matrix containing columns for all terms in the scope. #' Useful if `add1` is to be called repeatedly. **Warning:** no checks #' are done on its validity. #' @param \dots further arguments passed to [add1.glm()]. #' @return An object of class `"anova"` summarizing the differences in fit #' between the models. #' @author Heather Turner #' @seealso [BTm()], [anova.BTm()] #' @keywords models #' @examples #' #' result <- rep(1, nrow(flatlizards$contests)) #' BTmodel1 <- BTm(result, winner, loser, #' ~ throat.PC1[..] + throat.PC3[..] + (1|..), #' data = flatlizards, #' tol = 1e-4, sigma = 2, trace = TRUE) #' #' drop1(BTmodel1) #' #' add1(BTmodel1, ~ . + head.length[..] + SVL[..], test = "Chisq") #' #' BTmodel2 <- update(BTmodel1, formula = ~ . + head.length[..]) #' #' drop1(BTmodel2, test = "Chisq") #' #' @importFrom stats add.scope coef model.frame model.offset model.response model.weights formula pchisq pf reformulate terms update update.formula vcov #' @importFrom lme4 findbars nobars #' @export add1.BTm <- function(object, scope, scale = 0, test = c("none", "Chisq", "F"), x = NULL, ...) { old.form <- formula(object) new.form <- update.formula(old.form, scope) if (!is.character(scope)){ orandom <- findbars(old.form[[2]]) srandom <- findbars(new.form[[2]]) if (length(srandom) && !identical(orandom, srandom)) stop("Random effects structure of object and scope must be ", "identical.") scope <- add.scope(old.form, new.form) } if (!length(scope)) stop("no terms in scope for adding to object") if (is.null(x)) { # create model.matrix for maximum scope model <- Diff(object$player1, object$player2, new.form, object$id, object$data, object$separate.ability, object$refcat) if (sum(model$offset) > 0) warning("ignoring offset terms in scope") x <- model$X asgn <- attr(x, "assign") ## add dummy term for any separate effects oTerms <- c("sep"[0 %in% asgn], object$term.labels) object$terms <- terms(reformulate(oTerms)) y <- object$y dummy <- y ~ x - 1 if (!is.null(model$random)) { dummy <- update(dummy, .~ . + Z) Z <- model$random } argPos <- match(c("weights", "subset", "na.action"), names(object$call), 0) mf <- as.call(c(model.frame, as.list(object$call)[argPos], list(formula = dummy, offset = object$offset))) mf <- eval(mf, parent.frame()) x <- mf$x y <- model.response(mf) Z <- mf$Z wt <- model.weights(mf) if (is.null(wt)) wt <- rep.int(1, length(y)) offset <- model.offset(mf) } else { asgn <- attr(x, "assign") y <- object$y wt <- object$prior.weights offset <- object$offset Z <- object$random } if (is.null(object$random)){ attr(x, "assign") <- asgn + 1 object$formula <- formula(object$terms) object$x <- x object$y <- y object$random <- Z object$prior.weights <- wt object$offset <- offset stat.table <- NextMethod(x = x) attr(stat.table, "heading")[3] <- deparse(old.form) if (newsep <- sum(asgn == 0) - sum(object$assign ==0)) attr(stat.table, "heading") <- c(attr(stat.table, "heading"), paste("\n", newsep, " separate effects added\n", sep = "")) attr(stat.table, "separate.abilities") <- colnames(x)[asgn == 0] return(stat.table) } ## use original term labels: no sep effects or backticks (typically) oTerms <- attr(terms(nobars(old.form)), "term.labels") Terms <- attr(terms(nobars(new.form)), "term.labels") ousex <- asgn %in% c(0, which(Terms %in% oTerms)) sTerms <- vapply(strsplit(Terms, ":", fixed = TRUE), function(x) paste(sort(x), collapse = ":"), character(1)) method <- switch(object$method, glmmPQL.fit) control <- object$control control$trace <- FALSE if (scale == 0) dispersion <- 1 else dispersion <- scale ns <- length(scope) stat <- df <- numeric(ns) # don't add in original as don't need for tests names(stat) <- names(df) <- as.character(scope) tryerror <- FALSE for (i in seq(scope)) { stt <- paste(sort(strsplit(scope[i], ":")[[1]]), collapse = ":") usex <- match(asgn, match(stt, sTerms), 0) > 0 | ousex fit <- method(X = x[, usex, drop = FALSE], y = y, Z = Z, weights = wt, offset = offset, family = object$family, control = control, sigma = object$call$sigma, sigma.fixed = object$sigma.fixed) class(fit) <- oldClass(object) ind <- (usex & !ousex)[usex] trystat <- try(t(coef(fit)[ind]) %*% chol2inv(chol(vcov(fit, dispersion = dispersion)[ind, ind])) %*% coef(fit)[ind], silent = TRUE) #vcov should handle disp != 1 if (inherits(trystat, "try-error")) { stat[i] <- df[i] <- NA tryerror <- TRUE } else { stat[i] <- trystat df[i] <- sum(ind) } } table <- data.frame(stat, df) dimnames(table) <- list(names(df), c("Statistic", "Df")) title <- "Single term additions\n" topnote <- paste("Model: ", deparse(as.vector(formula(object))), if (scale > 0) paste("\nscale: ", format(scale), "\n"), if (tryerror) "\n\nTest statistic unestimable for at least one term") test <- match.arg(test) if (test == "Chisq") { dfs <- table[, "Df"] vals <- table[, "Statistic"] vals[dfs %in% 0] <- NA table <- cbind(table, `P(>|Chi|)` = pchisq(vals, abs(dfs), lower.tail = FALSE)) } else if (test == "F") { ## Assume dispersion fixed at one - if dispersion estimated, would use ## "residual" df from larger model in each comparison df.dispersion <- Inf if (df.dispersion == Inf) { fam <- object[[1]]$family$family if (fam == "binomial" || fam == "poisson") warning(gettextf( "using F test with a '%s' family is inappropriate", fam), domain = NA, call. = FALSE) else { warning("using F test with a fixed dispersion is inappropriate") } } dfs <- table[, "Df"] Fvalue <- table[, "Statistic"]/abs(dfs) Fvalue[dfs %in% 0] <- NA table <- cbind(table, F = Fvalue, `Pr(>F)` = pf(Fvalue, abs(dfs), df.dispersion, lower.tail = FALSE)) } if (newsep <- sum(asgn == 0) - sum(object$assign ==0)) heading <- c(heading, paste("\n", newsep, " separate effects added\n", sep = "")) structure(table, heading = c(title, topnote), class = c("anova", "data.frame"), separate.abilities = colnames(x)[asgn == 0]) } BradleyTerry2/R/model.matrix.BTm.R0000744000176200001440000000016313152515662016363 0ustar liggesusers#' @importFrom stats model.frame #' @export model.matrix.BTm <- function(object, ...){ model.frame(object)$X } BradleyTerry2/R/sound.fields.R0000744000176200001440000000715613152515662015705 0ustar liggesusers#' Kousgaard (1984) Data on Pair Comparisons of Sound Fields #' #' The results of a series of factorial subjective room acoustic experiments #' carried out at the Technical University of Denmark by A C Gade. #' #' The variables `win1.adj` and `win2.adj` are provided in order to #' allow a simple way of handling ties (in which a tie counts as half a win and #' half a loss), which is slightly different numerically from the Davidson #' (1970) method that is used by Kousgaard (1984): see the examples. #' #' @name sound.fields #' @docType data #' @format A list containing two data frames, `sound.fields$comparisons`, #' and `sound.fields$design`. #' #' The `sound.fields$comparisons` data frame has 84 observations on the #' following 8 variables: \describe{ #' \item{field1}{a factor with levels #' `c("000", "001", "010", "011", "100", "101", "110", "111")`, the first #' sound field in a comparison} #' \item{field2}{a factor with the same #' levels as `field1`; the second sound field in a comparison} #' \item{win1}{integer, the number of times that `field1` was #' preferred to `field2`} #' \item{tie}{integer, the number of times #' that no preference was expressed when comparing `field1` and #' `field2`} #' \item{win2}{integer, the number of times that #' `field2` was preferred to `field1`} #' \item{win1.adj}{numeric, equal to `win1 + tie/2`} #' \item{win2.adj}{numeric, equal to `win2 + tie/2`} #' \item{instrument}{a factor with 3 levels, `c("cello", "flute", #' "violin")`} } #' #' The `sound.fields$design` data frame has 8 observations (one for each #' of the sound fields compared in the experiment) on the following 3 #' variables: \describe{ #' \item{a")}{a factor with levels `c("0", #' "1")`, the *direct sound* factor (0 for *obstructed sight line*, 1 #' for *free sight line*); contrasts are sum contrasts} #' \item{b}{a #' factor with levels `c("0", "1")`, the *reflection* factor (0 for #' *-26dB*, 1 for *-20dB*); contrasts are sum contrasts} #' \item{c}{a factor with levels `c("0", "1")`, the #' *reverberation* factor (0 for *-24dB*, 1 for *-20dB*); #' contrasts are sum contrasts} } #' @author David Firth #' @references Davidson, R. R. (1970) Extending the Bradley-Terry model to #' accommodate ties in paired comparison experiments. *Journal of the #' American Statistical Association* **65**, 317--328. #' @source Kousgaard, N. (1984) Analysis of a Sound Field Experiment by a Model #' for Paired Comparisons with Explanatory Variables. *Scandinavian #' Journal of Statistics* **11**, 51--57. #' @keywords datasets #' @examples #' #' ## #' ## Fit the Bradley-Terry model to data for flutes, using the simple 'add 0.5' #' ## method to handle ties: #' ## #' flutes.model <- BTm(cbind(win1.adj, win2.adj), field1, field2, ~ field, #' id = "field", #' subset = (instrument == "flute"), #' data = sound.fields) #' ## #' ## This agrees (after re-scaling) quite closely with the estimates given #' ## in Table 3 of Kousgaard (1984): #' ## #' table3.flutes <- c(-0.581, -1.039, 0.347, 0.205, 0.276, 0.347, 0.311, 0.135) #' plot(c(0, coef(flutes.model)), table3.flutes) #' abline(lm(table3.flutes ~ c(0, coef(flutes.model)))) #' ## #' ## Now re-parameterise that model in terms of the factorial effects, as #' ## in Table 5 of Kousgaard (1984): #' ## #' flutes.model.reparam <- update(flutes.model, #' formula = ~ a[field] * b[field] * c[field] #' ) #' table5.flutes <- c(.267, .250, -.088, -.294, .062, .009, -0.070) #' plot(coef(flutes.model.reparam), table5.flutes) #' abline(lm(table5.flutes ~ coef(flutes.model.reparam))) #' "sound.fields" BradleyTerry2/R/formula.BTm.R0000744000176200001440000000006613152515662015427 0ustar liggesusers#' @export formula.BTm <- function(x, ...) x$formula BradleyTerry2/R/plotProportions.R0000744000176200001440000003403013152515662016534 0ustar liggesusers## P(win|not tie) in terms of expit(lambda_i - lambda_j) GenDavidsonTie <- function(p){ scale <- match("tie.scale", substring(names(coef), 1, 9), 0) if (scale != 0) scale <- exp(coef[scale]) else scale <- 1 tie.mode <- match("tie.mode", substring(names(coef), 1, 8), 0) if (tie.mode != 0) tie.mode <- coef["tie.mode"] delta <- coef[match("tie.max", substring(names(coef), 1, 7))] ## first player is at home weight1 <- plogis(tie.mode) weight2 <- 1 - weight1 ## plogis = expit plogis(delta - scale * (weight1 * log(weight1) + weight2 * log(weight2)) + scale * (weight1 * log(p) + weight2 * log(1-p))) } #tmp <- eval(substitute(player1), data, parent.frame()) #' Plot Proportions of Tied Matches and Non-tied Matches Won #' #' Plot proportions of tied matches and non-tied matches won by the first #' player, within matches binned by the relative player ability, as expressed #' by the probability that the first player wins, given the match is not a tie. #' Add fitted lines for each set of matches, as given by the generalized #' Davidson model. #' #' If `home.adv` is specified, the results are re-ordered if necessary so #' that the home player comes first; any matches played on neutral ground are #' omitted. #' #' First the probability that the first player wins given that the match is not #' a tie is computed: \deqn{expit(home.adv + abilities[player1] - #' abilities[player2])} where `home.adv` and `abilities` are #' parameters from a generalized Davidson model that have been estimated on the #' log scale. #' #' The matches are then binned according to this probability, grouping together #' matches with similar relative ability between the first player and the #' second player. Within each bin, the proportion of tied matches is computed #' and these proportions are plotted against the mid-point of the bin. Then the #' bins are re-computed omitting the tied games and the proportion of non-tied #' matches won by the first player is found and plotted against the new #' mid-point. #' #' Finally curves are added for the probability of a tie and the conditional #' probability of win given the match is not a tie, under a generalized #' Davidson model with parameters as specified by `tie.max`, #' `tie.scale` and `tie.mode`. #' #' The function can also be used to plot the proportions of wins along with the #' fitted probability of a win under the Bradley-Terry model. #' #' @param win a logical vector: `TRUE` if player1 wins, `FALSE` #' otherwise. #' @param tie a logical vector: `TRUE` if the outcome is a tie, #' `FALSE` otherwise (`NULL` if there are no ties). #' @param loss a logical vector: `TRUE` if player1 loses, `FALSE` #' otherwise. #' @param player1 an ID factor specifying the first player in each contest, #' with the same set of levels as `player2`. #' @param player2 an ID factor specifying the second player in each contest, #' with the same set of levels as `player2`. #' @param abilities the fitted abilities from a generalized Davidson model (or #' a Bradley-Terry model). #' @param home.adv if applicable, the fitted home advantage parameter from a #' generalized Davidson model (or a Bradley-Terry model). #' @param tie.max the fitted parameter from a generalized Davidson model #' corresponding to the maximum tie probability. #' @param tie.scale if applicable, the fitted parameter from a generalized #' Davidson model corresponding to the scale of dependence of the tie #' probability on the probability that `player1` wins, given the outcome #' is not a draw. #' @param tie.mode if applicable, the fitted parameter from a generalized #' Davidson model corresponding to the location of maximum tie probability, in #' terms of the probability that `player1` wins, given the outcome is not #' a draw. #' @param at.home1 a logical vector: `TRUE` if `player1` is at home, #' `FALSE` otherwise. #' @param at.home2 a logical vector: `TRUE` if `player2` is at home, #' `FALSE` otherwise. #' @param data an optional data frame providing variables required by the #' model, with one observation per match. #' @param subset an optional logical or numeric vector specifying a subset of #' observations to include in the plot. #' @param bin.size the approximate number of matches in each bin. #' @param xlab the label to use for the x-axis. #' @param ylab the label to use for the y-axis. #' @param legend text to use for the legend. #' @param col a vector specifying colours to use for the proportion of non-tied #' matches won and the proportion of tied matches. #' @param \dots further arguments passed to plot. #' @return A list of data frames: \item{win}{ a data frame comprising #' `prop.win`, the proportion of non-tied matches won by the first player #' in each bin and `bin.win`, the mid-point of each bin. } \item{tie}{ #' (when ties are present) a data frame comprising `prop.tie`, the #' proportion of tied matches in each bin and `bin.tie`, the mid-point of #' each bin. } #' @note This function is designed for single match outcomes, therefore data #' aggregated over player pairs will need to be expanded. #' @author Heather Turner #' @seealso [GenDavidson()], [BTm()] #' @keywords models nonlinear #' @examples #' #' #### A Bradley-Terry example using icehockey data #' #' ## Fit the standard Bradley-Terry model, ignoring home advantage #' standardBT <- BTm(outcome = result, #' player1 = visitor, player2 = opponent, #' id = "team", data = icehockey) #' #' ## comparing teams on a "level playing field" #' levelBT <- BTm(result, #' data.frame(team = visitor, home.ice = 0), #' data.frame(team = opponent, home.ice = home.ice), #' ~ team + home.ice, #' id = "team", data = icehockey) #' #' ## compare fit to observed proportion won #' ## exclude tied matches as not explicitly modelled here #' par(mfrow = c(1, 2)) #' plotProportions(win = result == 1, loss = result == 0, #' player1 = visitor, player2 = opponent, #' abilities = BTabilities(standardBT)[,1], #' data = icehockey, subset = result != 0.5, #' main = "Without home advantage") #' #' plotProportions(win = result == 1, loss = result == 0, #' player1 = visitor, player2 = opponent, #' home.adv = coef(levelBT)["home.ice"], #' at.home1 = 0, at.home2 = home.ice, #' abilities = BTabilities(levelBT)[,1], #' data = icehockey, subset = result != 0.5, #' main = "With home advantage") #' #' #### A generalized Davidson example using football data #' if (require(gnm)) { #' #' ## subset to first and last season for illustration #' football <- subset(football, season %in% c("2008-9", "2012-13")) #' #' ## convert to trinomial counts #' football.tri <- expandCategorical(football, "result", idvar = "match") #' #' ## add variable to indicate whether team playing at home #' football.tri$at.home <- !logical(nrow(football.tri)) #' #' ## fit Davidson model #' Dav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, home.adv = ~1, #' tie.max = ~1, #' at.home1 = at.home, #' at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri) #' #' ## fit shifted & scaled Davidson model #' shifScalDav <- gnm(count ~ #' GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, home.adv = ~1, #' tie.max = ~1, tie.scale = ~1, tie.mode = ~1, #' at.home1 = at.home, #' at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri) #' #' ## diagnostic plots #' main <- c("Davidson", "Shifted & Scaled Davidson") #' mod <- list(Dav, shifScalDav) #' names(mod) <- main #' alpha <- names(coef(Dav)[-(1:2)]) #' #' ## use football.tri data so that at.home can be found, #' ## but restrict to actual match results #' par(mfrow = c(1,2)) #' for (i in 1:2) { #' coef <- parameters(mod[[i]]) #' plotProportions(result == 1, result == 0, result == -1, #' home:season, away:season, #' abilities = coef[alpha], #' home.adv = coef["home.adv"], #' tie.max = coef["tie.max"], #' tie.scale = coef["tie.scale"], #' tie.mode = coef["tie.mode"], #' at.home1 = at.home, #' at.home2 = !at.home, #' main = main[i], #' data = football.tri, subset = count == 1) #' } #' } #' #' @importFrom graphics curve plot points #' @importFrom stats na.omit #' @export plotProportions <- function(win, tie = NULL, loss, player1, player2, abilities = NULL, home.adv = NULL, tie.max = NULL, tie.scale = NULL, tie.mode = NULL, at.home1 = NULL, at.home2 = NULL, data = NULL, subset = NULL, bin.size = 20, xlab = "P(player1 wins | not a tie)", ylab = "Proportion", legend = NULL, col = 1:2, ...){ call <- as.list(match.call()) var <- intersect(names(call), c("win", "tie", "loss", "player1", "player2", "at.home1", "at.home2")) var <- var[!vapply(call[var], is.null, logical(1))] dat <- with(data, do.call("data.frame", call[var])) if (!missing(subset)){ subset <- eval(substitute(subset), data, parent.frame()) dat <- subset(dat, subset) } if (!missing(tie) && sum(dat$tie) == 0) dat$tie <- NULL if (!is.null(home.adv) && (missing(at.home1) || missing(at.home2))) stop("at.home1 and at.home2 must be specified") if (!is.null(home.adv)){ ## exclude neutral contests, make sure home player is first dat <- subset(dat, at.home1 | at.home2) swap <- which(as.logical(dat$at.home2)) if (length(swap)) { dat$win[swap] <- dat$loss[swap] if (is.null(dat$tie)) dat$loss[swap] <- !dat$win[swap] else dat$loss[swap] <- !(dat$win[swap] | dat$tie[swap]) tmp <- dat$player1[swap] dat$player1[swap] <- dat$player2[swap] dat$player2[swap] <- tmp dat$at.home1[swap] <- TRUE dat$at.home2[swap] <- FALSE } } else home.adv <- 0 ### get proportions p <- with(dat, plogis(home.adv + abilities[as.character(player1)] - abilities[as.character(player2)])) ## Depending on the distribution of p_ij (across all matches), ## divide the range of probabilities p_ij into discrete "bins", each ## of which has at least (say) 20 matches in it getBins <- function(p, bin.size) { ## alternatively estimate bins to same size intervals ## at least bin.size - distribute extra evenly over range min.size <- bin.size n <- length(p) r <- n %% min.size size <- rep(min.size, n %/% min.size) if (r > 0) { step <- length(size)/r extra <- round(seq(from = step/2 + 0.01, to = step/2 + 0.01 + (r - 1)*step, by = step)) size[extra] <- min.size + 1 } bin <- factor(rep(seq(length(size)), size))[match(p, sort(p))] low <- sort(p)[cumsum(c(1, size[-length(size)]))] #first high <- sort(p)[cumsum(size)] #last mid <- (high - low)/2 + low list(bin = bin, mid = mid) } winBin <- getBins(p, bin.size) ## Within each bin b, calculate ## d_b = proportion of matches in that bin that were drawn if (!is.null(dat$tie)) { tieBin <- winBin tri <- with(dat, win - (!win & !tie)) d_b <- tapply(tri, tieBin$bin, function(x) sum(x == 0)/length(x)) ## recompute bins omitting ties winBin <- getBins(p[!dat$tie], bin.size) } ## h_b = proportion of *non-drawn* matches in that bin that were won ## by the home team if (!is.null(dat$tie)) { h_b <- tapply(tri[!dat$tie], winBin$bin, function(x) sum(x == 1)/length(x)) } else h_b <- tapply(dat$win, winBin$bin, function(x) sum(x == 1)/length(x)) ## Plot d_b and h_b against the bin midpoints, in a plot with ## axis limits both (0,1) plot(h_b ~ winBin$mid, xlim = c(0, 1), ylim = c(0, 1), xlab = xlab, ylab = ylab, ...) if (missing(legend)) { if (is.null(dat$tie)) legend <- "Matches won" else legend <- c("Non-tied matches won", "Matches tied") } legend("topleft", legend, col = col[c(1, 2[!missing(tie)])], pch = 1) if (!is.null(dat$tie)) points(d_b ~ tieBin$mid, col = col[2]) ## Add to the plot the lines/curves ## y = x ## y = expit(log(nu * sqrt(p_ij * (1 - p_ij)))) ## The d_b should lie around the latter curve, and the h_b should ## lie around the former line. Any clear patterns of departure are ## of interest. curve(I, 0, 1, add = TRUE) env <- new.env() environment(GenDavidsonTie) <- env coef <- na.omit(c(home.adv = unname(home.adv), tie.max = unname(tie.max), tie.scale = unname(tie.scale), tie.mode = unname(tie.mode))) assign("coef", coef, envir=env) curve(GenDavidsonTie, 0, 1, col = col[2], add = TRUE) out <- list(win = data.frame(prop.win = h_b, bin.win = winBin$mid)) if (!is.null(dat$tie)) out <- c(out, tie = data.frame(prop.tie = d_b, bin.tie = tieBin$mid)) invisible(out) } BradleyTerry2/R/chameleons.R0000744000176200001440000000715213152515662015422 0ustar liggesusers#' Male Cape Dwarf Chameleons: Measured Traits and Contest Outcomes #' #' Data as used in the study by Stuart-Fox et al. (2006). Physical #' measurements made on 35 male Cape dwarf chameleons, and the results of 106 #' inter-male contests. #' #' The published paper mentions 107 contests, but only 106 contests are #' included here. Contest number 16 was deleted from the data used to fit the #' models, because it involved a male whose predictor-variables were incomplete #' (and it was the only contest involving that lizard, so it is uninformative). #' #' @name chameleons #' @docType data #' @format A list containing three data frames: `chameleons$winner`, #' `chameleons$loser` and `chameleons$predictors`. #' #' The `chameleons$winner` and `chameleons$loser` data frames each #' have 106 observations (one per contest) on the following 4 variables: #' \describe{ #' \item{ID}{a factor with 35 levels `C01`, `C02`, #' ... , `C43`, the identity of the winning (or losing) male in each #' contest} #' \item{prev.wins.1}{integer (values 0 or 1), did the #' winner/loser of this contest win in an immediately previous contest?} #' \item{prev.wins.2}{integer (values 0, 1 or 2), how many of his #' (maximum) previous 2 contests did each male win?} #' \item{prev.wins.all}{integer, how many previous contests has each #' male won?} } #' #' The `chameleons$predictors` data frame has 35 observations, one for #' each male involved in the contests, on the following 7 variables: #' \describe{ #' \item{ch.res}{numeric, residuals of casque height regression on #' `SVL`, i.e. relative height of the bony part on the top of the #' chameleons' heads} #' \item{jl.res}{numeric, residuals of jaw length #' regression on `SVL`} #' \item{tl.res}{numeric, residuals of tail #' length regression on `SVL`} #' \item{mass.res}{numeric, residuals #' of body mass regression on `SVL` (body condition)} #' \item{SVL}{numeric, snout-vent length (body size)} #' \item{prop.main}{numeric, proportion (arcsin transformed) of area of #' the flank occupied by the main pink patch on the flank} #' \item{prop.patch}{numeric, proportion (arcsin transformed) of area #' of the flank occupied by the entire flank patch} } #' @author David Firth #' @source The data were obtained by Dr Devi Stuart-Fox, #' http://www.zoology.unimelb.edu.au/research/groups/animal/labs/stuart-fox/index.php, #' and they are reproduced here with her kind permission. #' #' These are the same data that were used in #' #' Stuart-Fox, D. M., Firth, D., Moussalli, A. and Whiting, M. J. (2006) #' Multiple signals in chameleon contests: designing and analysing animal #' contests as a tournament. *Animal Behaviour* **71**, 1263--1271. #' @keywords datasets #' @examples #' #' ## #' ## Reproduce Table 3 from page 1268 of the above paper: #' ## #' summary(chameleon.model <- BTm(player1 = winner, player2 = loser, #' formula = ~ prev.wins.2 + ch.res[ID] + prop.main[ID] + (1|ID), id = "ID", #' data = chameleons)) #' head(BTabilities(chameleon.model)) #' ## #' ## Note that, although a per-chameleon random effect is specified as in the #' ## above [the term "+ (1|ID)"], the estimated variance for that random #' ## effect turns out to be zero in this case. The "prior experience" #' ## effect ["+ prev.wins.2"] in this analysis has explained most of the #' ## variation, leaving little for the ID-specific predictors to do. #' ## Despite that, two of the ID-specific predictors do emerge as #' ## significant. #' ## #' ## Test whether any of the other ID-specific predictors has an effect: #' ## #' add1(chameleon.model, ~ . + jl.res[ID] + tl.res[ID] + mass.res[ID] + #' SVL[ID] + prop.patch[ID]) #' "chameleons" BradleyTerry2/R/GenDavidson.R0000744000176200001440000003117713152515662015511 0ustar liggesusers#' Specify a Generalised Davidson Term in a gnm Model Formula #' #' GenDavidson is a function of class `"nonlin"` to specify a generalised #' Davidson term in the formula argument to [gnm::gnm()], providing a #' model for paired comparison data where ties are a possible outcome. #' #' `GenDavidson` specifies a generalisation of the Davidson model (1970) #' for paired comparisons where a tie is a possible outcome. It is designed for #' modelling trinomial counts corresponding to the win/draw/loss outcome for #' each contest, which are assumed Poisson conditional on the total count for #' each match. Since this total must be one, the expected counts are #' equivalently the probabilities for each possible outcome, which are modelled #' on the log scale: \deqn{\log(p(i \textrm{beats} j)_k) = \theta_{ijk} + #' \log(\mu\alpha_i}{log(p(i beats j)_k) = theta_{ijk} + log(mu * alpha_i)} #' \deqn{\log(p(draw)_k) = \theta_{ijk} + \delta + c + }{ log(p(draw)_k) = #' theta_{ijk} + log(delta) + c + sigma * (pi * log(mu * alpha_i) + (1 - pi) * #' log(alpha_j)) + (1 - sigma) * log(mu * alpha_i + alpha_j) }\deqn{ #' \sigma(\pi\log(\mu\alpha_i) - (1 - \pi)log(\alpha_j)) + }{ log(p(draw)_k) = #' theta_{ijk} + log(delta) + c + sigma * (pi * log(mu * alpha_i) + (1 - pi) * #' log(alpha_j)) + (1 - sigma) * log(mu * alpha_i + alpha_j) }\deqn{ (1 - #' \sigma)(\log(\mu\alpha_i + \alpha_j))}{ log(p(draw)_k) = theta_{ijk} + #' log(delta) + c + sigma * (pi * log(mu * alpha_i) + (1 - pi) * log(alpha_j)) #' + (1 - sigma) * log(mu * alpha_i + alpha_j) } \deqn{\log(p(j \textrm{beats} #' i)_k) = \theta_{ijk} + }{log(p(j beats i)_k) = theta_{ijk} + #' log(alpha_j)}\deqn{ log(\alpha_j)}{log(p(j beats i)_k) = theta_{ijk} + #' log(alpha_j)} Here \eqn{\theta_{ijk}}{theta_{ijk}} is a structural parameter #' to fix the trinomial totals; \eqn{\mu}{mu} is the home advantage parameter; #' \eqn{\alpha_i}{alpha_i} and \eqn{\alpha_j}{alpha_j} are the abilities of #' players \eqn{i} and \eqn{j} respectively; \eqn{c}{c} is a function of the #' parameters such that \eqn{\textrm{expit}(\delta)}{plogis(delta)} is the #' maximum probability of a tie, \eqn{\sigma}{sigma} scales the dependence of #' the probability of a tie on the relative abilities and \eqn{\pi}{pi} allows #' for asymmetry in this dependence. #' #' For parameters that must be positive (\eqn{\alpha_i, \sigma, \mu}{alpha, #' sigma, mu}), the log is estimated, while for parameters that must be between #' zero and one (\eqn{\delta, \pi}), the logit is estimated, as illustrated in #' the example. #' #' @param win a logical vector: `TRUE` if player1 wins, `FALSE` #' otherwise. #' @param tie a logical vector: `TRUE` if the outcome is a tie, #' `FALSE` otherwise. #' @param loss a logical vector: `TRUE` if player1 loses, `FALSE` #' otherwise. #' @param player1 an ID factor specifying the first player in each contest, #' with the same set of levels as `player2`. #' @param player2 an ID factor specifying the second player in each contest, #' with the same set of levels as `player2`. #' @param home.adv a formula for the parameter corresponding to the home #' advantage effect. If `NULL`, no home advantage effect is estimated. #' @param tie.max a formula for the parameter corresponding to the maximum tie #' probability. #' @param tie.scale a formula for the parameter corresponding to the scale of #' dependence of the tie probability on the probability that `player1` #' wins, given the outcome is not a draw. #' @param tie.mode a formula for the parameter corresponding to the location of #' maximum tie probability, in terms of the probability that `player1` #' wins, given the outcome is not a draw. #' @param at.home1 a logical vector: `TRUE` if `player1` is at home, #' `FALSE` otherwise. #' @param at.home2 a logical vector: `TRUE` if `player2` is at home, #' `FALSE` otherwise. #' @return A list with the anticipated components of a "nonlin" function: #' \item{ predictors }{ the formulae for the different parameters and the ID #' factors for player 1 and player 2. } \item{ variables }{ the outcome #' variables and the \dQuote{at home} variables, if specified. } \item{ common #' }{ an index to specify that common effects are to be estimated for the #' players. } \item{ term }{ a function to create a deparsed mathematical #' expression of the term, given labels for the predictors.} \item{ start }{ a #' function to generate starting values for the parameters.} #' @author Heather Turner #' @seealso [football()], [plotProportions()] #' @references Davidson, R. R. (1970). On extending the Bradley-Terry model to #' accommodate ties in paired comparison experiments. *Journal of the #' American Statistical Association*, **65**, 317--328. #' @keywords models nonlinear #' @examples #' #' ### example requires gnm #' if (require(gnm)) { #' ### convert to trinomial counts #' football.tri <- expandCategorical(football, "result", idvar = "match") #' head(football.tri) #' #' ### add variable to indicate whether team playing at home #' football.tri$at.home <- !logical(nrow(football.tri)) #' #' ### fit shifted & scaled Davidson model #' ### - subset to first and last season for illustration #' shifScalDav <- gnm(count ~ #' GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, home.adv = ~1, #' tie.max = ~1, tie.scale = ~1, tie.mode = ~1, #' at.home1 = at.home, #' at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri, #' subset = season %in% c("2008-9", "2012-13")) #' #' ### look at coefs #' coef <- coef(shifScalDav) #' ## home advantage #' exp(coef["home.adv"]) #' ## max p(tie) #' plogis(coef["tie.max"]) #' ## mode p(tie) #' plogis(coef["tie.mode"]) #' ## scale relative to Davidson of dependence of p(tie) on p(win|not a draw) #' exp(coef["tie.scale"]) #' #' ### check model fit #' alpha <- names(coef[-(1:4)]) #' plotProportions(result == 1, result == 0, result == -1, #' home:season, away:season, #' abilities = coef[alpha], home.adv = coef["home.adv"], #' tie.max = coef["tie.max"], tie.scale = coef["tie.scale"], #' tie.mode = coef["tie.mode"], #' at.home1 = at.home, at.home2 = !at.home, #' data = football.tri, subset = count == 1) #' } #' #' ### analyse all five seasons #' ### - takes a little while to run, particularly likelihood ratio tests #' \dontrun{ #' ### fit Davidson model #' Dav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, home.adv = ~1, #' tie.max = ~1, #' at.home1 = at.home, #' at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri) #' #' ### fit scaled Davidson model #' scalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, home.adv = ~1, #' tie.max = ~1, tie.scale = ~1, #' at.home1 = at.home, #' at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri) #' #' ### fit shifted & scaled Davidson model #' shifScalDav <- gnm(count ~ #' GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, home.adv = ~1, #' tie.max = ~1, tie.scale = ~1, tie.mode = ~1, #' at.home1 = at.home, #' at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri) #' #' ### compare models #' anova(Dav, scalDav, shifScalDav, test = "Chisq") #' #' ### diagnostic plots #' main <- c("Davidson", "Scaled Davidson", "Shifted & Scaled Davidson") #' mod <- list(Dav, scalDav, shifScalDav) #' names(mod) <- main #' #' ## use football.tri data so that at.home can be found, #' ## but restrict to actual match results #' par(mfrow = c(2,2)) #' for (i in 1:3) { #' coef <- parameters(mod[[i]]) #' plotProportions(result == 1, result == 0, result == -1, #' home:season, away:season, #' abilities = coef[alpha], #' home.adv = coef["home.adv"], #' tie.max = coef["tie.max"], #' tie.scale = coef["tie.scale"], #' tie.mode = coef["tie.mode"], #' at.home1 = at.home, #' at.home2 = !at.home, #' main = main[i], #' data = football.tri, subset = count == 1) #' } #' } #' #' @importFrom stats coef plogis runif #' @export GenDavidson <- function(win, # TRUE/FALSE tie, # TRUE/FALSE loss, # TRUE/FALSE player1, # player1 in each contest player2, # ditto player2 home.adv = NULL, tie.max = ~1, tie.mode = NULL, tie.scale = NULL, at.home1 = NULL, at.home2 = NULL){ call <- as.expression(sys.call()[c(1,5:6)]) extra <- NULL if (is.null(tie.max)) stop("a formula must be specified for tie.max") if (!is.null(home.adv) & is.null(at.home1)) stop("at.home1 and at.home2 must be specified") has.home.adv <- !is.null(home.adv) has.tie.mode <- !is.null(tie.mode) has.tie.scale <- !is.null(tie.scale) if (has.home.adv) extra <- c(extra, list(home.adv = home.adv)) if (has.tie.mode) extra <- c(extra, list(tie.mode = tie.mode)) if (has.tie.scale) extra <- c(extra, list(tie.scale = tie.scale)) i <- has.home.adv + has.tie.mode + has.tie.scale a <- match("home.adv", names(extra), 1) b <- match("tie.mode", names(extra), 1) c <- match("tie.scale", names(extra), 1) adv <- has.home.adv | has.tie.mode list(predictors = {c(extra, list(tie.max = tie.max, substitute(player1), # player1 & 2 are homogeneous substitute(player2)))}, ## substitutes "result" for "outcome", but also substitutes all of ## code vector variables = {c(list(loss = substitute(loss), tie = substitute(tie), win = substitute(win)), list(at.home1 = substitute(at.home1), at.home2 = substitute(at.home2))[adv])}, common = c(1[has.home.adv], 2[has.tie.mode], 3[has.tie.scale], 4, 5, 5), term = function(predLabels, varLabels){ if (has.home.adv) { ability1 <- paste("(", predLabels[a], ") * ", varLabels[4], " + ", predLabels[i + 2], sep = "") ability2 <- paste("(", predLabels[a], ") * ", varLabels[5], " + ", predLabels[i + 3], sep = "") } else { ability1 <- predLabels[i + 2] ability2 <- predLabels[i + 3] } tie.scale <- ifelse(has.tie.scale, predLabels[c], 0) scale <- paste("exp(", tie.scale, ")", sep = "") if (has.tie.mode) { psi1 <- paste("exp((", predLabels[b], ") * ", varLabels[4], ")", sep = "") psi2 <- paste("exp((", predLabels[b], ") * ", varLabels[5], ")", sep = "") weight1 <- paste(psi1, "/(", psi1, " + ", psi2, ")", sep = "") weight2 <- paste(psi2, "/(", psi1, " + ", psi2, ")", sep = "") } else { weight1 <- weight2 <- "0.5" } nu <- paste(predLabels[i + 1], " - ", scale, " * (", weight1, " * log(", weight1, ") + ", weight2, " * log(", weight2, "))", sep = "") paste(varLabels[1], " * (", ability2, ") + ", varLabels[2], " * (", nu, " + ", scale, " * ", weight1, " * (", ability1, ") + ", scale, " * ", weight2, " * (", ability2, ") + ", "(1 - ", scale, ") * ", "log(exp(", ability1, ") + exp(", ability2, "))) + ", varLabels[3], " * (", ability1, ")", sep = "") }, start = function(theta) { init <- runif(length(theta)) - 0.5 init[c] <- 0.5 } ) } class(GenDavidson) <- "nonlin" BradleyTerry2/R/BTm.setup.R0000744000176200001440000000520413152515662015121 0ustar liggesusers#' @importFrom stats reformulate BTm.setup <- function(outcome = 1, player1, player2, formula = NULL, id = "..", separate.ability = NULL, refcat = NULL, data = NULL, weights = NULL, subset = NULL, offset = NULL, contrasts = NULL, ...){ if (!is.data.frame(data)){ keep <- names(data) %in% c(deparse(substitute(player1)), deparse(substitute(player2))) if (!length(keep)) keep <- FALSE ## save row names for checking against index variables (in Diff) nm <- lapply(data, rownames) data <- c(data[keep], unlist(unname(data[!keep]), recursive = FALSE)) if (any(dup <- duplicated(names(data)))) warning("'data' argument specifies duplicate variable names: ", paste(names(data)[dup], collapse = " ")) } ## (will take first occurence of replicated names) withIfNecessary <- function(x, data, as.data.frame = TRUE) { if (as.data.frame) expr <- substitute(data.frame(x), list(x = x)) else expr <- x if (!is.null(data)) with(data, eval(expr)) else eval(expr) } player1 <- withIfNecessary(substitute(player1), data) player2 <- withIfNecessary(substitute(player2), data) if (ncol(player1) == 1) colnames(player1) <- colnames(player2) <- id Y <- withIfNecessary(substitute(outcome), c(player1, player2, data), as.data.frame = FALSE) weights <- withIfNecessary(substitute(weights), data, FALSE) subset1 <- withIfNecessary(substitute(subset), c(player1 = list(player1), player2 = list(player2), player1, data), FALSE) subset2 <- withIfNecessary(substitute(subset), c(player1 = list(player1), player2 = list(player2), player2, data), FALSE) if (is.logical(subset1)) subset <- subset1 | subset2 else subset <- c(subset1, subset2) if (is.null(formula)) formula <- reformulate(id) diffModel <- Diff(player1, player2, formula, id, data, separate.ability, refcat, contrasts, nm) offset <- withIfNecessary(substitute(offset), data, FALSE) #contest level if (!is.null(offset)) { if (is.null(diffModel$offset)) diffModel$offset <- offset else diffModel$offset <- diffModel$offset + offset } res <- c(diffModel, list(data = data, player1 = player1, player2 = player2, Y = Y, weights = weights, subset = subset, formula = formula)) } BradleyTerry2/R/icehockey.R0000744000176200001440000001074213152515662015246 0ustar liggesusers#' College Hockey Men's Division I 2009-10 results #' #' Game results from American College Hockey Men's Division I composite #' schedule 2009-2010. #' #' The Division I ice hockey teams are arranged in six conferences: Atlantic #' Hockey, Central Collegiate Hockey Association, College Hockey America, ECAC #' Hockey, Hockey East and the Western Collegiate Hockey Association, all part #' of the National Collegiate Athletic Association. The composite schedule #' includes within conference games and between conference games. #' #' The data set here contains only games from the regular season, the results #' of which determine the teams that play in the NCAA national tournament. #' There are six automatic bids that go to the conference tournament champions, #' the remaining 10 teams are selected based upon ranking under the NCAA's #' system of pairwise comparisons #' (http://www.collegehockeynews.com/info/?d=pwcrpi). Some have argued #' that Bradley-Terry rankings would be fairer #' (http://www.collegehockeynews.com/info/?d=krach). #' #' @name icehockey #' @docType data #' @format A data frame with 1083 observations on the following 6 variables. #' \describe{ #' \item{date}{a numeric vector} #' \item{visitor}{a #' factor with 58 levels `Alaska Anchorage` ... `Yale`} #' \item{v_goals}{a numeric vector} #' \item{opponent}{a factor #' with 58 levels `Alaska Anchorage` ... `Yale`} #' \item{o_goals}{a numeric vector} #' \item{conference}{a factor #' with levels `AH`, `CC`, `CH`, `EC`, `HE`, #' `NC`, `WC`} #' \item{result}{a numeric vector: 1 if visitor #' won, 0.5 for a draw and 0 if visitor lost} #' \item{home.ice}{a logical #' vector: 1 if opponent on home ice, 0 if game on neutral ground} } #' @references Schlobotnik, J. Build your own rankings. #' http://slack.net/~whelan/tbrw/2010/rankings.diy.shtml #' #' http://www.collegehockeynews.com #' #' Selections for 2010 NCAA tournament. #' http://sports.espn.go.com/ncaa/news/story?id=5012918 #' @source http://www.collegehockeystats.net/0910/schedules/men #' @keywords datasets #' @examples #' #' ### Fit the standard Bradley-Terry model #' standardBT <- BTm(outcome = result, #' player1 = visitor, player2 = opponent, #' id = "team", data = icehockey) #' #' ## Bradley-Terry abilities #' abilities <- exp(BTabilities(standardBT)[,1]) #' #' ## Compute round-robin winning probability and KRACH ratings #' ## (scaled abilities such that KRACH = 100 for a team with #' ## round-robin winning probability of 0.5) #' rankings <- function(abilities){ #' probwin <- abilities/outer(abilities, abilities, "+") #' diag(probwin) <- 0 #' nteams <- ncol(probwin) #' RRWP <- rowSums(probwin)/(nteams - 1) #' low <- quantile(abilities, 0.45) #' high <- quantile(abilities, 0.55) #' middling <- uniroot(function(x) {sum(x/(x+abilities)) - 0.5*nteams}, #' lower = low, upper = high)$root #' KRACH <- abilities/middling*100 #' cbind(KRACH, RRWP) #' } #' #' ranks <- rankings(abilities) #' ## matches those produced by Joe Schlobotnik's Build Your Own Rankings #' head(signif(ranks, 4)[order(ranks[,1], decreasing = TRUE),]) #' #' ## At one point the NCAA rankings gave more credit for wins on #' ## neutral/opponent's ground. Home ice effects are easily #' ## incorporated into the Bradley-Terry model, comparing teams #' ## on a "level playing field" #' levelBT <- BTm(result, #' data.frame(team = visitor, home.ice = 0), #' data.frame(team = opponent, home.ice = home.ice), #' ~ team + home.ice, #' id = "team", data = icehockey) #' #' abilities <- exp(BTabilities(levelBT)[,1]) #' ranks2 <- rankings(abilities) #' #' ## Look at movement between the two rankings #' change <- factor(rank(ranks2[,1]) - rank(ranks[,1])) #' barplot(xtabs(~change), xlab = "Change in Rank", ylab = "No. Teams") #' #' ## Take out regional winners and look at top 10 #' regional <- c("RIT", "Alabama-Huntsville", "Michigan", "Cornell", "Boston College", #' "North Dakota") #' #' ranks <- ranks[!rownames(ranks) %in% regional] #' ranks2 <- ranks2[!rownames(ranks2) %in% regional] #' #' ## compare the 10 at-large selections under both rankings #' ## with those selected under NCAA rankings #' cbind(names(sort(ranks, decr = TRUE)[1:10]), #' names(sort(ranks2, decr = TRUE)[1:10]), #' c("Miami", "Denver", "Wisconsin", "St. Cloud State", #' "Bemidji State", "Yale", "Northern Michigan", "New Hampshire", #' "Alsaka", "Vermont")) #' #' "icehockey" BradleyTerry2/R/springall.R0000744000176200001440000000565113152515662015301 0ustar liggesusers#' Springall (1973) Data on Subjective Evaluation of Flavour Strength #' #' Data from Section 7 of the paper by Springall (1973) on Bradley-Terry #' response surface modelling. An experiment to assess the effects of gel and #' flavour concentrations on the subjective assessment of flavour strength by #' pair comparisons. #' #' The variables `win.adj` and `loss.adj` are provided in order to #' allow a simple way of handling ties (in which a tie counts as half a win and #' half a loss), which is slightly different numerically from the Rao and #' Kupper (1967) model that Springall (1973) uses. #' #' @name springall #' @docType data #' @format A list containing two data frames, `springall$contests` and #' `springall$predictors`. #' #' The `springall$contests` data frame has 36 observations (one for each #' possible pairwise comparison of the 9 treatments) on the following 7 #' variables: \describe{ #' \item{row}{a factor with levels `1:9`, #' the row number in Springall's dataset} # #' \item{col}{a factor with #' levels `1:9`, the column number in Springall's dataset} #' \item{win}{integer, the number of wins for column treatment over row #' treatment} #' \item{loss}{integer, the number of wins for row treatment #' over column treatment} #' \item{tie}{integer, the number of ties #' between row and column treatments} #' \item{win.adj}{numeric, equal to #' `win + tie/2`} #' \item{loss.adj}{numeric, equal to `loss + tie/2`} } #' #' The `predictors` data frame has 9 observations (one for each treatment) #' on the following 5 variables: \describe{ #' \item{flav}{numeric, the #' flavour concentration} #' \item{gel}{numeric, the gel concentration} #' \item{flav.2}{numeric, equal to `flav^2`} #' \item{gel.2}{numeric, equal to `gel^2`} #' \item{flav.gel}{numeric, equal to `flav * gel`} } #' @author David Firth #' @references Rao, P. V. and Kupper, L. L. (1967) Ties in paired-comparison #' experiments: a generalization of the Bradley-Terry model. *Journal of #' the American Statistical Association*, **63**, 194--204. #' @source Springall, A (1973) Response surface fitting using a generalization #' of the Bradley-Terry paired comparison method. *Applied Statistics* #' **22**, 59--68. #' @keywords datasets #' @examples #' #' ## #' ## Fit the same response-surface model as in section 7 of #' ## Springall (1973). #' ## #' ## Differences from Springall's fit are minor, arising from the #' ## different treatment of ties. #' ## #' ## Springall's model in the paper does not include the random effect. #' ## In this instance, however, that makes no difference: the random-effect #' ## variance is estimated as zero. #' ## #' summary(springall.model <- BTm(cbind(win.adj, loss.adj), col, row, #' ~ flav[..] + gel[..] + #' flav.2[..] + gel.2[..] + flav.gel[..] + #' (1 | ..), #' data = springall)) #' "springall" BradleyTerry2/R/CEMS.R0000744000176200001440000001611113152515662014026 0ustar liggesusers#' Dittrich, Hatzinger and Katzenbeisser (1998, 2001) Data on Management School #' Preference in Europe #' #' *Community of European management schools* (CEMS) data as used in the #' paper by Dittrich et al. (1998, 2001), re-formatted for use with #' [BTm()] #' #' The variables `win1.adj` and `win2.adj` are provided in order to #' allow a simple way of handling ties (in which a tie counts as half a win and #' half a loss), which is slightly different numerically from the Davidson #' (1970) method that is used by Dittrich et al. (1998): see the examples. #' #' @name CEMS #' @docType data #' @format A list containing three data frames, `CEMS$preferences`, #' `CEMS$students` and `CEMS$schools`. #' #' The `CEMS$preferences` data frame has `303 * 15 = 4505` #' observations (15 possible comparisons, for each of 303 students) on the #' following 8 variables: \describe{ #' \item{student}{a factor with #' levels `1:303`} #' \item{school1}{a factor with levels #' `c("Barcelona", "London", "Milano", "Paris", "St.Gallen", #' "Stockholm")`; the first management school in a comparison} #' \item{school2}{a factor with the same levels as `school1`; the #' second management school in a comparison} #' \item{win1}{integer (value #' 0 or 1) indicating whether `school1` was preferred to `school2`} #' \item{win2}{integer (value 0 or 1) indicating whether `school2` #' was preferred to `school1`} #' \item{tied}{integer (value 0 or 1) #' indicating whether no preference was expressed} #' \item{win1.adj}{numeric, equal to `win1 + tied/2`} #' \item{win2.adj}{numeric, equal to `win2 + tied/2`} } #' #' The `CEMS$students` data frame has 303 observations (one for each #' student) on the following 8 variables: \describe{ #' \item{STUD}{a #' factor with levels `c("other", "commerce")`, the student's main #' discipline of study} #' \item{ENG}{a factor with levels `c("good, #' poor")`, indicating the student's knowledge of English} #' \item{FRA}{a #' factor with levels `c("good, poor")`, indicating the student's #' knowledge of French} #' \item{SPA}{a factor with levels `c("good, #' poor")`, indicating the student's knowledge of Spanish} #' \item{ITA}{a #' factor with levels `c("good, poor")`, indicating the student's #' knowledge of Italian} #' \item{WOR}{a factor with levels `c("no", #' "yes")`, whether the student was in full-time employment while studying} #' \item{DEG}{a factor with levels `c("no", "yes")`, whether the #' student intended to take an international degree} #' \item{SEX}{a #' factor with levels `c("female", "male")` } } #' #' The `CEMS$schools` data frame has 6 observations (one for each #' management school) on the following 7 variables: \describe{ #' \item{Barcelona}{numeric (value 0 or 1)} #' \item{London}{numeric (value 0 or 1)} #' \item{Milano}{numeric #' (value 0 or 1)} \item{Paris}{numeric (value 0 or 1)} #' \item{St.Gallen}{numeric (value 0 or 1)} #' \item{Stockholm}{numeric (value 0 or 1)} #' \item{LAT}{numeric #' (value 0 or 1) indicating a 'Latin' city} } #' @author David Firth #' @references Davidson, R. R. (1970) Extending the Bradley-Terry model to #' accommodate ties in paired comparison experiments. *Journal of the #' American Statistical Association* **65**, 317--328. #' #' Dittrich, R., Hatzinger, R. and Katzenbeisser, W. (1998) Modelling the #' effect of subject-specific covariates in paired comparison studies with an #' application to university rankings. *Applied Statistics* **47**, #' 511--525. #' #' Dittrich, R., Hatzinger, R. and Katzenbeisser, W. (2001) Corrigendum: #' Modelling the effect of subject-specific covariates in paired comparison #' studies with an application to university rankings. *Applied #' Statistics* **50**, 247--249. #' #' Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 #' package. *Journal of Statistical Software*, **48**(9), 1--21. #' @source Royal Statistical Society datasets website, at #' http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1467-9876/homepage/47_4.htm. #' @keywords datasets #' @examples #' #' ## #' ## Fit the standard Bradley-Terry model, using the simple 'add 0.5' #' ## method to handle ties: #' ## #' table3.model <- BTm(outcome = cbind(win1.adj, win2.adj), #' player1 = school1, player2 = school2, #' formula = ~.. , refcat = "Stockholm", #' data = CEMS) #' ## The results in Table 3 of Dittrich et al (2001) are reproduced #' ## approximately by a simple re-scaling of the estimates: #' table3 <- summary(table3.model)$coef[, 1:2]/1.75 #' print(table3) #' ## #' ## Now fit the 'final model' from Table 6 of Dittrich et al.: #' ## #' table6.model <- BTm(outcome = cbind(win1.adj, win2.adj), #' player1 = school1, player2 = school2, #' formula = ~ .. + #' WOR[student] * Paris[..] + #' WOR[student] * Milano[..] + #' WOR[student] * Barcelona[..] + #' DEG[student] * St.Gallen[..] + #' STUD[student] * Paris[..] + #' STUD[student] * St.Gallen[..] + #' ENG[student] * St.Gallen[..] + #' FRA[student] * London[..] + #' FRA[student] * Paris[..] + #' SPA[student] * Barcelona[..] + #' ITA[student] * London[..] + #' ITA[student] * Milano[..] + #' SEX[student] * Milano[..], #' refcat = "Stockholm", #' data = CEMS) #' ## #' ## Again re-scale to reproduce approximately Table 6 of Dittrich et #' ## al. (2001): #' ## #' table6 <- summary(table6.model)$coef[, 1:2]/1.75 #' print(table6) #' ## #' \dontrun{ #' ## Now the slightly simplified model of Table 8 of Dittrich et al. (2001): #' ## #' table8.model <- BTm(outcome = cbind(win1.adj, win2.adj), #' player1 = school1, player2 = school2, #' formula = ~ .. + #' WOR[student] * LAT[..] + #' DEG[student] * St.Gallen[..] + #' STUD[student] * Paris[..] + #' STUD[student] * St.Gallen[..] + #' ENG[student] * St.Gallen[..] + #' FRA[student] * London[..] + #' FRA[student] * Paris[..] + #' SPA[student] * Barcelona[..] + #' ITA[student] * London[..] + #' ITA[student] * Milano[..] + #' SEX[student] * Milano[..], #' refcat = "Stockholm", #' data = CEMS) #' table8 <- summary(table8.model)$coef[, 1:2]/1.75 #' ## #' ## Notice some larger than expected discrepancies here (the coefficients #' ## named "..Barcelona", "..Milano" and "..Paris") from the results in #' ## Dittrich et al. (2001). Apparently a mistake was made in Table 8 of #' ## the published Corrigendum note (R. Dittrich personal communication, #' ## February 2010). #' ## #' print(table8) #' } #' "CEMS" BradleyTerry2/R/print.summary.glmmPQL.R0000744000176200001440000000421513152515662017441 0ustar liggesusers#' @importFrom stats naprint printCoefmat symnum #' @export print.summary.BTglmmPQL <- function(x, digits = max(3, getOption("digits") - 3), symbolic.cor = x$symbolic.cor, signif.stars = getOption("show.signif.stars"), ...) { cat("\nCall:\n", deparse(x$call), sep = "", fill = TRUE) p <- length(x$aliased) tidy.zeros <- function(vec) ifelse(abs(vec) < 100 * .Machine$double.eps, 0, vec) if (p == 0) { cat("\nNo Fixed Effects\n") } else { if (nsingular <- p - x$rank) { cat("\nFixed Effects: (", nsingular, " not defined because of singularities)\n", sep = "") cn <- names(x$aliased) pars <- matrix(NA, p, 4, dimnames = list(cn, colnames(x$fixef))) pars[!x$aliased, ] <- tidy.zeros(x$fixef) } else { cat("\nFixed Effects:\n") pars <- tidy.zeros(x$fixef) } printCoefmat(pars, digits = digits, signif.stars = signif.stars, na.print = "NA", ...) } cat("\n(Dispersion parameter for ", x$family$family, " family taken to be 1)\n", sep = "") cat("\nRandom Effects:\n") pars <- tidy.zeros(x$ranef) printCoefmat(pars, digits = digits, signif.stars = signif.stars, na.print = "NA", ...) if (nzchar(mess <- naprint(x$na.action))) cat("\n", mess, "\n", sep = "") cat("\nNumber of iterations: ", x$iter, "\n", sep = "") correl <- x$correlation if (!is.null(correl)) { if (x$rank > 1) { cat("\nCorrelation of Coefficients:\n") if (is.logical(symbolic.cor) && symbolic.cor) { print(symnum(correl, abbr.colnames = NULL)) } else { correl <- format(round(correl, 2), nsmall = 2, digits = digits) correl[!lower.tri(correl)] <- "" print(correl[-1, -x$rank, drop = FALSE], quote = FALSE) } } } cat("\n") invisible(x) } BradleyTerry2/R/drop1.BTm.R0000744000176200001440000000733613152515662015016 0ustar liggesusers#' @importFrom stats coef drop.scope model.matrix formula pchisq pf terms update.formula vcov #' @export drop1.BTm <- function(object, scope, scale = 0, test = c("none", "Chisq", "F"), ...) { x <- model.matrix(object) ## Pass on if no random effects if (is.null(object$random)){ object$x <- x attr(object$x, "assign") <- object$assign object$terms <- terms(object$formula) return(NextMethod()) } form <- formula(object) if (missing(scope)) scope <- drop.scope(nobars(form)) else { if (!is.character(scope)) { srandom <- findbars(scope[[2]]) if (length(srandom)) stop("Scope should not include random effects.") scope <- attr(terms(update.formula(form, scope)), "term.labels") } if (!all(match(scope, terms(form), 0L) > 0L)) stop("scope is not a subset of term labels") } asgn <- object$assign coefs <- coef(object) if (scale == 0) dispersion <- 1 else dispersion <- scale vc <- vcov(object, dispersion = dispersion) #vcov should handle disp != 1 sTerms <- vapply(strsplit(scope, ":", fixed = TRUE), function(x) paste(sort(x), collapse = ":"), character(1)) stat <- df <- numeric(length(scope)) names(stat) <- names(df) <- as.character(lapply(scope, as.name)) tryerror <- FALSE for (i in seq(scope)) { stt <- paste(sort(strsplit(scope[i], ":")[[1]]), collapse = ":") usex <- match(asgn, match(stt, sTerms), 0) > 0 trystat <- try(t(coefs[usex]) %*% chol2inv(chol(vc[usex, usex])) %*% coefs[usex], silent = TRUE) if (inherits(trystat, "try-error")) { stat[i] <- df[i] <- NA tryerror <- TRUE } else { stat[i] <- trystat df[i] <- sum(usex) } } table <- data.frame(stat, df) dimnames(table) <- list(names(df), c("Statistic", "Df")) title <- "Single term deletions\n" topnote <- gsub("\\s+", " ", paste("Model: ", paste(deparse(as.vector(formula(object))), collapse = ""), if (scale > 0) paste("\nscale: ", format(scale), "\n"), if (tryerror) "\n\nTest statistic unestimable for at least one term"), perl = TRUE) test <- match.arg(test) if (test == "Chisq") { dfs <- table[, "Df"] vals <- table[, "Statistic"] vals[dfs %in% 0] <- NA table <- cbind(table, `P(>|Chi|)` = pchisq(vals, abs(dfs), lower.tail = FALSE)) } else if (test == "F") { ## Assume dispersion fixed at one - if dispersion estimated, would use ## "residual" df from larger model in each comparison df.dispersion <- Inf if (df.dispersion == Inf) { fam <- object[[1]]$family$family if (fam == "binomial" || fam == "poisson") warning(gettextf("using F test with a '%s' family is ", "inappropriate", fam), domain = NA, call. = FALSE) else { warning("using F test with a fixed dispersion is inappropriate") } } dfs <- table[, "Df"] Fvalue <- table[, "Statistic"]/abs(dfs) Fvalue[dfs %in% 0] <- NA table <- cbind(table, F = Fvalue, `Pr(>F)` = pf(Fvalue, abs(dfs), df.dispersion, lower.tail = FALSE)) } structure(table, heading = c(title, topnote), class = c("anova", "data.frame")) } BradleyTerry2/R/football.R0000744000176200001440000000431213152515662015101 0ustar liggesusers#' English Premier League Football Results 2008/9 to 2012/13 #' #' The win/lose/draw results for five seasons of the English Premier League #' football results, from 2008/9 to 2012/13 #' #' In each season, there are 20 teams, each of which plays one home game and #' one away game against all the other teams in the league. The results in 380 #' games per season. #' #' @name football #' @docType data #' @format A data frame with 1881 observations on the following 4 variables. #' \describe{ #' \item{season}{a factor with levels `2008-9`, #' `2009-10`, `2010-11`, `2011-12`, `2012-13`} #' \item{home}{a factor specifying the home team, with 29 levels #' `Ars` (Arsenal), ... , `Wol` (Wolverhampton)} #' \item{away}{a factor specifying the away team, with the same levels #' as `home`.} #' \item{result}{a numeric vector giving the result #' for the home team: 1 for a win, 0 for a draw, -1 for a loss.} } #' @seealso [GenDavidson()] #' @references Davidson, R. R. (1970). On extending the Bradley-Terry model to #' accommodate ties in paired comparison experiments. *Journal of the #' American Statistical Association*, **65**, 317--328. #' @source These data were downloaded from http://soccernet.espn.go.com in #' 2013. The site has since moved and the new site does not appear to have an #' equivalent source. #' @keywords datasets #' @examples #' #' ### example requires gnm #' if (require(gnm)) { #' ### convert to trinomial counts #' football.tri <- expandCategorical(football, "result", idvar = "match") #' head(football.tri) #' #' ### add variable to indicate whether team playing at home #' football.tri$at.home <- !logical(nrow(football.tri)) #' #' ### fit Davidson model for ties #' ### - subset to first and last season for illustration #' Davidson <- gnm(count ~ #' GenDavidson(result == 1, result == 0, result == -1, #' home:season, away:season, #' home.adv = ~1, tie.max = ~1, #' at.home1 = at.home, at.home2 = !at.home) - 1, #' eliminate = match, family = poisson, data = football.tri, #' subset = season %in% c("2008-9", "2012-13")) #' #' ### see ?GenDavidson for further analysis #' } #' "football" BradleyTerry2/R/countsToBinomial.R0000744000176200001440000000354013152515662016572 0ustar liggesusers#' Convert Contingency Table of Wins to Binomial Counts #' #' Convert a contingency table of wins to a four-column data frame containing #' the number of wins and losses for each pair of players. #' #' #' @param xtab a contingency table of wins cross-classified by \dQuote{winner} #' and \dQuote{loser} #' @return A data frame with four columns \item{player1 }{ the first player in #' the contest. } \item{player2 }{ the second player in the contest. } #' \item{win1 }{ the number of times `player1` won. } \item{win2 }{ the #' number of times `player2` won. } #' @author Heather Turner #' @seealso [BTm()] #' @keywords models #' @examples #' #' ######################################################## #' ## Statistics journal citation data from Stigler (1994) #' ## -- see also Agresti (2002, p448) #' ######################################################## #' citations #' #' ## Convert frequencies to success/failure data #' citations.sf <- countsToBinomial(citations) #' names(citations.sf)[1:2] <- c("journal1", "journal2") #' citations.sf #' #' @importFrom gtools combinations #' @export countsToBinomial <- function(xtab) { ## make square if necessary if (nrow(xtab) != ncol(xtab) || rownames(xtab) != colnames(xtab)) { dat <- as.data.frame(xtab) lev <- union(rownames(xtab), colnames(xtab)) dat[,1] <- factor(dat[,1], levels = lev) dat[,2] <- factor(dat[,2], levels = lev) xtab <- tapply(dat[,3], dat[1:2], sum) xtab[is.na(xtab)] <- 0 } ##assumes square players <- rownames(xtab) comb <- combinations(nrow(xtab), 2) won <- xtab[comb] lost <- t(xtab)[comb] res <- !(won == 0 & lost == 0) player1 <- factor(players[comb[,1]], levels = players)[res] player2 <- factor(players[comb[,2]], levels = players)[res] data.frame(player1, player2, win1 = won[res], win2 = lost[res]) } BradleyTerry2/R/BTabilities.R0000744000176200001440000001714613160030365015472 0ustar liggesusers#' Estimated Abilities from a Bradley-Terry Model #' #' Computes the (baseline) ability of each player from a model object of class #' `"BTm"`. #' #' The player abilities are either directly estimated by the model, in which #' case the appropriate parameter estimates are returned, otherwise the #' abilities are computed from the terms of the fitted model that involve #' player covariates only (those indexed by `model$id` in the model #' formula). Thus parameters in any other terms are assumed to be zero. If one #' player has been set as the reference, then `predict.BTm()` can be used to #' obtain ability estimates with non-player covariates set to other values, #' see examples for [predict.BTm()]. #' #' If the abilities are structured according to a linear predictor, and if #' there are player covariates with missing values, the abilities for the #' corresponding players are estimated as separate parameters. In this event #' the resultant matrix has an attribute, named `"separate"`, which #' identifies those players whose ability was estimated separately. For an #' example, see [flatlizards()]. #' #' @aliases BTabilities print.BTabilities coef.BTabilities vcov.BTabilities #' @param model a model object for which `inherits(model, "BTm")` is #' `TRUE` #' @return A two-column numeric matrix of class `c("BTabilities", #' "matrix")`, with columns named `"ability"` and `"se"`; has one row #' for each player; has attributes named `"vcov"`, `"modelcall"`, #' `"factorname"` and (sometimes --- see below) `"separate"`. The #' first three attributes are not printed by the method #' `print.BTabilities`. #' #' @author David Firth and Heather Turner #' @seealso [BTm()], [residuals.BTm()] #' @references Firth, D. (2005) Bradley-Terry models in R. *Journal of #' Statistical Software*, **12**(1), 1--12. #' #' Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 #' package. *Journal of Statistical Software*, **48**(9), 1--21. #' @keywords models #' @examples #' #' ### citations example #' #' ## Convert frequencies to success/failure data #' citations.sf <- countsToBinomial(citations) #' names(citations.sf)[1:2] <- c("journal1", "journal2") #' #' ## Fit the "standard" Bradley-Terry model #' citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) #' BTabilities(citeModel) #' #' ### baseball example #' #' data(baseball) # start with baseball data as provided by package #' #' ## Fit mode with home advantage #' baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) #' baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) #' baseballModel2 <- BTm(cbind(home.wins, away.wins), home.team, away.team, #' formula = ~ team + at.home, id = "team", #' data = baseball) #' ## Estimate abilities for each team, relative to Baltimore, when #' ## playing away from home: #' BTabilities(baseballModel2) #' #' @importFrom stats C contrasts model.frame model.matrix model.offset na.exclude na.pass terms reformulate relevel vcov #' @export BTabilities <- function (model) { if (!inherits(model, "BTm")) stop("model is not of class BTm") X0 <- model.matrix(model) player1 <- model$player1[, model$id] player.names <- levels(player1) factors <- attr(terms(model$formula), "factors") if (!(model$id %in% rownames(factors))) { players <- data.frame(factor(seq(player.names), labels = player.names)) names(players) <- model$id ## assume player covariates indexed by id fixed <- nobars(model$formula) factors <- attr(terms(fixed), "factors") vars <- rownames(factors) by.id <- grep(paste("[", model$id, "]", sep = ""), vars, fixed = TRUE) drop <- setdiff(seq(length(vars)), by.id) ## following will only work for linear terms ## (drop any term involving non-player covariate) keep <- colSums(factors[drop, , drop = FALSE]) == 0 formula <- reformulate(names(keep)[keep]) mf <- model.frame(terms(formula), data = c(players, model$data), na.action = na.pass) rownames(mf) <- player.names players <- players[, model$id] offset <- model.offset(mf) if (is.null(offset)) offset <- 0 predvars <- setdiff(seq(ncol(mf)), attr(attr(mf, "terms"), "offset")) predvars <- reformulate(colnames(mf)[predvars]) X <- model.matrix(predvars, mf) Xmiss <- is.na(rowSums(X)) | players %in% model$separate.ability X[Xmiss, ] <- 0 X <- X[, -1, drop = FALSE] separate.ability <- unique(union(players[Xmiss], model$separate.ability)) ns <- length(separate.ability) if (ns) { S <- matrix(0, nrow = nrow(X), ncol = ns) S[cbind(which(players %in% separate.ability), seq(ns))] <- 1 X <- cbind(S, X) } ## remove inestimable coef est <- !is.na(model$coef) kept <- model$assign[est] %in% c(0, which(keep)) est <- est[kept] X <- X[, est, drop = FALSE] sqrt.vcov <- chol(vcov(model)[kept, kept]) V <- crossprod(sqrt.vcov %*% t(X)) se <- sqrt(diag(V)) abilities <- cbind(X %*% coef(model)[est][kept] + offset, se) attr(abilities, "vcov") <- V if (length(separate.ability)) { attr(abilities, "separate") <- separate.ability } } else { ## get ability coef and corresponding vcov asgn <- model$assign if (is.null(asgn)) abilities <- TRUE else { idterm <- attr(terms(model$formula), "term.labels") == model$id if (!any(idterm)) stop("abilities not uniquely defined for this parameterization") coefs.to.include <- asgn == which(idterm) vcov.to.include <- asgn[!is.na(coef(model))] == which(idterm) } coef <- na.exclude(coef(model)[coefs.to.include]) vc <- vcov(model)[names(coef), names(coef), drop = FALSE] ## setup factor reflecting contrasts used .. fac <- factor(player.names, levels = player.names) if (!is.null(model$refcat)) { fac <- C(relevel(fac, model$refcat), "contr.treatment") } else fac <- C(fac, model$contrasts[[model$id]]) contr <- contrasts(fac)[player.names,] ## calc abilities and s.e., fill in NA as necessary if (!is.null(attr(coef, "na.action"))) { contr <- contr[, -attr(coef, "na.action"), drop = FALSE] } est <- contr %*% coef ## vc of contrasts for use with qvcalc vc <- contr %*% vc %*% t(contr) se <- sqrt(diag(vc)) if (!is.null(attr(coef, "na.action"))){ id <- match(names(attr(coef, "na.action")), paste0(model$id, rownames(contr))) est[id] <- se[id] <- NA } abilities <- cbind(est, se) rownames(abilities) <- player.names attr(abilities, "vcov") <- vc } colnames(abilities) <- c("ability", "s.e.") attr(abilities, "modelcall") <- model$call attr(abilities, "factorname") <- model$id class(abilities) <- c("BTabilities", "matrix") abilities } #' @export print.BTabilities <- function(x, ...) { attr(x, "vcov") <- attr(x, "modelcall") <- attr(x, "factorname") <- NULL class(x) <- "matrix" print(x) ## ie, print without showing the messy attributes } #' @export vcov.BTabilities <- function(object, ...) { attr(object, "vcov") } #' @export coef.BTabilities <- function(object, ...) { object[, "ability"] } BradleyTerry2/R/predict.BTglmmPQL.R0000744000176200001440000002146713152515662016501 0ustar liggesusers#' Predict Method for BTglmmPQL Objects #' #' Obtain predictions and optionally standard errors of those predictions from #' a `"BTglmmPQL"` object. #' #' If `newdata` is omitted the predictions are based on the data used for #' the fit. In that case how cases with missing values in the original fit are #' treated is determined by the `na.action` argument of that fit. If #' `na.action = na.omit` omitted cases will not appear in the residuals, #' whereas if `na.action = na.exclude` they will appear (in predictions #' and standard errors), with residual value `NA`. See also #' `napredict`. #' #' Standard errors for the predictions are approximated assuming the variance #' of the random effects is known, see Booth and Hobert (1998). #' #' @param object a fitted object of class `"BTglmmPQL"` #' @param newdata (optional) a data frame in which to look for variables with #' which to predict. If omitted, the fitted linear predictors are used. #' @param newrandom if `newdata` is provided, a corresponding design #' matrix for the random effects, will columns corresponding to the random #' effects estimated in the original model. #' @param level an integer vector giving the level(s) at which predictions are #' required. Level zero corresponds to population-level predictions (fixed #' effects only), whilst level one corresponds to the individual-level #' predictions (full model) which are NA for contests involving individuals not #' in the original data. By default `level = 0` if the model converged to a #' fixed effects model, `1` otherwise. #' @param type the type of prediction required. The default is on the scale of #' the linear predictors; the alternative `"response"` is on the scale of #' the response variable. Thus for a default binomial model the default #' predictions are of log-odds (probabilities on logit scale) and `type = #' "response"` gives the predicted probabilities. The `"terms"` option #' returns a matrix giving the fitted values of each term in the model formula #' on the linear predictor scale (fixed effects only). #' @param se.fit logical switch indicating if standard errors are required. #' @param terms with `type ="terms"` by default all terms are returned. A #' character vector specifies which terms are to be returned. #' @param na.action function determining what should be done with missing #' values in `newdata`. The default is to predict `NA`. #' @param \dots further arguments passed to or from other methods. #' @return If `se.fit = FALSE`, a vector or matrix of predictions. If #' `se = TRUE`, a list with components \item{fit }{Predictions} #' \item{se.fit }{Estimated standard errors} #' @author Heather Turner #' @seealso [predict.glm()], [predict.BTm()] #' @references Booth, J. G. and Hobert, J. P. (1998). Standard errors of #' prediction in Generalized Linear Mixed Models. *Journal of the American #' Statistical Association* **93**(441), 262 -- 272. #' @keywords models #' @examples #' #' seedsModel <- glmmPQL(cbind(r, n - r) ~ seed + extract, #' random = diag(nrow(seeds)), #' family = binomial, #' data = seeds) #' #' pred <- predict(seedsModel, level = 0) #' predTerms <- predict(seedsModel, type = "terms") #' #' all.equal(pred, rowSums(predTerms) + attr(predTerms, "constant")) #' #' @importFrom stats .checkMFClasses coef delete.response family model.frame model.matrix na.exclude na.pass napredict #' @export predict.BTglmmPQL <- function(object, newdata = NULL, newrandom = NULL, level = ifelse(object$sigma == 0, 0, 1), type = c("link", "response", "terms"), se.fit = FALSE, terms = NULL, na.action = na.pass, ...) { ## only pass on if a glm if (object$sigma == 0) { if (level != 0) warning("Fixed effects model: setting level to 0") return(NextMethod()) } if (!all(level %in% c(0, 1))) stop("Only level %in% c(0, 1) allowed") type <- match.arg(type) if (!is.null(newdata) || type == "terms") tt <- terms(object) if (!is.null(newdata)) { ## newdata should give variables in terms formula Terms <- delete.response(tt) m <- model.frame(Terms, newdata, na.action = na.action, xlev = object$xlevels) na.action <- attr(m, "na.action") if (!is.null(cl <- attr(Terms, "dataClasses"))) .checkMFClasses(cl, m) D <- model.matrix(Terms, m, contrasts.arg = object$contrasts) np <- nrow(D) # n predictions offset <- rep(0, np) if (!is.null(off.num <- attr(tt, "offset"))) for (i in off.num) offset <- offset + eval(attr(tt, "variables")[[i + 1]], newdata) if (!is.null(object$call$offset)) offset <- offset + eval(object$call$offset, newdata) } else { D <- model.matrix(object) newrandom <- object$random na.action <- object$na.action offset <- object$offset } cf <- coef(object) keep <- !is.na(cf) aa <- attr(D, "assign")[keep] cf <- cf[keep] D <- D[, keep, drop = FALSE] if (se.fit == TRUE) { sigma <- object$sigma w <- sqrt(object$weights) wX <- w * model.matrix(object)[, keep] wZ <- w * object$random XWX <- crossprod(wX) XWZ <- crossprod(wX, wZ) ZWZ <- crossprod(wZ, wZ) diag(ZWZ) <- diag(ZWZ) + 1/sigma^2 K <- cbind(XWX, XWZ) K <- chol(rbind(K, cbind(t(XWZ), ZWZ))) if (type == "terms" || level == 0){ ## work out (chol of inverse of) topleft of K-inv directly A <- backsolve(chol(ZWZ), t(XWZ), transpose = TRUE) A <- chol(XWX - t(A) %*% A) } } if (type == "terms") { # ignore level if (1 %in% level) warning("type = \"terms\": setting level to 0", call. = FALSE) ll <- attr(tt, "term.labels") if (!is.null(terms)) { include <- ll %in% terms ll <- ll[include] } hasintercept <- attr(tt, "intercept") > 0L if (hasintercept) { avx <- colMeans(model.matrix(object)) termsconst <- sum(avx * cf) #NA coefs? D <- sweep(D, 2, avx) } pred0 <- matrix(ncol = length(ll), nrow = NROW(D)) colnames(pred0) <- ll if (se.fit) { A <- chol2inv(A) se.pred0 <- pred0 } for (i in seq(length.out = length(ll))){ ind <- aa == which(attr(tt, "term.labels") == ll[i]) pred0[, i] <- D[, ind, drop = FALSE] %*% cf[ind] if (se.fit) { se.pred0[, i] <- sqrt(diag(D[, ind] %*% tcrossprod(A[ind, ind], D[, ind]))) } } if (hasintercept) attr(pred0, "constant") <- termsconst if (se.fit) return(list(fit = pred0, se.fit = se.pred0)) return(pred0) } if (0 %in% level) { pred0 <- napredict(na.action, c(D %*% cf) + offset) if (type == "response") pred0 <- family(object)$linkinv(pred0) if (se.fit == TRUE) { na.act <- attr(na.exclude(pred0), "na.action") H <- backsolve(A, t(na.exclude(D)), transpose = TRUE) ## se.pred0 <- ## sqrt(diag(D %*% chol2inv(K)[1:ncol(D), 1:ncol(D)] %*% t(D))) se.pred0 <- napredict(na.action, napredict(na.act, sqrt(colSums(H^2)))) if (type == "response") se.pred0 <- se.pred0*abs(family(object)$mu.eta(pred0)) pred0 <- list(fit = pred0, se.fit = se.pred0) } if (identical(level, 0)) return(pred0) } r <- nrow(D) ## newrandom should give new design matrix for original random effects if (!is.null(newdata)){ if(is.null(newrandom)) stop("newdata specified without newrandom") if (!is.null(na.action)) newrandom <- newrandom[-na.action, , drop = FALSE] } if (!identical(dim(newrandom), c(r, ncol(object$random)))) stop("newrandom should have ", r, " rows and ", ncol(object$random), " columns") D <- cbind(D, newrandom) cf <- c(cf, attr(coef(object), "random")) pred <- napredict(na.action, c(D %*% cf) + offset) if (type == "response") pred <- family(object)$linkinv(pred) if (se.fit == TRUE) { ##se.pred <- sqrt(diag(D %*% chol2inv(K) %*% t(D))) na.act <- attr(na.exclude(pred), "na.action") H <- backsolve(K, t(na.exclude(D)), transpose = TRUE) se.pred <- napredict(na.action, napredict(na.act, sqrt(colSums(H^2)))) if (type == "response") se.pred <- se.pred*abs(family(object)$mu.eta(pred)) pred <- list(fit = pred, se.fit = se.pred) } if (0 %in% level) list(population = pred0, individual = pred) else pred } BradleyTerry2/R/anova.BTm.R0000744000176200001440000002142113152515662015064 0ustar liggesusers#' Compare Nested Bradley Terry Models #' #' Compare nested models inheriting from class `"BTm"`. For models with no #' random effects, compute analysis of deviance table, otherwise compute Wald #' tests of additional terms. #' #' For models with no random effects, an analysis of deviance table is computed #' using [anova.glm()]. Otherwise, Wald tests are computed as #' detailed here. #' #' If a single object is specified, terms are added sequentially and a Wald #' statistic is computed for the extra parameters. If the full model includes #' player covariates and there are players with missing values over these #' covariates, then the `NULL` model will include a separate ability for #' these players. If there are missing values in any contest-level variables in #' the full model, the corresponding contests will be omitted throughout. The #' random effects structure of the full model is assumed for all sub-models. #' #' For a list of objects, consecutive pairs of models are compared by computing #' a Wald statistic for the extra parameters in the larger of the two models. #' #' The Wald statistic is always based on the variance-covariance matrix of the #' larger of the two models being compared. #' #' @param object a fitted object of class inheriting from `"BTm"`. #' @param ... additional `"BTm"` objects. #' @param dispersion a value for the dispersion. Not implemented for models #' with random effects. #' @param test optional character string (partially) matching one of #' `"Chisq"`, `"F"` or `"Cp"` to specify that p-values should be #' returned. The Chisq test is a likelihood ratio test for models with no #' random effects, otherwise a Wald test. Options `"F"` and `"Cp"` #' are only applicable to models with no random effects, see #' [stat.anova()]. #' @return An object of class `"anova"` inheriting from class #' `"data.frame"`. #' @section Warning: The comparison between two or more models will only be #' valid if they are fitted to the same dataset. This may be a problem if there #' are missing values and 's default of `na.action = na.omit` is used. An #' error will be returned in this case. #' #' The same problem will occur when separate abilities have been estimated for #' different subsets of players in the models being compared. However no #' warning is given in this case. #' @author Heather Turner #' @seealso [BTm()], [add1.BTm()] #' @keywords models #' @examples #' #' result <- rep(1, nrow(flatlizards$contests)) #' BTmodel <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + (1|..), data = flatlizards, #' trace = TRUE) #' anova(BTmodel) #' #' @export anova.BTm <- function (object, ..., dispersion = NULL, test = NULL) { ## Only list models in ... dotargs <- list(...) named <- if (is.null(names(dotargs))) rep(FALSE, length(dotargs)) else (names(dotargs) != "") if (any(named)) warning("the following arguments to 'anova.BTm' are invalid and ", "dropped: ", paste(deparse(dotargs[named]), collapse = ", ")) dotargs <- dotargs[!named] is.BTm <- unlist(lapply(dotargs, function(x) inherits(x, "BTm"))) dotargs <- dotargs[is.BTm] ## Compare list of models models <- c(list(object), dotargs) if (length(dotargs) > 0){ fixed <- unlist(lapply(models, function(x) is.null(x$random))) if (all(fixed)) { variables <- lapply(models, function(x) paste(deparse(formula(x)), collapse = "\n")) models <- lapply(models, function(x) { x$formula <- formula(x$terms) class(x) <- setdiff(class(x), "BTm") x}) call <- match.call() anova.table <- do.call("anova", c(models, call$dispersion, call$test)) attr(anova.table, "heading") <- c(paste("Analysis of Deviance Table\n\n", "Response: ", deparse(object$call$outcome, 500), "\n", sep = ""), paste("Model ", format(seq(models)), ": ", variables, sep = "", collapse = "\n")) return(anova.table) } else return(anova.BTmlist(c(list(object), dotargs), dispersion = dispersion, test = test)) } X <- model.matrix(object) Z <- object$random sep <- 0 %in% object$assign ## Passing on to glm when no random effects if (is.null(Z)) { object$x <- X attr(object$x, "assign") <- object$assign + sep attr(object$terms, "term.labels") <- c("[sep]"[sep], object$term.labels) anova.table <- NextMethod() attr(anova.table, "heading") <- paste("Analysis of Deviance Table", "\n\nModel: ", object$family$family, ", link: ", object$family$link, "\n\nResponse: ", deparse(object$call$outcome, 500), "\n\nTerms added sequentially (first to last)\n\n", sep = "") if (sep) { anova.table <- anova.table[-1,] rownames(anova.table)[1] <- "NULL" anova.table[1, 1:2] <- NA } return(anova.table) } varseq <- object$assign nvars <- max(0, varseq) stat <- df <- numeric(nvars) tryerror <- FALSE if (nvars > 1) { y <- object$y ## Extension to further methods method <- object$method if (!is.function(method)) method <- get(method, mode = "function") control <- object$control control$trace <- FALSE for (i in 1:(nvars - 1)) { fit <- method(X = X[, varseq <= i, drop = FALSE], y = y, Z = Z, weights = object$prior.weights, start = object$start, offset = object$offset, family = object$family, control = control, sigma = object$call$sigma, sigma.fixed = object$sigma.fixed) class(fit) <- oldClass(object) ind <- (varseq == i)[varseq <= i] trystat <- try(t(coef(fit)[ind]) %*% chol2inv(chol(suppressMessages( #vcov should deal with dispersion != 1 vcov(fit, dispersion = dispersion))[ind, ind])) %*% coef(fit)[ind], silent = TRUE) if (inherits(trystat, "try-error")) { stat[i] <- df[i] <- NA tryerror <- TRUE } else { stat[i] <- trystat df[i] <- sum(ind) } } } ind <- varseq == nvars trystat <- try(t(coef(object)[ind]) %*% chol2inv(chol(object$varFix[ind, ind])) %*% coef(object)[ind], silent = TRUE) if (inherits(trystat, "try-error")) { stat[nvars] <- df[nvars] <- NA tryerror <- TRUE } else { stat[nvars] <- trystat df[nvars] <- sum(ind) } table <- data.frame(c(NA, stat), c(NA, df)) dimnames(table) <- list(c("NULL", object$term.labels), c("Statistic", "Df")) title <- paste("Sequential Wald Tests", "\n\nModel: ", object$family$family, ", link: ", object$family$link, "\n\nResponse: ", deparse(object$call$outcome, 500), "\n\nPredictor: ", paste(formula(object), collapse = ""), "\n\nTerms added sequentially (first to last)", if (tryerror) "\n\nTest statistic unestimable for at least one term", "\n", sep = "") ## Assume dispersion fixed at one - if dispersion estimated, would use ## "residual" df from larger model in each comparison df.dispersion <- Inf if (!is.null(test)) { if (test == "F" && df.dispersion == Inf) { fam <- object$family$family if (fam == "binomial" || fam == "poisson") warning(gettextf("using F test with a %s family is ", "inappropriate", fam), domain = NA) else { warning("using F test with a fixed dispersion is inappropriate") } } table <- switch(test, Chisq = { dfs <- table[, "Df"] vals <- table[, "Statistic"] vals[dfs %in% 0] <- NA cbind(table, `P(>|Chi|)` = pchisq(vals, dfs, lower.tail = FALSE)) }, F = { dfs <- table[, "Df"] Fvalue <- table[, "Statistic"]/dfs Fvalue[dfs %in% 0] <- NA cbind(table, F = Fvalue, `Pr(>F)` = pf(Fvalue, dfs, df.dispersion, lower.tail = FALSE)) }) } structure(table, heading = title, class = c("anova", "data.frame")) } BradleyTerry2/R/glmmPQL.fit.R0000744000176200001440000002115413152515662015374 0ustar liggesusers#' @importFrom utils flush.console glmmPQL.fit <- function(X, y, Z, weights = rep(1, NROW(y)), start = NULL, etastart = NULL, mustart = NULL, offset = rep(0, NROW(y)), family = gaussian(), control = glmmPQL.control(...), sigma = NULL, sigma.fixed = FALSE, ...) { matchCall <- as.list(match.call(expand.dots = FALSE)) dots <- names(matchCall[["..."]]) dots <- intersect(dots, setdiff(names(formals(glm)), "control")) fit0 <- do.call("glm.fit", c(list(X, y, weights, start = start, etastart = etastart, mustart = mustart, offset = offset, family = family, control = glm.control()), matchCall[dots])) w <- fit0$prior.weights # QR missing from glm.fit if ncol(X) = 0 QR <- qr(X) R <- qr.R(QR) rank <- QR$rank p <- ncol(R) nm <- colnames(R)[seq(length = rank)] if (rank < p) { X0 <- X[,colnames(R)[-seq(length = rank)]] X <- X[, nm] } empty <- !length(X) if (empty) { alpha <- numeric(0) Xa <- matrix(0, length(y), 1) } eta <- fit0$linear.predictors residuals <- fit0$residuals Y <- eta + residuals - offset #working response wy <- fit0$weights # iterative weights wY <- sqrt(wy) * Y wZ <- sqrt(wy) * Z ZWy <- crossprod(wZ, wY) ZWZ <- crossprod(wZ, wZ) if (!empty) { wX <- sqrt(wy) * X XWy <- crossprod(wX, wY) XWX <- crossprod(wX, wX) ZWX <- crossprod(wZ, wX) E <- chol(XWX) J <- backsolve(E, t(ZWX), transpose = TRUE) f <- backsolve(E, XWy, transpose = TRUE) ZSy <- ZWy - crossprod(J, f) ZSZ <- ZWZ - crossprod(J, J) } if (is.null(sigma)) sigma <- 0.1 logtheta <- log(sigma^2) conv <- FALSE for (i in 1:control$maxiter) { ## Update coefficients for (j in 1:control$IWLSiter) { IZWZD <- ZWZ * sigma^2 diag(IZWZD) <- 1 + diag(IZWZD) A <- chol(IZWZD) if (!empty) { IZSZD <- ZSZ * sigma^2 diag(IZSZD) <- 1 + diag(IZSZD) G <- chol(IZSZD) g <- backsolve(G, ZSy, transpose = TRUE) v <- backsolve(G, g) B <- backsolve(A, sigma * ZWX, transpose = TRUE) K <- chol(XWX - crossprod(B, B)) b <- backsolve(A, sigma * ZWy, transpose = TRUE) c <- backsolve(K, XWy - t(B) %*% b, transpose = TRUE) alpha <- backsolve(K, c) Xa <- X %*% alpha beta <- sigma^2 * v } else { g <- backsolve(A, ZWy, transpose = TRUE) v <- backsolve(A, g) beta <- sigma^2 * v } eta <- c(Xa + Z %*% beta + offset) ## Update working response & weights mu <- family$linkinv(eta) mu.eta.val <- family$mu.eta(eta) residuals <- (fit0$y - mu)/mu.eta.val Y <- eta + residuals - offset wy <- w * mu.eta.val^2/family$variance(mu) wY <- sqrt(wy) * Y wZ <- sqrt(wy) * Z ZWy <- crossprod(wZ, wY) ZWZ <- crossprod(wZ, wZ) if (!empty) { wX <- sqrt(wy) * X XWy <- crossprod(wX, wY) XWX <- crossprod(wX, wX) ZWX <- crossprod(wZ, wX) E <- chol(XWX) J <- backsolve(E, t(ZWX), transpose = TRUE) f <- backsolve(E, XWy, transpose = TRUE) ZSy <- ZWy - crossprod(J, f) ZSZ <- ZWZ - crossprod(J, J) score <- c(crossprod(X, wy * residuals), crossprod(Z, wy * residuals) - v) diagInfo <- c(diag(XWX), diag(ZWZ)) if (all(diagInfo < 1e-20) || all(abs(score) < control$tol * sqrt(control$tol + diagInfo))) { if (sigma.fixed) conv <- TRUE break } } else { score <- crossprod(Z, wy * residuals) - v diagInfo <- diag(ZWZ) if (all(diagInfo < 1e-20) || all(abs(score) < control$tol * sqrt(control$tol + diagInfo))) { if (sigma.fixed) conv <- TRUE break } } } if (!sigma.fixed){ ## Update sigma ## sigma^2 = exp(logtheta) ## One Fisher scoring iteration IZWZD <- ZWZ * sigma^2 diag(IZWZD) <- 1 + diag(IZWZD) A <- chol(IZWZD) if (!empty) { IZSZD <- ZSZ * sigma^2 diag(IZSZD) <- 1 + diag(IZSZD) G <- chol(IZSZD) g <- backsolve(G, ZSy, transpose = TRUE) v <- backsolve(G, g) h <- backsolve(G, ZSZ, transpose = TRUE) H <- backsolve(G, h) } else { g <- backsolve(A, ZWy, transpose = TRUE) v <- backsolve(A, g) h <- backsolve(A, ZWZ, transpose = TRUE) H <- backsolve(A, h) } ## Harville p326 score <- drop(-0.5 * sum(diag(H)) + 0.5 * crossprod(v, v)) * sigma^2 Info <- 0.5 * sum(H^2) * sigma^4 if (control$trace) { ##B & K eq 5 - still not consistently increasing cat("Iteration ", i, ". Score = ", abs(score) , "\n", sep = "") flush.console() } ## check for overall convergence if (Info < 1e-20 || abs(score) < control$tol * sqrt(control$tol + Info)){ conv <- TRUE break } ## Cannot use beta to update t(YXa) %*% Vinv %*% YXa ZWYXa <- crossprod(wZ, sqrt(wy) * (Y - Xa)) optfun <- function(logtheta) { IZWZD <- ZWZ * exp(logtheta) diag(IZWZD) <- 1 + diag(IZWZD) A <- chol(IZWZD) if (!empty) { IZSZD <- ZSZ * exp(logtheta) diag(IZSZD) <- 1 + diag(IZSZD) G <- chol(IZSZD) d <- backsolve(A, sqrt(exp(logtheta)) * ZWYXa, transpose = TRUE) sum(log(diag(G))) - 0.5 * crossprod(d, d) } else { d <- backsolve(A, sqrt(exp(logtheta)) * ZWy, transpose = TRUE) sum(log(diag(A))) - 0.5 * crossprod(d, d) } } optres <- optimize(optfun, c(-10, 10)) if (optfun(-10) < optfun(optres$minimum)) sigma <- 0 else { if (abs(optres$minimum - (logtheta + score/Info)) > 0.1) logtheta <- optres$minimum else logtheta <- logtheta + score/Info sigma <- sqrt(exp(logtheta)) } } else if (conv) break } if (!empty) varFix <- chol2inv(K) else varFix <- matrix(, 0, 0) rownames(varFix) <- colnames(varFix) <- colnames(X) fit0$coef[nm] <- alpha if (!sigma.fixed) varSigma <- sigma^2/(4 * Info) else varSigma <- NA glm <- identical(sigma, 0) if (!empty) { if (rank < p) QR <- qr(cbind(wX, sqrt(w) * X0)) else QR <- qr(wX) R <- qr.R(QR) } list(coefficients = structure(fit0$coef, random = beta), residuals = residuals, fitted.values = mu, #effect = ? R = if (!empty) R, rank = rank, qr = if (!empty) QR, family = family, linear.predictors = eta, deviance = if (glm) sum(family$dev.resids(y, mu, w)), aic = if (glm) family$aic(y, length(y), mu, w, sum(family$dev.resids(y, mu, w))) + 2 * rank, null.deviance = if (glm) { wtdmu <- family$linkinv(offset) sum(family$dev.resids(y, wtdmu, w)) }, iter = ifelse(glm, NA, i), weights = wy, prior.weights = w, df.residual = length(y) - rank, df.null = if (glm) length(y) - sum(w == 0), y = y, sigma = sigma, sigma.fixed = sigma.fixed, varFix = varFix, varSigma = varSigma, converged = conv) } BradleyTerry2/R/Diff.R0000744000176200001440000001371413152515662014155 0ustar liggesusers#' @importFrom stats is.empty.model model.frame model.matrix model.offset na.omit na.pass reformulate relevel terms Diff <- function(player1, player2, formula = NULL, id = "..", data = NULL, separate.ability = NULL, refcat = NULL, contrasts = NULL, subset = NULL) { player.one <- player1[[id]] player.two <- player2[[id]] if (!is.factor(player.one) || !is.factor(player.two) || !identical(levels(player.one), levels(player.two))) stop("'player1$", id, "' and 'player2$", id, "' must be factors with the same levels") if (!identical(attr(player.one, "contrasts"), attr(player.two, "contrasts"))) stop("'player1$", id, "' and 'player2$", id, "' must have the same contrasts attribute") if(is.null(formula)) formula <- reformulate(id) players <- levels(player.one) nplayers <- nlevels(player.one) ncontests <- length(player.one) D <- matrix(nrow = ncontests, ncol = nplayers) D <- col(D) == as.numeric(player.one) D <- D - (col(D) == as.numeric(player.two)) colnames(D) <- paste(id, players, sep = "") fixed <- nobars(formula) X <- offset <- missing <- term.labels <- NULL saturated <- FALSE sep <- list() empty <- is.null(fixed) || is.empty.model(mt <- terms(fixed)) if (!empty) { factors <- attr(mt, "factors") term.labels <- as.character(colnames(factors)) vars <- rownames(factors) indexed <- grep("[[][^],]+[],]", vars) if (length(indexed)) { #set NAs to zero indices <- gsub("[^[]*[[]([^],]+)[],].*", "\\1", vars[indexed]) vars <- gsub("[[][^]]*[]]", "", vars[indexed]) ## assumes no overlap, e.g. no age[..]:judge.gender[judge] grp <- split(vars, indices) for (ind in names(grp)) { vars <- model.frame(terms(reformulate(grp[[ind]])), data = data, na.action = na.pass) lev <- levels(eval(as.name(ind), c(player1, data))) as.sep <- rowSums(is.na(vars)) | lev %in% separate.ability if (any(as.sep)) { sep[[ind]] <- as.sep vars[sep[[ind]], ] <- lapply(vars, function(x) max(levels(x)[1], 0)) colnames(vars) <- gsub(".*[$[],? ?\"?([^]\"]*).*", "\\1", grp[[ind]]) labels <- gsub("([^[$]*)[[$].*", "\\1", grp[[ind]]) for (lab in intersect(labels, grp[[ind]])) data[lab] <- vars[lab] for (lab in setdiff(labels, grp[[ind]])) data[[lab]] <- vars[, labels == lab, drop = FALSE] } } if (length(sep)) { fixed <- reformulate(c(names(sep), attr(mt, "term.labels"), rownames(attr(mt, "factors"))[ attr(mt, "offset")])) mt <- terms(fixed) } } idterm <- id %in% rownames(attr(mt, "factors")) mf1 <- model.frame(mt, data = c(player1, data), na.action = na.pass) if (nrow(mf1) != ncontests) stop("Predictor variables are not of the correct length --", "they probably need indexing in 'formula'.") mf2 <- model.frame(mt, data = c(player2, data), na.action = na.pass) if (idterm){ if (!is.null(refcat)) { mf1[[id]] <- relevel(mf1[[id]], refcat) mf2[[id]] <- relevel(mf2[[id]], refcat) if (!missing(contrasts)) contrasts[[id]] <- "contr.treatment" } else { ## 'else' defined by contrasts arg/contrasts attr of id factor ## leave refcat NULL if (is.null(contrasts) & !is.null(attr(player.one, "contrasts"))){ contrasts <- list() contrasts[[id]] <- attr(player.one, "contrasts") } } } offset <- model.offset(mf1) if (!is.null(offset)) offset <- offset - model.offset(mf2) if (length(sep)){ #create separate effect factor recode <- function(x, keep){ lev <- levels(x) ext <- make.unique(c(lev[keep], "nosep"))[sum(keep) + 1] levels(x)[!keep] <- ext relevel(x, ref = ext) } for (ind in names(grp)) { mf1[ind] <- recode(mf1[[ind]], sep[[ind]]) mf2[ind] <- recode(mf2[[ind]], sep[[ind]]) } } X1 <- model.matrix(fixed, mf1, contrasts = contrasts) X2 <- model.matrix(fixed, mf2, contrasts = contrasts) X <- X1 - X2 ## will need to check for saturation in each set of indexed var ## - however as only allowing (1|..) just consider player id for now saturated <- qr(na.omit(X))$rank == qr(na.omit(cbind(D, X)))$rank && !idterm if (all(X[,1] == 0)) X <- X[, -1, drop = FALSE] attr(X, "assign") <- attr(X1, "assign")[-1] } random <- findbars(formula[[2]]) if (!is.null(random)) { if (!is.list(random)) random <- list(random) if (length(random) > 1 || random[[1]] != parse(text = paste("1|", id, sep = ""))[[1]]) stop("Currently '(1 | ", id, ")' is the only random effects", "structure allowed.") random <- D } else if (!empty && (!idterm & !saturated)) warning("Ability modelled by predictors but no random effects", call. = FALSE) if (length(sep)) { attr(X, "assign") <- attr(X, "assign") - 1 if (!is.null(random)) random <- D[,!sep[[id]], drop = FALSE] } list(X = X, random = random, offset = offset, term.labels = term.labels, refcat = refcat, contrasts = contrasts, saturated = saturated) } BradleyTerry2/R/BTm.R0000744000176200001440000003241013152515662013761 0ustar liggesusers#' Bradley-Terry Model and Extensions #' #' Fits Bradley-Terry models for pair comparison data, including models with #' structured scores, order effect and missing covariate data. Fits by either #' maximum likelihood or maximum penalized likelihood (with Jeffreys-prior #' penalty) when abilities are modelled exactly, or by penalized #' quasi-likelihood when abilities are modelled by covariates. #' #' In each comparison to be modelled there is a 'first player' and a 'second #' player' and it is assumed that one player wins while the other loses (no #' allowance is made for tied comparisons). #' #' The [countsToBinomial()] function is provided to convert a #' contingency table of wins into a data frame of wins and losses for each pair #' of players. #' #' The `formula` argument specifies the model for player ability and #' applies to both the first player and the second player in each contest. If #' `NULL` a separate ability is estimated for each player, equivalent to #' setting `formula = reformulate(id)`. #' #' Contest-level variables can be specified in the formula in the usual manner, #' see [formula()]. Player covariates should be included as variables #' indexed by `id`, see examples. Thus player covariates must be ordered #' according to the levels of the ID factor. #' #' If `formula` includes player covariates and there are players with #' missing values over these covariates, then a separate ability will be #' estimated for those players. #' #' When player abilities are modelled by covariates, then random player effects #' should be added to the model. These should be specified in the formula using #' the vertical bar notation of [lme4::lmer()], see examples. #' #' When specified, it is assumed that random player effects arise from a #' \eqn{N(0, }{N(0, sigma^2)}\eqn{ \sigma^2)}{N(0, sigma^2)} distribution and #' model parameters, including \eqn{\sigma}{sigma}, are estimated using PQL #' (Breslow and Clayton, 1993) as implemented in the [glmmPQL()] #' function. #' #' @param outcome the binomial response: either a numeric vector, a factor in #' which the first level denotes failure and all others success, or a #' two-column matrix with the columns giving the numbers of successes and #' failures. #' @param player1 either an ID factor specifying the first player in each #' contest, or a data.frame containing such a factor and possibly other #' contest-level variables that are specific to the first player. If given in a #' data.frame, the ID factor must have the name given in the `id` #' argument. If a factor is specified it will be used to create such a #' data.frame. #' @param player2 an object corresponding to that given in `player1` for #' the second player in each contest, with identical structure -- in particular #' factors must have identical levels. #' @param formula a formula with no left-hand-side, specifying the model for #' player ability. See details for more information. #' @param id the name of the ID factor. #' @param separate.ability (if `formula` does not include the ID factor as #' a separate term) a character vector giving the names of players whose #' abilities are to be modelled individually rather than using the #' specification given by `formula`. #' @param refcat (if `formula` includes the ID factor as a separate term) #' a character specifying which player to use as a reference, with the first #' level of the ID factor as the default. Overrides any other contrast #' specification for the ID factor. #' @param family a description of the error distribution and link function to #' be used in the model. Only the binomial family is implemented, with #' either`"logit"`, `"probit"` , or `"cauchit"` link. (See #' [stats::family()] for details of family functions.) #' @param data an optional object providing data required by the model. This #' may be a single data frame of contest-level data or a list of data frames. #' Names of data frames are ignored unless they refer to data frames specified #' by `player1` and `player2`. The rows of data frames that do not #' contain contest-level data must correspond to the levels of a factor used #' for indexing, i.e. row 1 corresponds to level 1, etc. Note any rownames are #' ignored. Objects are searched for first in the `data` object if #' provided, then in the environment of `formula`. If `data` is a #' list, the data frames are searched in the order given. #' @param weights an optional numeric vector of \sQuote{prior weights}. #' @param subset an optional logical or numeric vector specifying a subset of #' observations to be used in the fitting process. #' @param na.action a function which indicates what should happen when any #' contest-level variables contain `NA`s. The default is the #' `na.action` setting of `options`. See details for the handling of #' missing values in other variables. #' @param start a vector of starting values for the fixed effects. #' @param etastart a vector of starting values for the linear predictor. #' @param mustart a vector of starting values for the vector of means. #' @param offset an optional offset term in the model. A vector of length equal #' to the number of contests. #' @param br logical. If `TRUE` fitting will be by penalized maximum #' likelihood as in Firth (1992, 1993), using [brglm::brglm()], #' rather than maximum likelihood using [glm()], when abilities are #' modelled exactly or when the abilities are modelled by covariates and the #' variance of the random effects is estimated as zero. #' @param model logical: whether or not to return the model frame. #' @param x logical: whether or not to return the design matrix for the fixed #' effects. #' @param contrasts an optional list specifying contrasts for the factors in #' `formula`. See the `contrasts.arg` of [model.matrix()]. #' @param \dots other arguments for fitting function (currently either #' [glm()], [brglm::brglm()], or [glmmPQL()]) #' @return An object of class `c("BTm", "x")`, where `"x"` is the #' class of object returned by the model fitting function (e.g. `glm`). #' Components are as for objects of class `"x"`, with additionally #' \item{id}{the `id` argument.} \item{separate.ability}{the #' `separate.ability` argument.} \item{refcat}{the `refcat` #' argument.} \item{player1}{a data frame for the first player containing the #' ID factor and any player-specific contest-level variables.} \item{player2}{a #' data frame corresponding to that for `player1`.} \item{assign}{a #' numeric vector indicating which coefficients correspond to which terms in #' the model.} \item{term.labels}{labels for the model terms.} #' \item{random}{for models with random effects, the design matrix for the #' random effects. } #' @author Heather Turner, David Firth #' @seealso [countsToBinomial()], [glmmPQL()], #' [BTabilities()], [residuals.BTm()], #' [add1.BTm()], [anova.BTm()] #' @references #' #' Agresti, A. (2002) *Categorical Data Analysis* (2nd ed). New York: #' Wiley. #' #' Firth, D. (1992) Bias reduction, the Jeffreys prior and GLIM. In #' *Advances in GLIM and Statistical Modelling*, Eds. Fahrmeir, L., #' Francis, B. J., Gilchrist, R. and Tutz, G., pp91--100. New York: Springer. #' #' Firth, D. (1993) Bias reduction of maximum likelihood estimates. #' *Biometrika* **80**, 27--38. #' #' Firth, D. (2005) Bradley-Terry models in R. *Journal of Statistical #' Software*, **12**(1), 1--12. #' #' Stigler, S. (1994) Citation patterns in the journals of statistics and #' probability. *Statistical Science* **9**, 94--108. #' #' Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 #' package. *Journal of Statistical Software*, **48**(9), 1--21. #' @keywords models #' @examples #' #' ######################################################## #' ## Statistics journal citation data from Stigler (1994) #' ## -- see also Agresti (2002, p448) #' ######################################################## #' #' ## Convert frequencies to success/failure data #' citations.sf <- countsToBinomial(citations) #' names(citations.sf)[1:2] <- c("journal1", "journal2") #' #' ## First fit the "standard" Bradley-Terry model #' citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) #' #' ## Now the same thing with a different "reference" journal #' citeModel2 <- update(citeModel, refcat = "JASA") #' BTabilities(citeModel2) #' #' ################################################################## #' ## Now an example with an order effect -- see Agresti (2002) p438 #' ################################################################## #' data(baseball) # start with baseball data as provided by package #' #' ## Simple Bradley-Terry model, ignoring home advantage: #' baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, #' data = baseball, id = "team") #' #' ## Now incorporate the "home advantage" effect #' baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) #' baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) #' baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) #' #' ## Compare the fit of these two models: #' anova(baseballModel1, baseballModel2) #' #' ## #' ## For a more elaborate example with both player-level and contest-level #' ## predictor variables, see help(chameleons). #' ## #' #' @importFrom brglm brglm #' @export BTm <- function(outcome = 1, player1, player2, formula = NULL, id = "..", separate.ability = NULL, refcat = NULL, family = "binomial", data = NULL, weights = NULL, subset = NULL, na.action = NULL, start = NULL, etastart = NULL, mustart = NULL, offset = NULL, br = FALSE, model = TRUE, x = FALSE, contrasts = NULL, ...){ call <- match.call() if (is.character(family)) family <- get(family, mode = "function", envir = parent.frame()) if (is.function(family)) family <- family() if (is.null(family$family)) { print(family) stop("`family' not recognized") } if (family$family != "binomial") stop("`family' must be binomial") if (!family$link %in% c("logit", "probit", "cauchit")) stop("link for binomial family must be one of \"logit\", \"probit\"", "or \"cauchit\"") fcall <- as.list(match.call(expand.dots = FALSE)) setup <- match(c("outcome", "player1", "player2", "formula", "id", "separate.ability", "refcat", "data", "weights", "subset", "offset", "contrasts"), names(fcall), 0L) if (is.null(formula)) env <- parent.frame() else env <- environment(formula) setup <- do.call(BTm.setup, fcall[setup], envir = env) if (setup$saturated) warning("Player ability saturated - equivalent to fitting ", "separate abilities.") mf <- data.frame(X = setup$player1) #just to get length if (!is.null(setup$X)) { mf$X <- setup$X formula <- Y ~ X - 1 } else formula <- Y ~ 0 mf$Y <- setup$Y argPos <- match(c("na.action", "start", "etastart", "mustart", "control", "model", "x"), names(fcall), 0) dotArgs <- fcall$"..." if (is.null(setup$random)) { method <- get(ifelse(br, "brglm", "glm"), mode = "function") fit <- as.call(c(method, fcall[argPos], list(formula = formula, family = family, data = mf, offset = setup$offset, subset = setup$subset, weights = setup$weights), dotArgs)) fit <- eval(fit, parent.frame()) } else { method <- get("glmmPQL", mode = "function") fit <- as.call(c(method, fcall[argPos], list(formula, setup$random, family = family, data = mf, offset = setup$offset, subset = setup$subset, weights = setup$weights), dotArgs)) fit <- eval(fit, parent.frame()) if (br) { if (identical(fit$sigma, 0)){ argPos <- match(c("na.action", "model", "x"), names(fcall), 0) method <- get("brglm", mode = "function") fit <- as.call(c(method, fcall[argPos], list(formula, family = family, data = mf, offset = setup$offset, subset = setup$subset, weights = setup$weights, etastart = fit$linear.predictors))) fit <- eval(fit, parent.frame()) fit$class <- c("glmmPQL", class(fit)) } else warning("'br' argument ignored for models with random effects", call. = FALSE) } } if (length(fit$coefficients)) { if (ncol(setup$X) > 1) names(fit$coefficients) <- substring(names(fit$coefficients), 2) else names(fit$coefficients) <- colnames(setup$X) fit$assign <- attr(setup$X, "assign") } fit$call <- call fit$id <- id fit$separate.ability <- separate.ability fit$contrasts <- setup$contrasts fit$refcat <- setup$refcat fit$formula <- setup$formula fit$player1 <- setup$player1 fit$player2 <- setup$player2 fit$term.labels <- setup$term.labels fit$data <- setup$data fit$random <- setup$random class(fit) <- c("BTm", class(fit)) fit } BradleyTerry2/R/print.BTglmmPQL.R0000744000176200001440000000124213152515662016170 0ustar liggesusers#' @importFrom stats coef naprint #' @export print.BTglmmPQL <- function (x, digits = max(3, getOption("digits") - 3), ...) { if (identical(x$sigma, 0)){ cat("PQL algorithm converged to fixed effects model\n") return(NextMethod()) } cat("\nCall: ", deparse(x$call), "\n", sep = "", fill = TRUE) if (length(coef(x))) { cat("Fixed effects:\n\n") print.default(format(x$coefficients, digits = digits), print.gap = 2, quote = FALSE) } else cat("No fixed effects\n\n") cat("\nRandom Effects Std. Dev.:", x$sigma, "\n") if (nzchar(mess <- naprint(x$na.action))) cat("\n", mess, "\n", sep = "") } BradleyTerry2/R/anova.BTmlist.R0000744000176200001440000000651313152515662015765 0ustar liggesusers#' @importFrom stats coef fitted formula na.omit pchisq pf terms vcov anova.BTmlist <- function (object, ..., dispersion = NULL, test = NULL) { ## Pass on if no random effects fixed <- unlist(lapply(object, function(x) is.null(x$random))) if (!all(!fixed)) stop("Models must have the same random effects structure") responses <- as.character(lapply(object, function(x) { deparse(formula(terms(x))[[2]]) })) sameresp <- responses == responses[1] if (!all(sameresp)) { object <- object[sameresp] warning("models with response ", deparse(responses[!sameresp]), " removed because response differs from model 1") } ns <- vapply(object, function(x) length(fitted(x)), numeric(1)) if (any(ns != ns[1])) stop("models were not all fitted to the same size of dataset") nmodels <- length(object) ncoefs <- vapply(object, function(x) length(na.omit(coef(x))), numeric(1)) #omit aliased labels <- lapply(object, function(x) x$term.labels) stat <- numeric(nmodels) for (i in 2:nmodels) { descending <- ncoefs[i] < ncoefs[i - 1] bigger <- i - descending smaller <- i - !descending if (!all(labels[[smaller]] %in% labels[[bigger]])) stop("models are not nested") ind <- !(labels[[bigger]] %in% labels[[smaller]]) stat[i] <- t(coef(object[[bigger]])[ind]) %*% chol2inv(chol(vcov(object[[bigger]], dispersion = dispersion)[ind, ind])) %*% coef(object[[bigger]])[ind] #vcov should handle dispersion != 1 } stat[1] <- NA table <- data.frame(stat, c(NA, diff(ncoefs))) variables <- lapply(object, function(x) paste(deparse(formula(x)), collapse = "\n")) dimnames(table) <- list(1:nmodels, c("Statistic", "Df")) title <- paste("Sequential Wald Tests\n\n", "Response: ", responses[1], "\n", sep = "") topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") if (!is.null(test)) { ## Assume dispersion fixed at one - if dispersion estimated, would use ## "residual" df from larger model in each comparison df.dispersion <- Inf if (test == "F" && df.dispersion == Inf) { fam <- object[[1]]$family$family if (fam == "binomial" || fam == "poisson") warning(gettextf( "using F test with a '%s' family is inappropriate", fam), domain = NA, call. = FALSE) else { warning("using F test with a fixed dispersion is inappropriate") } } table <- switch(test, Chisq = { dfs <- table[, "Df"] vals <- table[, "Statistic"] vals[dfs %in% 0] <- NA cbind(table, `P(>|Chi|)` = pchisq(vals, abs(dfs), lower.tail = FALSE)) }, F = { dfs <- table[, "Df"] Fvalue <- table[, "Statistic"]/abs(dfs) Fvalue[dfs %in% 0] <- NA cbind(table, F = Fvalue, `Pr(>F)` = pf(Fvalue, abs(dfs), df.dispersion, lower.tail = FALSE)) }) } structure(table, heading = c(title, topnote), class = c("anova", "data.frame")) } BradleyTerry2/R/glmmPQL.R0000744000176200001440000002504513152515662014616 0ustar liggesusers#' PQL Estimation of Generalized Linear Mixed Models #' #' Fits GLMMs with simple random effects structure via Breslow and Clayton's #' PQL algorithm. #' The GLMM is assumed to be of the form \ifelse{html}{\out{g(μ) = #' + Ze}}{\deqn{g(\boldsymbol{\mu}) = \boldsymbol{X\beta} #' + \boldsymbol{Ze}}{ g(mu) = X * beta + Z * e}} where \eqn{g} is the link #' function, \ifelse{html}{\out{μ}}{\eqn{\boldsymbol{\mu}}{mu}} is the #' vector of means and \ifelse{html}{\out{X, Z}}{\eqn{\boldsymbol{X}, #' \boldsymbol{Z}}{X,Z}} are design matrices for the fixed effects #' \ifelse{html}{\out{β}}{\eqn{\boldsymbol{\beta}}{beta}} and random #' effects \ifelse{html}{\out{e}}{\eqn{\boldsymbol{e}}{e}} respectively. #' Furthermore the random effects are assumed to be i.i.d. #' \ifelse{html}{\out{N(0, σ2)}}{\eqn{N(0, \sigma^2)}{ #' N(0, sigma^2)}}. #' #' @param fixed a formula for the fixed effects. #' @param random a design matrix for the random effects, with number of rows #' equal to the length of variables in `formula`. #' @param family a description of the error distribution and link function to #' be used in the model. This can be a character string naming a family #' function, a family function or the result of a call to a family function. #' (See [family()] for details of family functions.) #' @param data an optional data frame, list or environment (or object coercible #' by [as.data.frame()] to a data frame) containing the variables in #' the model. If not found in `data`, the variables are taken from #' `environment(formula)`, typically the environment from which #' `glmmPQL` called. #' @param subset an optional logical or numeric vector specifying a subset of #' observations to be used in the fitting process. #' @param weights an optional vector of \sQuote{prior weights} to be used in #' the fitting process. #' @param offset an optional numeric vector to be added to the linear predictor #' during fitting. One or more `offset` terms can be included in the #' formula instead or as well, and if more than one is specified their sum is #' used. See [model.offset()]. #' @param na.action a function which indicates what should happen when the data #' contain `NA`s. The default is set by the `na.action` setting of #' [options()], and is [na.fail()] if that is unset. #' @param start starting values for the parameters in the linear predictor. #' @param etastart starting values for the linear predictor. #' @param mustart starting values for the vector of means. #' @param control a list of parameters for controlling the fitting process. #' See the [glmmPQL.control()] for details. #' @param sigma a starting value for the standard deviation of the random #' effects. #' @param sigma.fixed logical: whether or not the standard deviation of the #' random effects should be fixed at its starting value. #' @param model logical: whether or not the model frame should be returned. #' @param x logical: whether or not the design matrix for the fixed effects #' should be returned. #' @param contrasts an optional list. See the `contrasts.arg` argument of #' [model.matrix()]. #' @param \dots arguments to be passed to [glmmPQL.control()]. #' @return An object of class `"BTglmmPQL"` which inherits from #' `"glm"` and `"lm"`: \item{coefficients}{ a named vector of #' coefficients, with a `"random"` attribute giving the estimated random #' effects.} \item{residuals}{ the working residuals from the final iteration #' of the IWLS loop.} \item{random}{the design matrix for the random effects.} #' \item{fitted.values}{ the fitted mean values, obtained by transforming the #' linear predictors by the inverse of the link function.} \item{rank}{the #' numeric rank of the fitted linear model.} \item{family}{the `family` #' object used.} \item{linear.predictors}{the linear fit on link scale.} #' \item{deviance}{up to a constant, minus twice the maximized log-likelihood.} #' \item{aic}{a version of Akaike's *An Information Criterion*, minus #' twice the maximized log-likelihood plus twice the number of parameters, #' computed by the `aic` component of the family.} #' \item{null.deviance}{the deviance for the null model, comparable with #' `deviance`.} \item{iter}{the numer of iterations of the PQL algorithm.} #' \item{weights}{the working weights, that is the weights in the final #' iteration of the IWLS loop.} \item{prior.weights}{the weights initially #' supplied, a vector of `1`'s if none were.} \item{df.residual}{the #' residual degrees of freedom.} \item{df.null}{the residual degrees of freedom #' for the null model.} \item{y}{if requested (the default) the `y` vector #' used. (It is a vector even for a binomial model.)} \item{x}{if requested, #' the model matrix.} \item{model}{if requested (the default), the model #' frame.} \item{converged}{logical. Was the PQL algorithm judged to have #' converged?} \item{call}{the matched call.} \item{formula}{the formula #' supplied.} \item{terms}{the `terms` object used.} \item{data}{the #' `data` argument used.} \item{offset}{the offset vector used.} #' \item{control}{the value of the `control` argument used.} #' \item{contrasts}{(where relevant) the contrasts used.} \item{xlevels}{(where #' relevant) a record of the levels of the factors used in fitting.} #' \item{na.action}{(where relevant) information returned by `model.frame` #' on the special handling of `NA`s.} \item{sigma}{the estimated standard #' deviation of the random effects} \item{sigma.fixed}{logical: whether or not #' `sigma` was fixed} \item{varFix}{the variance-covariance matrix of the #' fixed effects} \item{varSigma}{the variance of `sigma`} #' @author Heather Turner #' @seealso #' [predict.BTglmmPQL()],[glmmPQL.control()],[BTm()] #' @references Breslow, N. E. and Clayton, D. G. (1993) Approximate inference #' in Generalized Linear Mixed Models. *Journal of the American #' Statistical Association* **88**(421), 9--25. #' #' Harville, D. A. (1977) Maximum likelihood approaches to variance component #' estimation and to related problems. *Journal of the American #' Statistical Association* **72**(358), 320--338. #' @keywords models #' @examples #' #' ############################################### #' ## Crowder seeds example from Breslow & Clayton #' ############################################### #' #' summary(glmmPQL(cbind(r, n - r) ~ seed + extract, #' random = diag(nrow(seeds)), #' family = "binomial", data = seeds)) #' #' summary(glmmPQL(cbind(r, n - r) ~ seed*extract, #' random = diag(nrow(seeds)), #' family = "binomial", data = seeds)) #' #' @importFrom stats gaussian .getXlevels glm.control is.empty.model glm.control glm.fit model.frame model.matrix model.offset model.response model.weights optimize terms #' @export glmmPQL <- function(fixed, random = NULL, family = "binomial", data = NULL, subset = NULL, weights = NULL, offset = NULL, na.action = NULL, start = NULL, etastart = NULL, mustart = NULL, control = glmmPQL.control(...), sigma = 0.1, sigma.fixed = FALSE, model = TRUE, x = FALSE, contrasts = NULL, ...) { call <- match.call() nm <- names(call)[-1] if (is.null(random)) { keep <- is.element(nm, c("family", "data", "subset", "weights", "offset", "na.action")) for (i in nm[!keep]) call[[i]] <- NULL call$formula <- fixed environment(call$formula) <- environment(fixed) call[[1]] <- as.name("glm") return(eval.parent(call)) } modelTerms <- terms(fixed, data = data) modelCall <- as.list(match.call(expand.dots = FALSE)) argPos <- match(c("data", "subset", "na.action", "weights", "offset"), names(modelCall), 0) modelData <- as.call(c(model.frame, list(formula = modelTerms, drop.unused.levels = TRUE), modelCall[argPos])) modelData <- eval(modelData, parent.frame()) if (!is.matrix(random) || nrow(random) != nrow(modelData)) { stop("`random` should be a matrix object, with ", nrow(modelData), " rows.") } if (!is.null(modelCall$subset)) Z <- random[eval(modelCall$subset, data, parent.frame()),] else Z <- random if (!is.null(attr(modelData, "na.action"))) Z <- Z[-attr(modelData, "na.action"),] nObs <- nrow(modelData) y <- model.response(modelData, "numeric") if (is.null(y)) y <- rep(0, nObs) weights <- as.vector(model.weights(modelData)) if (!is.null(weights) && any(weights < 0)) stop("negative weights are not allowed") if (is.null(weights)) weights <- rep.int(1, nObs) offset <- as.vector(model.offset(modelData)) if (is.null(offset)) offset <- rep.int(0, nObs) if (is.character(family)) family <- get(family, mode = "function", envir = parent.frame()) if (is.function(family)) family <- family() if (is.null(family$family)) { print(family) stop("`family' not recognized") } if (family$family == "binomial") { if (is.factor(y) && NCOL(y) == 1) y <- y != levels(y)[1] else if (NCOL(y) == 2) { n <- y[, 1] + y[, 2] y <- ifelse(n == 0, 0, y[, 1]/n) weights <- weights * n } } ## Use GLM to estimate fixed effects empty <- is.empty.model(modelTerms) if (!empty) X <- model.matrix(fixed, data = modelData, contrasts) else X <- matrix(, nObs, 0) fit <- glmmPQL.fit(X = X, y = y, Z = Z, weights = weights, start = start, etastart = etastart, mustart = mustart, offset = offset, family = family, control = control, sigma = sigma, sigma.fixed = sigma.fixed, ...) if (sum(offset) && attr(modelTerms, "intercept") > 0) { fit$null.deviance <- glm.fit(x = X[, "(Intercept)", drop = FALSE], y = y, weights = weights, offset = offset, family = family, control = glm.control(), intercept = TRUE)$deviance } if (model) fit$model <- modelData fit$na.action <- attr(modelData, "na.action") if (x) fit$x <- X fit <- c(fit, list(call = call, formula = fixed, random = random, terms = modelTerms, data = data, offset = offset, control = control, method = "glmmPQL.fit", contrasts = attr(X, "contrasts"), xlevels = .getXlevels(modelTerms, modelData))) class(fit) <- c("BTglmmPQL", "glm", "lm") fit } BradleyTerry2/R/summary.BTglmmPQL.R0000744000176200001440000000304213152515662016531 0ustar liggesusers#' @importFrom stats coef pnorm #' @export summary.BTglmmPQL <- function(object, dispersion = NULL, correlation = FALSE, symbolic.cor = FALSE, ...) { if (identical(object$sigma, 0)){ cat("PQL algorithm converged to fixed effects model\n") return(NextMethod("summary")) } aliased <- is.na(coef(object)) coefs <- coef(object)[!aliased] cov.scaled <- cov.unscaled <- object$varFix # when dispersion != 1? dn <- c("Estimate", "Std. Error", "z value", "Pr(>|z|)") if (object$rank > 0) { sterr <- sqrt(diag(cov.scaled)) tvalue <- coefs/sterr pvalue <- 2 * pnorm(-abs(tvalue)) fixef.table <- cbind(coefs, sterr, tvalue, pvalue) dimnames(fixef.table) <- list(names(coefs), dn) } else { fixef.table <- matrix(, 0, 4) dimnames(fixef.table) <- list(NULL, dn) } sterr <- sqrt(object$varSigma) tvalue <- object$sigma/sterr pvalue <- 2 * pnorm(-abs(tvalue)) ranef.table <- cbind(object$sigma, sterr, tvalue, pvalue) dimnames(ranef.table) <- list("Std. Dev.", dn) ans <- c(object[c("call", "family", "iter", "rank", "na.action")], list(fixef = fixef.table, ranef = ranef.table, aliased = aliased, dispersion = 1, cov.unscaled = cov.unscaled)) if (correlation & object$rank > 0) { dd <- sqrt(diag(cov.unscaled)) ans$correlation <- cov.unscaled/outer(dd, dd) ans$symbolic.cor <- symbolic.cor } class(ans) <- "summary.BTglmmPQL" ans } BradleyTerry2/R/flatlizards.R0000744000176200001440000001511413152515662015620 0ustar liggesusers#' Augrabies Male Flat Lizards: Contest Results and Predictor Variables #' #' Data collected at Augrabies Falls National Park (South Africa) in #' September-October 2002, on the contest performance and background attributes #' of 77 male flat lizards (*Platysaurus broadleyi*). The results of #' exactly 100 contests were recorded, along with various measurements made on #' each lizard. Full details of the study are in Whiting et al. (2006). #' #' There were no duplicate contests (no pair of lizards was seen fighting more #' than once), and there were no tied contests (the result of each contest was #' clear). #' #' The variables `head.length`, `head.width`, `head.height` and #' `condition` were all computed as residuals (of directly measured head #' length, head width, head height and body mass index, respectively) from #' simple least-squares regressions on `SVL`. #' #' Values of some predictors are missing (`NA`) for some lizards, #' \sQuote{at random}, because of instrument problems unconnected with the #' value of the measurement being made. #' #' @name flatlizards #' @docType data #' @format This dataset is a list containing two data frames: #' `flatlizards$contests` and `flatlizards$predictors`. #' #' The `flatlizards$contests` data frame has 100 observations on the #' following 2 variables: \describe{ #' \item{winner}{a factor with 77 #' levels `lizard003` ... `lizard189`.} #' \item{loser}{a factor #' with the same 77 levels `lizard003` ... `lizard189`.} } #' #' The `flatlizards$predictors` data frame has 77 observations (one for #' each of the 77 lizards) on the following 18 variables: \describe{ #' \item{id}{factor with 77 levels (3 5 6 ... 189), the lizard #' identifiers.} #' \item{throat.PC1}{numeric, the first principal #' component of the throat spectrum.} #' \item{throat.PC2}{numeric, the #' second principal component of the throat spectrum.} #' \item{throat.PC3}{numeric, the third principal component of the #' throat spectrum.} #' \item{frontleg.PC1}{numeric, the first principal #' component of the front-leg spectrum.} #' \item{frontleg.PC2}{numeric, #' the second principal component of the front-leg spectrum.} #' \item{frontleg.PC3}{numeric, the third principal component of the #' front-leg spectrum.} #' \item{badge.PC1}{numeric, the first principal #' component of the ventral colour patch spectrum.} #' \item{badge.PC2}{numeric, the second principal component of the #' ventral colour patch spectrum.} #' \item{badge.PC3}{numeric, the third #' principal component of the ventral colour patch spectrum.} #' \item{badge.size}{numeric, a measure of the area of the ventral #' colour patch.} #' \item{testosterone}{numeric, a measure of blood #' testosterone concentration.} #' \item{SVL}{numeric, the snout-vent #' length of the lizard.} #' \item{head.length}{numeric, head length.} #' \item{head.width}{numeric, head width.} #' \item{head.height}{numeric, head height.} #' \item{condition}{numeric, a measure of body condition.} #' \item{repro.tactic}{a factor indicating reproductive tactic; levels #' are `resident` and `floater`.} } #' @seealso [BTm()] #' @references Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The #' BradleyTerry2 package. *Journal of Statistical Software*, #' **48**(9), 1--21. #' #' Whiting, M. J., Stuart-Fox, D. M., O'Connor, D., Firth, D., Bennett, N. C. #' and Blomberg, S. P. (2006). Ultraviolet signals ultra-aggression in a #' lizard. *Animal Behaviour* **72**, 353--363. #' @source The data were collected by Dr Martin Whiting, #' http://whitinglab.com/?page_id=3380, and they appear here with his #' kind permission. #' @keywords datasets #' @examples #' #' ## #' ## Fit the standard Bradley-Terry model, using the bias-reduced #' ## maximum likelihood method: #' ## #' result <- rep(1, nrow(flatlizards$contests)) #' BTmodel <- BTm(result, winner, loser, br = TRUE, data = flatlizards$contests) #' summary(BTmodel) #' ## #' ## That's fairly useless, though, because of the rather small #' ## amount of data on each lizard. And really the scientific #' ## interest is not in the abilities of these particular 77 #' ## lizards, but in the relationship between ability and the #' ## measured predictor variables. #' ## #' ## So next fit (by maximum likelihood) a "structured" B-T model in #' ## which abilities are determined by a linear predictor. #' ## #' ## This reproduces results reported in Table 1 of Whiting et al. (2006): #' ## #' Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + SVL[..], #' data = flatlizards) #' summary(Whiting.model) #' ## #' ## Equivalently, fit the same model using glmmPQL: #' ## #' Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + SVL[..] + (1|..), sigma = 0, #' sigma.fixed = TRUE, data = flatlizards) #' summary(Whiting.model) #' ## #' ## But that analysis assumes that the linear predictor formula for #' ## abilities is _perfect_, i.e., that there is no error in the linear #' ## predictor. This will always be unrealistic. #' ## #' ## So now fit the same predictor but with a normally distributed error #' ## term --- a generalized linear mixed model --- by using the BTm #' ## function instead of glm. #' ## #' Whiting.model2 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + SVL[..] + (1|..), #' data = flatlizards, trace = TRUE) #' summary(Whiting.model2) #' ## #' ## The estimated coefficients (of throat.PC1, throat.PC3, #' ## head.length and SVL are not changed substantially by #' ## the recognition of an error term in the model; but the estimated #' ## standard errors are larger, as expected. The main conclusions from #' ## Whiting et al. (2006) are unaffected. #' ## #' ## With the normally distributed random error included, it is perhaps #' ## at least as natural to use probit rather than logit as the link #' ## function: #' ## #' require(stats) #' Whiting.model3 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + #' head.length[..] + SVL[..] + (1|..), #' family = binomial(link = "probit"), #' data = flatlizards, trace = TRUE) #' summary(Whiting.model3) #' BTabilities(Whiting.model3) #' ## Note the "separate" attribute here, identifying two lizards with #' ## missing values of at least one predictor variable #' ## #' ## Modulo the usual scale change between logit and probit, the results #' ## are (as expected) very similar to Whiting.model2. #' "flatlizards" BradleyTerry2/R/residuals.BTm.R0000744000176200001440000000632713152515662015763 0ustar liggesusers#' Residuals from a Bradley-Terry Model #' #' Computes residuals from a model object of class `"BTm"`. In additional #' to the usual options for objects inheriting from class `"glm"`, a #' `"grouped"` option is implemented to compute player-specific residuals #' suitable for diagnostic checking of a predictor involving player-level #' covariates. #' #' For `type` other than `"grouped"` see [residuals.glm()]. #' #' For `type = "grouped"` the residuals returned are weighted means of #' working residuals, with weights equal to the binomial denominators in the #' fitted model. These are suitable for diagnostic model checking, for example #' plotting against candidate predictors. #' #' @param object a model object for which `inherits(model, "BTm")` is #' `TRUE`. #' @param type the type of residuals which should be returned. The #' alternatives are: `"deviance"` (default), `"pearson"`, #' `"working"`, `"response"`, and `"partial"`. #' @param by the grouping factor to use when `type = "grouped"`. #' @param ... arguments to pass on other methods. #' @return A numeric vector of length equal to the number of players, with a #' `"weights"` attribute. #' @author David Firth and Heather Turner #' @seealso [BTm()], [BTabilities()] #' @references Firth, D. (2005) Bradley-Terry models in R. *Journal of #' Statistical Software* **12**(1), 1--12. #' #' Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 #' package. *Journal of Statistical Software*, **48**(9), 1--21. #' @keywords models #' @examples #' #' ## #' ## See ?springall #' ## #' springall.model <- BTm(cbind(win.adj, loss.adj), #' col, row, #' ~ flav[..] + gel[..] + #' flav.2[..] + gel.2[..] + flav.gel[..] + (1 | ..), #' data = springall) #' res <- residuals(springall.model, type = "grouped") #' with(springall$predictors, plot(flav, res)) #' with(springall$predictors, plot(gel, res)) #' ## Weighted least-squares regression of these residuals on any variable #' ## already included in the model yields slope coefficient zero: #' lm(res ~ flav, weights = attr(res, "weights"), #' data = springall$predictors) #' lm(res ~ gel, weights = attr(res, "weights"), #' data = springall$predictors) #' #' @importFrom stats as.formula model.frame model.matrix terms #' @export residuals.BTm <- function(object, type = c("deviance", "pearson", "working", "response", "partial", "grouped"), by = object$id, ...) { type <- match.arg(type) if (type != "grouped") return(NextMethod()) ## for glm, lm would just be ## X <- model.matrix(formula, data = object$data) formula <- as.formula(paste("~", by, "- 1")) mt <- terms(formula) mf1 <- model.frame(mt, data = c(object$player1, object$data)) X1 <- model.matrix(mt, data = mf1) mf2 <- model.frame(mt, data = c(object$player2, object$data)) X2 <- model.matrix(mt, data = mf2) X <- X1 - X2 r <- object$residuals ## the "working" residuals w <- object$weights total.resid <- crossprod(X, r * w) total.weight <- crossprod(abs(X), w) result <- total.resid / total.weight attr(result, "weights") <- total.weight result } BradleyTerry2/vignettes/0000755000176200001440000000000013160423701014751 5ustar liggesusersBradleyTerry2/vignettes/baseball-qvplot.pdf0000744000176200001440000000766613152515662020565 0ustar liggesusers%PDF-1.4 %ρ\r 1 0 obj << /CreationDate (D:20100223094452) /ModDate (D:20100223094452) /Title (R Graphics Output) /Producer (R 2.10.1) /Creator (R) >> endobj 2 0 obj << /Type /Catalog /Pages 3 0 R >> endobj 5 0 obj << /Type /Page /Parent 3 0 R /Contents 6 0 R /Resources 4 0 R >> endobj 6 0 obj << /Length 7 0 R >> stream q Q q 59.04 73.44 414.72 371.52 re W n Q q 0.000 0.000 0.000 RG 0.75 w [] 0 d 1 J 1 j 10.00 M 101.83 73.44 m 430.97 73.44 l S 101.83 73.44 m 101.83 66.24 l S 156.69 73.44 m 156.69 66.24 l S 211.54 73.44 m 211.54 66.24 l S 266.40 73.44 m 266.40 66.24 l S 321.26 73.44 m 321.26 66.24 l S 376.11 73.44 m 376.11 66.24 l S 430.97 73.44 m 430.97 66.24 l S BT 0.000 0.000 0.000 rg /F2 1 Tf 12.00 0.00 -0.00 12.00 93.16 47.52 Tm (Bal) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 146.35 47.52 Tm (Bos) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 202.54 47.52 Tm (Cle) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 257.06 47.52 Tm (Det) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 313.60 47.52 Tm (Mil) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 367.78 47.52 Tm (NY) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 422.69 47.52 Tm [(T) 120 (or)] TJ ET 59.04 117.93 m 59.04 394.65 l S 59.04 117.93 m 51.84 117.93 l S 59.04 173.28 m 51.84 173.28 l S 59.04 228.62 m 51.84 228.62 l S 59.04 283.97 m 51.84 283.97 l S 59.04 339.31 m 51.84 339.31 l S 59.04 394.65 m 51.84 394.65 l S BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 106.09 Tm (-0.5) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 164.94 Tm (0.0) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 220.28 Tm (0.5) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 275.63 Tm (1.0) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 330.97 Tm (1.5) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 386.31 Tm (2.0) Tj ET Q q BT 0.000 0.000 0.000 rg /F3 1 Tf 14.00 0.00 -0.00 14.00 131.47 469.45 Tm [(Inter) -10 (v) 20 (als based on quasi standar) 20 (d err) 20 (or) 15 (s)] TJ ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 253.06 18.72 Tm (team) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 12.96 236.53 Tm (estimate) Tj ET Q q 0.000 0.000 0.000 RG 0.75 w [] 0 d 1 J 1 j 10.00 M 59.04 73.44 m 473.76 73.44 l 473.76 444.96 l 59.04 444.96 l 59.04 73.44 l S Q q 59.04 73.44 414.72 371.52 re W n 0.000 0.000 0.000 RG 0.75 w [] 0 d 1 J 1 j 10.00 M BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 98.87 170.68 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 153.72 297.29 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 208.58 248.68 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 263.44 333.99 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 318.30 349.95 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 373.15 312.51 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 428.01 317.58 Tm (l) Tj 0 Tr ET 101.83 115.87 m 101.83 230.69 l S 156.69 251.63 m 156.69 348.14 l S 211.54 201.81 m 211.54 300.75 l S 266.40 287.34 m 266.40 385.82 l S 321.26 302.55 m 321.26 402.53 l S 376.11 266.63 m 376.11 363.58 l S 430.97 271.74 m 430.97 368.61 l S Q endstream endobj 7 0 obj 2612 endobj 3 0 obj << /Type /Pages /Kids [ 5 0 R ] /Count 1 /MediaBox [0 0 504 504] >> endobj 4 0 obj << /ProcSet [/PDF /Text] /Font << /F1 9 0 R /F2 10 0 R /F3 11 0 R >> /ExtGState << >> >> endobj 8 0 obj << /Type /Encoding /BaseEncoding /WinAnsiEncoding /Differences [ 45/minus 96/quoteleft 144/dotlessi /grave /acute /circumflex /tilde /macron /breve /dotaccent /dieresis /.notdef /ring /cedilla /.notdef /hungarumlaut /ogonek /caron /space] >> endobj 9 0 obj << /Type /Font /Subtype /Type1 /Name /F1 /BaseFont /ZapfDingbats >> endobj 10 0 obj << /Type /Font /Subtype /Type1 /Name /F2 /BaseFont /Helvetica /Encoding 8 0 R >> endobj 11 0 obj << /Type /Font /Subtype /Type1 /Name /F3 /BaseFont /Helvetica-Bold /Encoding 8 0 R >> endobj xref 0 12 0000000000 65535 f 0000000021 00000 n 0000000164 00000 n 0000002978 00000 n 0000003061 00000 n 0000000213 00000 n 0000000293 00000 n 0000002958 00000 n 0000003165 00000 n 0000003422 00000 n 0000003505 00000 n 0000003602 00000 n trailer << /Size 12 /Info 1 0 R /Root 2 0 R >> startxref 3704 %%EOF BradleyTerry2/vignettes/residuals.pdf0000744000176200001440000004163213152515662017457 0ustar liggesusers%PDF-1.4 %ρ\r 1 0 obj << /CreationDate (D:20120416000729) /ModDate (D:20120416000729) /Title (R Graphics Output) /Producer (R 2.13.0) /Creator (R) >> endobj 2 0 obj << /Type /Catalog /Pages 3 0 R >> endobj 7 0 obj << /Type /Page /Parent 3 0 R /Contents 8 0 R /Resources 4 0 R >> endobj 8 0 obj << /Length 9 0 R >> stream 1 J 1 j q Q q 59.04 73.44 414.72 371.52 re W n /sRGB CS 0.000 0.000 0.000 SCN 0.75 w [] 0 d 1 J 1 j 10.00 M BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 373.76 370.86 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 297.91 364.27 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 217.06 132.95 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 271.98 383.23 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 285.59 363.98 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 365.30 109.35 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 166.54 150.45 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 274.48 215.97 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 328.67 368.87 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 272.35 115.81 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 455.44 365.89 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 241.51 388.89 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 184.46 84.60 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 290.30 359.01 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 376.27 93.93 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 321.69 225.79 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 325.88 145.31 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 330.24 358.44 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 231.58 186.43 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 175.43 117.56 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 187.64 276.05 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 288.44 145.23 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 300.30 272.64 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 232.39 359.08 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 327.14 288.49 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 267.06 243.62 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 237.09 272.74 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 207.74 117.56 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 220.56 285.87 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 306.99 421.78 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 140.81 135.56 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 336.31 428.60 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 405.04 397.04 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 343.34 137.46 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 321.39 130.95 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 369.62 135.77 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 199.22 378.73 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 197.79 380.58 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 391.71 373.06 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 194.90 141.86 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 363.42 363.47 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 238.05 368.46 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 302.64 365.37 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 247.67 296.91 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 281.16 87.02 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 126.01 145.54 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 272.17 150.16 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 264.87 118.10 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 236.66 239.79 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 231.33 367.70 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 162.43 139.70 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 211.98 266.17 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 313.64 126.71 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 355.45 96.44 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 232.66 147.51 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 152.31 270.06 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 161.55 149.19 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 145.16 140.22 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 254.52 141.46 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 252.23 146.83 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 283.14 144.31 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 236.03 137.25 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 309.92 254.74 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 331.25 368.60 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 288.93 142.56 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 118.77 142.71 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 260.49 356.52 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 317.25 312.25 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 364.78 316.81 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 249.62 128.04 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 282.97 230.71 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 388.64 140.44 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 194.71 143.39 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 71.44 116.05 Tm (l) Tj 0 Tr ET BT /F1 1 Tf 1 Tr 7.48 0 0 7.48 324.14 130.38 Tm (l) Tj 0 Tr ET Q q /sRGB CS 0.000 0.000 0.000 SCN 0.75 w [] 0 d 1 J 1 j 10.00 M 77.70 73.44 m 467.08 73.44 l S 77.70 73.44 m 77.70 66.24 l S 142.60 73.44 m 142.60 66.24 l S 207.50 73.44 m 207.50 66.24 l S 272.39 73.44 m 272.39 66.24 l S 337.29 73.44 m 337.29 66.24 l S 402.18 73.44 m 402.18 66.24 l S 467.08 73.44 m 467.08 66.24 l S BT /sRGB cs 0.000 0.000 0.000 scn /F2 1 Tf 12.00 0.00 -0.00 12.00 70.86 47.52 Tm (-6) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 135.76 47.52 Tm (-4) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 200.66 47.52 Tm (-2) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 269.06 47.52 Tm (0) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 333.95 47.52 Tm (2) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 398.85 47.52 Tm (4) Tj ET BT /F2 1 Tf 12.00 0.00 -0.00 12.00 463.74 47.52 Tm (6) Tj ET 59.04 74.72 m 59.04 436.85 l S 59.04 74.72 m 51.84 74.72 l S 59.04 165.25 m 51.84 165.25 l S 59.04 255.79 m 51.84 255.79 l S 59.04 346.32 m 51.84 346.32 l S 59.04 436.85 m 51.84 436.85 l S BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 67.88 Tm (-2) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 158.41 Tm (-1) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 252.45 Tm (0) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 342.98 Tm (1) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 41.76 433.51 Tm (2) Tj ET 59.04 73.44 m 473.76 73.44 l 473.76 444.96 l 59.04 444.96 l 59.04 73.44 l S Q q BT /sRGB cs 0.000 0.000 0.000 scn /F2 1 Tf 12.00 0.00 -0.00 12.00 179.37 18.72 Tm (flatlizards$predictors$throat.PC3) Tj ET BT /F2 1 Tf 0.00 12.00 -12.00 0.00 12.96 211.52 Tm (lizModel.residuals) Tj ET Q endstream endobj 9 0 obj 6385 endobj 3 0 obj << /Type /Pages /Kids [ 7 0 R ] /Count 1 /MediaBox [0 0 504 504] >> endobj 4 0 obj << /ProcSet [/PDF /Text] /Font << /F1 11 0 R /F2 12 0 R >> /ExtGState << >> /ColorSpace << /sRGB 5 0 R >> >> endobj 5 0 obj [/ICCBased 6 0 R] endobj 6 0 obj << /N 3 /Alternate /DeviceRGB /Length 9433 /Filter /ASCIIHexDecode >> stream 00 00 0c 48 4c 69 6e 6f 02 10 00 00 6d 6e 74 72 52 47 42 20 58 59 5a 20 07 ce 00 02 00 09 00 06 00 31 00 00 61 63 73 70 4d 53 46 54 00 00 00 00 49 45 43 20 73 52 47 42 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f6 d6 00 01 00 00 00 00 d3 2d 48 50 20 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 11 63 70 72 74 00 00 01 50 00 00 00 33 64 65 73 63 00 00 01 84 00 00 00 6c 77 74 70 74 00 00 01 f0 00 00 00 14 62 6b 70 74 00 00 02 04 00 00 00 14 72 58 59 5a 00 00 02 18 00 00 00 14 67 58 59 5a 00 00 02 2c 00 00 00 14 62 58 59 5a 00 00 02 40 00 00 00 14 64 6d 6e 64 00 00 02 54 00 00 00 70 64 6d 64 64 00 00 02 c4 00 00 00 88 76 75 65 64 00 00 03 4c 00 00 00 86 76 69 65 77 00 00 03 d4 00 00 00 24 6c 75 6d 69 00 00 03 f8 00 00 00 14 6d 65 61 73 00 00 04 0c 00 00 00 24 74 65 63 68 00 00 04 30 00 00 00 0c 72 54 52 43 00 00 04 3c 00 00 08 0c 67 54 52 43 00 00 04 3c 00 00 08 0c 62 54 52 43 00 00 04 3c 00 00 08 0c 74 65 78 74 00 00 00 00 43 6f 70 79 72 69 67 68 74 20 28 63 29 20 31 39 39 38 20 48 65 77 6c 65 74 74 2d 50 61 63 6b 61 72 64 20 43 6f 6d 70 61 6e 79 00 00 64 65 73 63 00 00 00 00 00 00 00 12 73 52 47 42 20 49 45 43 36 31 39 36 36 2d 32 2e 31 00 00 00 00 00 00 00 00 00 00 00 12 73 52 47 42 20 49 45 43 36 31 39 36 36 2d 32 2e 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 58 59 5a 20 00 00 00 00 00 00 f3 51 00 01 00 00 00 01 16 cc 58 59 5a 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 58 59 5a 20 00 00 00 00 00 00 6f a2 00 00 38 f5 00 00 03 90 58 59 5a 20 00 00 00 00 00 00 62 99 00 00 b7 85 00 00 18 da 58 59 5a 20 00 00 00 00 00 00 24 a0 00 00 0f 84 00 00 b6 cf 64 65 73 63 00 00 00 00 00 00 00 16 49 45 43 20 68 74 74 70 3a 2f 2f 77 77 77 2e 69 65 63 2e 63 68 00 00 00 00 00 00 00 00 00 00 00 16 49 45 43 20 68 74 74 70 3a 2f 2f 77 77 77 2e 69 65 63 2e 63 68 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 64 65 73 63 00 00 00 00 00 00 00 2e 49 45 43 20 36 31 39 36 36 2d 32 2e 31 20 44 65 66 61 75 6c 74 20 52 47 42 20 63 6f 6c 6f 75 72 20 73 70 61 63 65 20 2d 20 73 52 47 42 00 00 00 00 00 00 00 00 00 00 00 2e 49 45 43 20 36 31 39 36 36 2d 32 2e 31 20 44 65 66 61 75 6c 74 20 52 47 42 20 63 6f 6c 6f 75 72 20 73 70 61 63 65 20 2d 20 73 52 47 42 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 64 65 73 63 00 00 00 00 00 00 00 2c 52 65 66 65 72 65 6e 63 65 20 56 69 65 77 69 6e 67 20 43 6f 6e 64 69 74 69 6f 6e 20 69 6e 20 49 45 43 36 31 39 36 36 2d 32 2e 31 00 00 00 00 00 00 00 00 00 00 00 2c 52 65 66 65 72 65 6e 63 65 20 56 69 65 77 69 6e 67 20 43 6f 6e 64 69 74 69 6f 6e 20 69 6e 20 49 45 43 36 31 39 36 36 2d 32 2e 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 69 65 77 00 00 00 00 00 13 a4 fe 00 14 5f 2e 00 10 cf 14 00 03 ed cc 00 04 13 0b 00 03 5c 9e 00 00 00 01 58 59 5a 20 00 00 00 00 00 4c 09 56 00 50 00 00 00 57 1f e7 6d 65 61 73 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 8f 00 00 00 02 73 69 67 20 00 00 00 00 43 52 54 20 63 75 72 76 00 00 00 00 00 00 04 00 00 00 00 05 00 0a 00 0f 00 14 00 19 00 1e 00 23 00 28 00 2d 00 32 00 37 00 3b 00 40 00 45 00 4a 00 4f 00 54 00 59 00 5e 00 63 00 68 00 6d 00 72 00 77 00 7c 00 81 00 86 00 8b 00 90 00 95 00 9a 00 9f 00 a4 00 a9 00 ae 00 b2 00 b7 00 bc 00 c1 00 c6 00 cb 00 d0 00 d5 00 db 00 e0 00 e5 00 eb 00 f0 00 f6 00 fb 01 01 01 07 01 0d 01 13 01 19 01 1f 01 25 01 2b 01 32 01 38 01 3e 01 45 01 4c 01 52 01 59 01 60 01 67 01 6e 01 75 01 7c 01 83 01 8b 01 92 01 9a 01 a1 01 a9 01 b1 01 b9 01 c1 01 c9 01 d1 01 d9 01 e1 01 e9 01 f2 01 fa 02 03 02 0c 02 14 02 1d 02 26 02 2f 02 38 02 41 02 4b 02 54 02 5d 02 67 02 71 02 7a 02 84 02 8e 02 98 02 a2 02 ac 02 b6 02 c1 02 cb 02 d5 02 e0 02 eb 02 f5 03 00 03 0b 03 16 03 21 03 2d 03 38 03 43 03 4f 03 5a 03 66 03 72 03 7e 03 8a 03 96 03 a2 03 ae 03 ba 03 c7 03 d3 03 e0 03 ec 03 f9 04 06 04 13 04 20 04 2d 04 3b 04 48 04 55 04 63 04 71 04 7e 04 8c 04 9a 04 a8 04 b6 04 c4 04 d3 04 e1 04 f0 04 fe 05 0d 05 1c 05 2b 05 3a 05 49 05 58 05 67 05 77 05 86 05 96 05 a6 05 b5 05 c5 05 d5 05 e5 05 f6 06 06 06 16 06 27 06 37 06 48 06 59 06 6a 06 7b 06 8c 06 9d 06 af 06 c0 06 d1 06 e3 06 f5 07 07 07 19 07 2b 07 3d 07 4f 07 61 07 74 07 86 07 99 07 ac 07 bf 07 d2 07 e5 07 f8 08 0b 08 1f 08 32 08 46 08 5a 08 6e 08 82 08 96 08 aa 08 be 08 d2 08 e7 08 fb 09 10 09 25 09 3a 09 4f 09 64 09 79 09 8f 09 a4 09 ba 09 cf 09 e5 09 fb 0a 11 0a 27 0a 3d 0a 54 0a 6a 0a 81 0a 98 0a ae 0a c5 0a dc 0a f3 0b 0b 0b 22 0b 39 0b 51 0b 69 0b 80 0b 98 0b b0 0b c8 0b e1 0b f9 0c 12 0c 2a 0c 43 0c 5c 0c 75 0c 8e 0c a7 0c c0 0c d9 0c f3 0d 0d 0d 26 0d 40 0d 5a 0d 74 0d 8e 0d a9 0d c3 0d de 0d f8 0e 13 0e 2e 0e 49 0e 64 0e 7f 0e 9b 0e b6 0e d2 0e ee 0f 09 0f 25 0f 41 0f 5e 0f 7a 0f 96 0f b3 0f cf 0f ec 10 09 10 26 10 43 10 61 10 7e 10 9b 10 b9 10 d7 10 f5 11 13 11 31 11 4f 11 6d 11 8c 11 aa 11 c9 11 e8 12 07 12 26 12 45 12 64 12 84 12 a3 12 c3 12 e3 13 03 13 23 13 43 13 63 13 83 13 a4 13 c5 13 e5 14 06 14 27 14 49 14 6a 14 8b 14 ad 14 ce 14 f0 15 12 15 34 15 56 15 78 15 9b 15 bd 15 e0 16 03 16 26 16 49 16 6c 16 8f 16 b2 16 d6 16 fa 17 1d 17 41 17 65 17 89 17 ae 17 d2 17 f7 18 1b 18 40 18 65 18 8a 18 af 18 d5 18 fa 19 20 19 45 19 6b 19 91 19 b7 19 dd 1a 04 1a 2a 1a 51 1a 77 1a 9e 1a c5 1a ec 1b 14 1b 3b 1b 63 1b 8a 1b b2 1b da 1c 02 1c 2a 1c 52 1c 7b 1c a3 1c cc 1c f5 1d 1e 1d 47 1d 70 1d 99 1d c3 1d ec 1e 16 1e 40 1e 6a 1e 94 1e be 1e e9 1f 13 1f 3e 1f 69 1f 94 1f bf 1f ea 20 15 20 41 20 6c 20 98 20 c4 20 f0 21 1c 21 48 21 75 21 a1 21 ce 21 fb 22 27 22 55 22 82 22 af 22 dd 23 0a 23 38 23 66 23 94 23 c2 23 f0 24 1f 24 4d 24 7c 24 ab 24 da 25 09 25 38 25 68 25 97 25 c7 25 f7 26 27 26 57 26 87 26 b7 26 e8 27 18 27 49 27 7a 27 ab 27 dc 28 0d 28 3f 28 71 28 a2 28 d4 29 06 29 38 29 6b 29 9d 29 d0 2a 02 2a 35 2a 68 2a 9b 2a cf 2b 02 2b 36 2b 69 2b 9d 2b d1 2c 05 2c 39 2c 6e 2c a2 2c d7 2d 0c 2d 41 2d 76 2d ab 2d e1 2e 16 2e 4c 2e 82 2e b7 2e ee 2f 24 2f 5a 2f 91 2f c7 2f fe 30 35 30 6c 30 a4 30 db 31 12 31 4a 31 82 31 ba 31 f2 32 2a 32 63 32 9b 32 d4 33 0d 33 46 33 7f 33 b8 33 f1 34 2b 34 65 34 9e 34 d8 35 13 35 4d 35 87 35 c2 35 fd 36 37 36 72 36 ae 36 e9 37 24 37 60 37 9c 37 d7 38 14 38 50 38 8c 38 c8 39 05 39 42 39 7f 39 bc 39 f9 3a 36 3a 74 3a b2 3a ef 3b 2d 3b 6b 3b aa 3b e8 3c 27 3c 65 3c a4 3c e3 3d 22 3d 61 3d a1 3d e0 3e 20 3e 60 3e a0 3e e0 3f 21 3f 61 3f a2 3f e2 40 23 40 64 40 a6 40 e7 41 29 41 6a 41 ac 41 ee 42 30 42 72 42 b5 42 f7 43 3a 43 7d 43 c0 44 03 44 47 44 8a 44 ce 45 12 45 55 45 9a 45 de 46 22 46 67 46 ab 46 f0 47 35 47 7b 47 c0 48 05 48 4b 48 91 48 d7 49 1d 49 63 49 a9 49 f0 4a 37 4a 7d 4a c4 4b 0c 4b 53 4b 9a 4b e2 4c 2a 4c 72 4c ba 4d 02 4d 4a 4d 93 4d dc 4e 25 4e 6e 4e b7 4f 00 4f 49 4f 93 4f dd 50 27 50 71 50 bb 51 06 51 50 51 9b 51 e6 52 31 52 7c 52 c7 53 13 53 5f 53 aa 53 f6 54 42 54 8f 54 db 55 28 55 75 55 c2 56 0f 56 5c 56 a9 56 f7 57 44 57 92 57 e0 58 2f 58 7d 58 cb 59 1a 59 69 59 b8 5a 07 5a 56 5a a6 5a f5 5b 45 5b 95 5b e5 5c 35 5c 86 5c d6 5d 27 5d 78 5d c9 5e 1a 5e 6c 5e bd 5f 0f 5f 61 5f b3 60 05 60 57 60 aa 60 fc 61 4f 61 a2 61 f5 62 49 62 9c 62 f0 63 43 63 97 63 eb 64 40 64 94 64 e9 65 3d 65 92 65 e7 66 3d 66 92 66 e8 67 3d 67 93 67 e9 68 3f 68 96 68 ec 69 43 69 9a 69 f1 6a 48 6a 9f 6a f7 6b 4f 6b a7 6b ff 6c 57 6c af 6d 08 6d 60 6d b9 6e 12 6e 6b 6e c4 6f 1e 6f 78 6f d1 70 2b 70 86 70 e0 71 3a 71 95 71 f0 72 4b 72 a6 73 01 73 5d 73 b8 74 14 74 70 74 cc 75 28 75 85 75 e1 76 3e 76 9b 76 f8 77 56 77 b3 78 11 78 6e 78 cc 79 2a 79 89 79 e7 7a 46 7a a5 7b 04 7b 63 7b c2 7c 21 7c 81 7c e1 7d 41 7d a1 7e 01 7e 62 7e c2 7f 23 7f 84 7f e5 80 47 80 a8 81 0a 81 6b 81 cd 82 30 82 92 82 f4 83 57 83 ba 84 1d 84 80 84 e3 85 47 85 ab 86 0e 86 72 86 d7 87 3b 87 9f 88 04 88 69 88 ce 89 33 89 99 89 fe 8a 64 8a ca 8b 30 8b 96 8b fc 8c 63 8c ca 8d 31 8d 98 8d ff 8e 66 8e ce 8f 36 8f 9e 90 06 90 6e 90 d6 91 3f 91 a8 92 11 92 7a 92 e3 93 4d 93 b6 94 20 94 8a 94 f4 95 5f 95 c9 96 34 96 9f 97 0a 97 75 97 e0 98 4c 98 b8 99 24 99 90 99 fc 9a 68 9a d5 9b 42 9b af 9c 1c 9c 89 9c f7 9d 64 9d d2 9e 40 9e ae 9f 1d 9f 8b 9f fa a0 69 a0 d8 a1 47 a1 b6 a2 26 a2 96 a3 06 a3 76 a3 e6 a4 56 a4 c7 a5 38 a5 a9 a6 1a a6 8b a6 fd a7 6e a7 e0 a8 52 a8 c4 a9 37 a9 a9 aa 1c aa 8f ab 02 ab 75 ab e9 ac 5c ac d0 ad 44 ad b8 ae 2d ae a1 af 16 af 8b b0 00 b0 75 b0 ea b1 60 b1 d6 b2 4b b2 c2 b3 38 b3 ae b4 25 b4 9c b5 13 b5 8a b6 01 b6 79 b6 f0 b7 68 b7 e0 b8 59 b8 d1 b9 4a b9 c2 ba 3b ba b5 bb 2e bb a7 bc 21 bc 9b bd 15 bd 8f be 0a be 84 be ff bf 7a bf f5 c0 70 c0 ec c1 67 c1 e3 c2 5f c2 db c3 58 c3 d4 c4 51 c4 ce c5 4b c5 c8 c6 46 c6 c3 c7 41 c7 bf c8 3d c8 bc c9 3a c9 b9 ca 38 ca b7 cb 36 cb b6 cc 35 cc b5 cd 35 cd b5 ce 36 ce b6 cf 37 cf b8 d0 39 d0 ba d1 3c d1 be d2 3f d2 c1 d3 44 d3 c6 d4 49 d4 cb d5 4e d5 d1 d6 55 d6 d8 d7 5c d7 e0 d8 64 d8 e8 d9 6c d9 f1 da 76 da fb db 80 dc 05 dc 8a dd 10 dd 96 de 1c de a2 df 29 df af e0 36 e0 bd e1 44 e1 cc e2 53 e2 db e3 63 e3 eb e4 73 e4 fc e5 84 e6 0d e6 96 e7 1f e7 a9 e8 32 e8 bc e9 46 e9 d0 ea 5b ea e5 eb 70 eb fb ec 86 ed 11 ed 9c ee 28 ee b4 ef 40 ef cc f0 58 f0 e5 f1 72 f1 ff f2 8c f3 19 f3 a7 f4 34 f4 c2 f5 50 f5 de f6 6d f6 fb f7 8a f8 19 f8 a8 f9 38 f9 c7 fa 57 fa e7 fb 77 fc 07 fc 98 fd 29 fd ba fe 4b fe dc ff 6d ff ff > endstream endobj 10 0 obj << /Type /Encoding /BaseEncoding /WinAnsiEncoding /Differences [ 45/minus 96/quoteleft 144/dotlessi /grave /acute /circumflex /tilde /macron /breve /dotaccent /dieresis /.notdef /ring /cedilla /.notdef /hungarumlaut /ogonek /caron /space] >> endobj 11 0 obj << /Type /Font /Subtype /Type1 /Name /F1 /BaseFont /ZapfDingbats >> endobj 12 0 obj << /Type /Font /Subtype /Type1 /Name /F2 /BaseFont /Helvetica /Encoding 10 0 R >> endobj xref 0 13 0000000000 65535 f 0000000021 00000 n 0000000164 00000 n 0000006751 00000 n 0000006834 00000 n 0000006958 00000 n 0000006991 00000 n 0000000213 00000 n 0000000293 00000 n 0000006731 00000 n 0000016527 00000 n 0000016785 00000 n 0000016869 00000 n trailer << /Size 13 /Info 1 0 R /Root 2 0 R >> startxref 16967 %%EOF BradleyTerry2/vignettes/BradleyTerry.Rnw0000744000176200001440000013105413160423164020061 0ustar liggesusers% \VignetteIndexEntry{Bradley-Terry models in R} % \VignetteKeyword{generalized linear model} % \VignetteKeyword{logistic regression} % \VignetteKeyword{penalized quasi-likelihood} % \VignetteKeyword{ranking} % \VignetteKeyword{tournament analysis} % \VignetteKeyword{working residuals} % \VignettePackage{BradleyTerry2} %%% For jss: %% \documentclass{jss} %% \newcommand{\pkginfo}{} %%% uncomment for vignette version \documentclass[nojss]{jss} \newcommand{\pkginfo}{\small \\[12pt]For \pkg{BradleyTerry2} version \Sexpr{packageDescription("BradleyTerry2")[["Version"]]}, \Sexpr{Sys.Date()}\\\url{https://github.com/hturner/BradleyTerry2}\\[-12pt]} %% need no \usepackage{Sweave.sty} \usepackage[english]{babel} % to avoid et~al with texi2pdf \usepackage{amsmath} \usepackage{booktabs} \usepackage{thumbpdf} \setkeys{Gin}{width=0.6\textwidth} \SweaveOpts{keep.source=TRUE} %http://www.stat.auckland.ac.nz/~ihaka/downloads/Sweave-customisation.pdf \newcommand{\R}{\proglang{R}} \newcommand{\BT}{\pkg{BradleyTerry2}} \newcommand{\logit}{\mathop{\rm logit}} \newcommand{\pr}{\mathop{\rm pr}} \author{Heather Turner\\University of Warwick \And David Firth\\University of Warwick} \Plainauthor{Heather Turner, David Firth} \title{Bradley-Terry Models in \proglang{R}: The \BT\ Package \pkginfo} \Plaintitle{Bradley-Terry Models in R: The BradleyTerry2 Package} \Shorttitle{\pkg{BradleyTerry2}: Bradley-Terry Models in \proglang{R}} \Abstract{ This is a short overview of the \R\ add-on package \BT, which facilitates the specification and fitting of Bradley-Terry logit, probit or cauchit models to pair-comparison data. Included are the standard `unstructured' Bradley-Terry model, structured versions in which the parameters are related through a linear predictor to explanatory variables, and the possibility of an order or `home advantage' effect or other `contest-specific' effects. Model fitting is either by maximum likelihood, by penalized quasi-likelihood (for models which involve a random effect), or by bias-reduced maximum likelihood in which the first-order asymptotic bias of parameter estimates is eliminated. Also provided are a simple and efficient approach to handling missing covariate data, and suitably-defined residuals for diagnostic checking of the linear predictor. } \Keywords{generalized linear model, logistic regression, penalized quasi-likelihood, ranking, tournament analysis, working residuals} \Address{ David Firth\\ Department of Statistics\\ University of Warwick\\ Coventry\\ CV4 7AL, United Kingdom\\ E-mail: \email{d.firth@warwick.ac.uk}\\ URL: \url{http://go.warwick.ac.uk/dfirth} } \begin{document} @ <>= options(prompt = "R> ", continue = "+ ", width = 70, useFancyQuotes = FALSE, digits = 7) @ %def \section{Introduction} The Bradley-Terry model \citep{brad:terr:52} assumes that in a `contest' between any two `players', say player $i$ and player $j$ $(i, j \in \{1,\ldots,K\})$, the odds that $i$ beats $j$ are $\alpha_i/\alpha_j$, where $\alpha_i$ and $\alpha_j$ are positive-valued parameters which might be thought of as representing `ability'. A general introduction can be found in \citet{brad:84} or \citet{agre:02}. Applications are many, ranging from experimental psychology to the analysis of sports tournaments to genetics (for example, the allelic transmission/disequilibrium test of \citealp{sham:curt:95} is based on a Bradley-Terry model in which the `players' are alleles). In typical psychometric applications the `contests' are comparisons, made by different human subjects, between pairs of items. The model can alternatively be expressed in the logit-linear form \begin{equation} \logit[\pr(i\ \mathrm{beats}\ j)]=\lambda_i-\lambda_j, \label{eq:unstructured} \end{equation} where $\lambda_i=\log\alpha_i$ for all $i$. Thus, assuming independence of all contests, the parameters $\{\lambda_i\}$ can be estimated by maximum likelihood using standard software for generalized linear models, with a suitably specified model matrix. The primary purpose of the \BT\ package \citep{turn:12}, implemented in the \R\ statistical computing environment \citep{ihak:gent:96, R}, is to facilitate the specification and fitting of such models and some extensions. The \BT\ package supersedes the earlier \pkg{BradleyTerry} package \citep{firt:05}, providing a more flexible user interface to allow a wider range of models to be fitted. In particular, \BT\ allows the inclusion of simple random effects so that the ability parameters can be related to available explanatory variables through a linear predictor of the form \begin{equation} \lambda_i=\sum_{r=1}^p\beta_rx_{ir} + U_i. \end{equation} The inclusion of the prediction error $U_i$ allows for variability between players with equal covariate values and induces correlation between comparisons with a common player. \BT\ also allows for general contest-specific effects to be included in the model and allows the logit link to be replaced, if required, by a different symmetric link function (probit or cauchit). The remainder of the paper is organised as follows. Section~\ref{sec:BTmodel} demonstrates how to use the \pkg{BradleyTerry2} package to fit a standard (i.e., unstructured) Bradley-Terry model, with a separate ability parameter estimated for each player, including the use of bias-reduced estimation for such models. Section~\ref{sec:covariates} considers variations of the standard model, including the use of player-specific variables to model ability and allowing for contest-specific effects such as an order effect or judge effects. Sections~\ref{sec:ability} and \ref{sec:residuals} explain how to obtain important information about a fitted model, in particular the estimates of ability and their standard errors, and player-level residuals, whilst Section~\ref{sec:model} notes the functions available to aid model search. Section~\ref{sec:data} explains in more detail how set up data for use with the \BT\ package, Section~\ref{sec:functions} lists the functions provided by the package and finally Section~\ref{sec:finalremarks} comments on two directions for further development of the software. \section{Standard Bradley-Terry model} \label{sec:BTmodel} \subsection{Example: Analysis of journal citations} \label{citations} The following data come from page 448 of \citet{agre:02}, extracted from the larger table of \citet{stig:94}. The data are counts of citations among four prominent journals of statistics and are included the \BT\ package as the data set \code{citations}: @ <>= library("BradleyTerry2") @ @ <>= data("citations", package = "BradleyTerry2") @ @ <>= citations @ %def Thus, for example, \emph{Biometrika} was cited 498 times by papers in \emph{Journal of the American Statistical Association} (JASA) during the period under study. In order to fit a Bradley-Terry model to these data using \code{BTm} from the \BT\ package, the data must first be converted to binomial frequencies. That is, the data need to be organised into pairs (\code{player1}, \code{player2}) and corresponding frequencies of wins and losses for \code{player1} against \code{player2}. The \BT\ package provides the utility function \code{countsToBinomial} to convert a contingency table of wins to the format just described: @ <>= citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") citations.sf @ %def Note that the self-citation counts are ignored -- these provide no information on the ability parameters, since the abilities are relative rather than absolute quantities. The binomial response can then be modelled by the difference in player abilities as follows: @ <>= citeModel <- BTm(cbind(win1, win2), journal1, journal2, ~ journal, id = "journal", data = citations.sf) citeModel @ %def The coefficients here are maximum likelihood estimates of $\lambda_2, \lambda_3, \lambda_4$, with $\lambda_1$ (the log-ability for \emph{Biometrika}) set to zero as an identifying convention. The one-sided model formula \begin{verbatim} ~ journal \end{verbatim} specifies the model for player ability, in this case the `citeability' of the journal. The \code{id} argument specifies that \code{"journal"} is the name to be used for the factor that identifies the player -- the values of which are given here by \code{journal1} and \code{journal2} for the first and second players respectively. Therefore in this case a separate citeability parameter is estimated for each journal. If a different `reference' journal is required, this can be achieved using the optional \code{refcat} argument: for example, making use of \code{update} to avoid re-specifying the whole model, @ <>= update(citeModel, refcat = "JASA") @ %def -- the same model in a different parameterization. The use of the standard Bradley-Terry model for this application might perhaps seem rather questionable -- for example, citations within a published paper can hardly be considered independent, and the model discards potentially important information on self-citation. \citet{stig:94} provides arguments to defend the model's use despite such concerns. \subsection{Bias-reduced estimates} %\label{sec:bias} Estimation of the standard Bradley-Terry model in \code{BTm} is by default computed by maximum likelihood, using an internal call to the \code{glm} function. An alternative is to fit by bias-reduced maximum likelihood \citep{firt:93}: this requires additionally the \pkg{brglm} package \citep{kosm:07}, and is specified by the optional argument \code{br = TRUE}. The resultant effect, namely removal of first-order asymptotic bias in the estimated coefficients, is often quite small. One notable feature of bias-reduced fits is that all estimated coefficients and standard errors are necessarily finite, even in situations of `complete separation' where maximum likelihood estimates take infinite values \citep{hein:sche:02}. For the citation data, the parameter estimates are only very slightly changed in the bias-reduced fit: @ <>= update(citeModel, br = TRUE) @ %def Here the bias of maximum likelihood is small because the binomial counts are fairly large. In more sparse arrangements of contests -- that is, where there is less or no replication of the contests -- the effect of bias reduction would typically be more substantial than the insignificant one seen here. \section{Abilities predicted by explanatory variables} \label{sec:covariates} \subsection{`Player-specific' predictor variables} In some application contexts there may be `player-specific' explanatory variables available, and it is then natural to consider model simplification of the form \begin{equation} \lambda_i=\sum_{r=1}^p\beta_rx_{ir} + U_i, \end{equation} in which ability of each player $i$ is related to explanatory variables $x_{i1},\ldots,x_{ip}$ through a linear predictor with coefficients $\beta_1,\ldots,\beta_p$; the $\{U_i\}$ are independent errors. Dependence of the player abilities on explanatory variables can be specified via the \code{formula} argument, using the standard \emph{S}-language model formulae. The difference in the abilities of player $i$ and player $j$ is modelled by \begin{equation} \sum_{r=1}^p\beta_rx_{ir} - \sum_{r=1}^p\beta_rx_{jr} + U_i - U_j, \label{eq:structured} \end{equation} where $U_i \sim N(0, \sigma^2)$ for all $i$. The Bradley-Terry model is then a generalized linear mixed model, which the \code{BTm} function currently fits by using the penalized quasi-likelihood algorithm of \citet{bres:93}. As an illustration, consider the following simple model for the \code{flatlizards} data, which predicts the fighting ability of Augrabies flat lizards by body size (snout to vent length): @ <>= options(show.signif.stars = FALSE) data("flatlizards", package = "BradleyTerry2") lizModel <- BTm(1, winner, loser, ~ SVL[..] + (1|..), data = flatlizards) @ %def Here the winner of each fight is compared to the loser, so the outcome is always 1. The special name `\code{..}' appears in the formula as the default identifier for players, in the absence of a user-specified \code{id} argument. The values of this factor are given by \code{winner} for the winning lizard and \code{loser} for the losing lizard in each contest. %Since \code{winner} %and \code{loser} are specific instances of the factor \code{..}, they must %share the same set of levels (one for each lizard). %The factors \code{winner}and \code{loser} These factors are provided in the data frame \code{contests} that is the first element of the list object \code{flatlizards}. The second element of \code{flatlizards} is another data frame, \code{predictors}, containing measurements on the observed lizards, including \code{SVL}, which is the snout to vent length. Thus \code{SVL[..]} represents the snout to vent length indexed by lizard (\code{winner} or \code{loser} as appropriate). Finally a random intercept for each lizard is included using the bar notation familiar to users of the \pkg{lme4} package \citep{bate:11}. (Note that a random intercept is the only random effect structure currently implemented in \pkg{BradleyTerry2}.) The fitted model is summarized below: @ <>= summary(lizModel) @ %def The coefficient of snout to vent length is weakly significant; however, the standard deviation of the random effect is quite large, suggesting that this simple model has fairly poor explanatory power. A more appropriate model is considered in the next section. \subsection{Missing values} The contest data may include all possible pairs of players and hence rows of missing data corresponding to players paired with themselves. Such rows contribute no information to the Bradley-Terry model and are simply discarded by \code{BTm}. Where there are missing values in player-specific \emph{predictor} (or \emph{explanatory}) variables which appear in the formula, it will typically be very wasteful to discard all contests involving players for which some values are missing. Instead, such cases are accommodated by the inclusion of one or more parameters in the model. If, for example, player $1$ has one or more of its predictor values $x_{11},\ldots,x_{1p}$ missing, then the combination of Equations~\ref{eq:unstructured} and \ref{eq:structured} above yields \begin{equation} \logit[\pr(1\ \mathrm{beats}\ j)]=\lambda_1 - \left(\sum_{r=1}^p\beta_rx_{jr} + U_j\right), \end{equation} for all other players $j$. This results in the inclusion of a `direct' ability parameter for each player having missing predictor values, in addition to the common coefficients $\beta_1,\ldots,\beta_p$ -- an approach which will be appropriate when the missingness mechanism is unrelated to contest success. The same device can be used also to accommodate any user-specified departures from a structured Bradley-Terry model, whereby some players have their abilities determined by the linear predictor but others do not. In the original analysis of the \code{flatlizards} data \citep{whit:06}, the final model included the first and third principal components of the spectral reflectance from the throat (representing brightness and UV intensity respectively) as well as head length and the snout to vent length seen in our earlier model. The spectroscopy data was missing for two lizards, therefore the ability of these lizards was estimated directly. The following fits this model, with the addition of a random intercept as before: @ <>= lizModel2 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards) summary(lizModel2) @ %def Note that \code{BTm} detects that lizards 96 and 99 have missing values in the specified predictors and automatically includes separate ability parameters for these lizards. This model was found to be the single best model based on the principal components of reflectance and the other predictors available and indeed the standard deviation of the random intercept is much reduced, but still highly significant. Allowing for this significant variation between lizards with the same predictor values produces more realistic (i.e., larger) standard errors for the parameters when compared to the original analysis of \citet{whit:06}. Although this affects the significance of the morphological variables, it does not affect the significance of the principal components, so in this case does not affect the main conclusions of the study. \subsection{Order effect} \label{sec:order} In certain types of application some or all contests have an associated `bias', related to the order in which items are presented to a judge or with the location in which a contest takes place, for example. A natural extension of the Bradley-Terry model (Equation~\ref{eq:unstructured}) is then \begin{equation} \logit[\pr(i\ \mathrm{beats}\ j)]=\lambda_i-\lambda_j + \delta z, \end{equation} where $z=1$ if $i$ has the supposed advantage and $z=-1$ if $j$ has it. (If the `advantage' is in fact a disadvantage, $\delta$ will be negative.) The scores $\lambda_i$ then relate to ability in the absence of any such advantage. As an example, consider the baseball data given in \citet{agre:02}, page 438: @ <>= data("baseball", package = "BradleyTerry2") head(baseball) @ %def The data set records the home wins and losses for each baseball team against each of the 6 other teams in the data set. The \code{head} function is used to show the first 6 records, which are the Milwaukee home games. We see for example that Milwaukee played 7 home games against Detroit and won 4 of them. The `standard' Bradley-Terry model without a home-advantage parameter will be fitted if no formula is specified in the call to \code{BTm}: @ <>= baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") summary(baseballModel1) @ %def The reference team is Baltimore, estimated to be the weakest of these seven, with Milwaukee and Detroit the strongest. In the above, the ability of each team is modelled simply as \code{~ team} where the values of the factor \code{team} are given by \code{home.team} for the first team and \code{away.team} for the second team in each game. To estimate the home-advantage effect, an additional variable is required to indicate whether the team is at home or not. Therefore data frames containing both the team factor and this new indicator variable are required in place of the factors \code{home.team} and \code{away.team} in the call to \code{BTm}. This is achieved here by over-writing the \code{home.team} and \code{away.team} factors in the \code{baseball} data frame: @ <>= baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) @ %def The \code{at.home} variable is needed for both the home team and the away team, so that it can be differenced as appropriate in the linear predictor. With the data organised in this way, the ability formula can now be updated to include the \code{at.home} variable as follows: @ <>= baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) summary(baseballModel2) @ %def \vspace*{-0.3cm} This reproduces the results given on page 438 of \citet{agre:02}: the home team has an estimated odds-multiplier of $\exp(0.3023) = 1.35$ in its favour. \vspace*{-0.2cm} \subsection{More general (contest-specific) predictors} \label{sec:CEMS} The `home advantage' effect is a simple example of a contest-specific predictor. Such predictors are necessarily interactions, between aspects of the contest and (aspects of) the two `players' involved. For more elaborate examples of such effects, see \code{?chameleons} and \code{?CEMS}. The former includes an `experience' effect, which changes through time, on the fighting ability of male chameleons. The latter illustrates a common situation in psychometric applications of the Bradley-Terry model, where \emph{subjects} express preference for one of two \emph{objects} (the `players'), and it is the influence on the results of subject attributes that is of primary interest. As an illustration of the way in which such effects are specified, consider the following model specification taken from the examples in \code{?CEMS}, where data on students' preferences in relation to six European management schools is analysed. \vspace*{-0.3cm} @ <>= data("CEMS", package = "BradleyTerry2") table8.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * LAT[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) @ %def This model reproduces results from Table~8 of \cite{ditt:01} apart from minor differences due to the different treatment of ties. Here the outcome is the binomial frequency of preference for \code{school1} over \code{school2}, with ties counted as half a `win' and half a `loss'. The formula specifies the model for school `ability' or worth. In this formula, the default label `\code{..}' represents the school (with values given by \code{school1} or \code{school2} as appropriate) and \code{student} is a factor specifying the student that made the comparison. The remaining variables in the formula use \proglang{R}'s standard indexing mechanism to include student-specific variables, e.g., \code{WOR}: whether or not the student was in full-time employment, and school-specific variables, e.g., \code{LAT}: whether the school was in a `Latin' city. Thus there are three types of variables: contest-specific (\code{school1}, \code{school2}, \code{student}), subject-specific (\code{WOR}, \code{DEG}, \ldots) and object-specific (\code{LAT}, \code{St.Gallen}, \ldots). These three types of variables are provided in three data frames, contained in the list object \code{CEMS}. \section{Ability scores} \label{sec:ability} The function \code{BTabilities} extracts estimates and standard errors for the log-ability scores $\lambda_1, \ldots,\lambda_K$. These will either be `direct' estimates, in the case of the standard Bradley-Terry model or for players with one or more missing predictor values, or `model-based' estimates of the form $\hat\lambda_i=\sum_{r=1}^p\hat\beta_rx_{ir}$ for players whose ability is predicted by explanatory variables. As a simple illustration, team ability estimates in the home-advantage model for the \code{baseball} data are obtained by: @ <>= BTabilities(baseballModel2) @ %def This gives, for each team, the estimated ability when the team enjoys no home advantage. Similarly, estimates of the fighting ability of each lizard in the \code{flatlizards} data under the model based on the principal components of the spectral reflectance from the throat are obtained as follows: @ <>= head(BTabilities(lizModel2), 4) @ %def % The ability estimates in an unstructured Bradley-Terry model are particularly well suited to presentation using the device of \emph{quasi-variances} \citep{firt:04}. The \pkg{qvcalc} package \citep[][version 0.8-5 or later]{firt:10} contains a function of the same name which does the necessary work: \begin{Sinput} > library("qvcalc") > baseball.qv <- qvcalc(BTabilities(baseballModel2)) > plot(baseball.qv, + levelNames = c("Bal", "Bos", "Cle", "Det", "Mil", "NY", "Tor")) \end{Sinput} % \begin{figure}[t!] \centering \includegraphics[width=0.67\textwidth]{baseball-qvplot.pdf} \caption{Estimated relative abilities of baseball teams.\label{fig:qvplot}} \end{figure} % The `comparison intervals' as shown in Figure~\ref{fig:qvplot} are based on `quasi standard errors', and can be interpreted as if they refer to \emph{independent} estimates of ability for the journals. This has the advantage that comparison between any pair of journals is readily made (i.e., not only comparisons with the `reference' journal). For details of the theory and method of calculation see \citet{firt:04}. \section{Residuals} \label{sec:residuals} There are two main types of residuals available for a Bradley-Terry model object. First, there are residuals obtained by the standard methods for models of class \code{"glm"}. These all deliver one residual for each contest or type of contest. For example, Pearson residuals for the model \code{lizModel2} can be obtained simply by \vspace*{0.2cm} @ <>= res.pearson <- round(residuals(lizModel2), 3) head(cbind(flatlizards$contests, res.pearson), 4) @ %def \vspace*{-0.2cm} More useful for diagnostics on the linear predictor $\sum\beta_rx_{ir}$ are `player'-level residuals, obtained by using the function \code{residuals} with argument \code{type = "grouped"}. These residuals can then be plotted against other player-specific variables. \vspace*{-0.2cm} @ <>= res <- residuals(lizModel2, type = "grouped") # with(flatlizards$predictors, plot(throat.PC2, res)) # with(flatlizards$predictors, plot(head.width, res)) @ %def \vspace*{-0.2cm} These residuals estimate the error in the linear predictor; they are obtained by suitable aggregation of the so-called `working' residuals from the model fit. The \code{weights} attribute indicates the relative information in these residuals -- weight is roughly inversely proportional to variance -- which may be useful for plotting and/or interpretation; for example, a large residual may be of no real concern if based on very little information. Weighted least-squares regression of these residuals on any variable already in the model is null. For example: \vspace*{-0.2cm} @ <>= lm(res ~ throat.PC1, weights = attr(res, "weights"), data = flatlizards$predictors) lm(res ~ head.length, weights = attr(res, "weights"), data = flatlizards$predictors) @ %def %$ \vspace*{-0.2cm} As an illustration of evident \emph{non-null} residual structure, consider the unrealistically simple model \code{lizModel} that was fitted in Section~\ref{sec:covariates} above. That model lacks the clearly significant predictor variable \code{throat.PC3}, and the plot shown in Figure~\ref{fig:residuals} demonstrates this fact graphically: \begin{Sinput} > lizModel.residuals <- residuals(lizModel, type = "grouped") > plot(flatlizards$predictors$throat.PC3, lizModel.residuals) \end{Sinput} % \begin{figure}[t!] \centering \includegraphics[width=0.69\textwidth]{residuals.pdf} \caption{Lizard residuals for the simple model \code{lizModel}, plotted against \code{throat.PC3}.\label{fig:residuals}} \end{figure} % The residuals in the plot exhibit a strong, positive regression slope in relation to the omitted predictor variable \code{throat.PC3}. \section{Model search} \label{sec:model} In addition to \code{update()} as illustrated in preceding sections, methods for the generic functions \code{add1()}, \code{drop1()} and \code{anova()} are provided. These can be used to investigate the effect of adding or removing a variable, whether that variable is contest-specific, such as an order effect, or player-specific; and to compare the fit of nested models. %These can be used in the standard way for model elaboration or specialization, %and their availability also allows the use of \texttt{\color{black} step()} for %automated exploration of a set of candidate player-specific predictors. \section{Setting up the data} \label{sec:data} \subsection{Contest-specific data} \label{sec:contest} The \code{outcome} argument of \code{BTm} represents a binomial response and can be supplied in any of the formats allowed by the \code{glm} function. That is, either a two-column matrix with the columns giving the number of wins and losses (for \code{player1} vs.\ \code{player2}), a factor where the first level denotes a loss and all other levels denote a win, or a binary variable where 0 denotes a loss and 1 denotes a win. Each row represents either a single contest or a set of contests between the same two players. The \code{player1} and \code{player2} arguments are either factors specifying the two players in each contest, or data frames containing such factors, along with any contest-specific variables that are also player-specific, such as the \code{at.home} variable seen in Section~\ref{sec:order}. If given in data frames, the factors identifying the players should be named as specified by the \code{id} argument and should have identical levels, since they represent a particular sample of the full set of players. Thus for the model \code{baseballModel2}, which was specified by the following call: @ <>= baseballModel2$call @ %def the data are provided in the \code{baseball} data frame, which has the following structure: @ <>= str(baseball, vec.len = 2) @ %def In this case \code{home.team} and \code{away.team} are both data frames, with the factor \code{team} specifying the team and the variable \code{at.home} specifying whether or not the team was at home. So the first comparison @ <>= baseball$home.team[1,] baseball$away.team[1,] @ %def is Milwaukee playing at home against Detroit. The outcome is given by @ <>= baseball[1, c("home.wins", "away.wins")] @ %def Contest-specific variables that are \emph{not} player-specific -- for example, whether it rained or not during a contest -- should only be used in interactions with variables that \emph{are} player-specific, otherwise the effect on ability would be the same for both players and would cancel out. Such variables can conveniently be provided in a single data frame along with the \code{outcome}, \code{player1} and \code{player2} data. An offset in the model can be specified by using the \code{offset} argument to \code{BTm}\null. This facility is provided for completeness: the authors have not yet encountered an application where it is needed. To use only certain rows of the contest data in the analysis, the \code{subset} argument may be used in the call to \code{BTm}. This should either be a logical vector of the same length as the binomial response, or a numeric vector containing the indices of rows to be used. \subsection{Non contest-specific data} \label{sec:non-contest} Some variables do not vary by contest directly, but rather vary by a factor that is contest-specific, such as the player ID or the judge making the paired comparison. For such variables, it is more economical to store the data by the levels of the contest-specific factor and use indexing to obtain the values for each contest. The \code{CEMS} example in Section~\ref{sec:CEMS} provides an illustration of such variables. In this example student-specific variables are indexed by \code{student} and school-specific variables are indexed by \code{..}, i.e., the first or second school in the comparison as appropriate. There are then two extra sets of variables in addition to the usual contest-specific data as described in the last section. A good way to provide these data to \code{BTm} is as a list of data frames, one for each set of variables, e.g., @ <>= str(CEMS, vec.len = 2) @ %def The names of the data frames are only used by \code{BTm} if they match the names specified in the \code{player1} and \code{player2} arguments, in which case it is assumed that these are data frames providing the data for the first and second player respectively. The rows of data frames in the list should either correspond to the contests or the levels of the factor used for indexing. Player-specific offsets should be included in the formula by using the \code{offset} function. \subsection{Converting data from a `wide' format} The \code{BTm} function requires data in a `long' format, with one row per contest, provided either directly as in Section~\ref{sec:contest} or via indexing as in Section~\ref{sec:non-contest}. In studies where the same set of paired comparisons are made by several judges, as in a questionnaire for example, the data may be stored in a `wide' format, with one row per judge. As an example, consider the \code{cemspc} data from the \pkg{prefmod} package \citep{hatz:12}, which provides data from the CEMS study in a wide format. Each row corresponds to one student; the first 15 columns give the outcome of all pairwise comparisons between the 6~schools in the study and the last two columns correspond to two of the student-specific variables: \code{ENG} (indicating the student's knowledge of English) and \code{SEX} (indicating the student's gender). The following steps convert these data into a form suitable for analysis with \code{BTm}. First a new data frame is created from the student-specific variables and these variables are converted to factors: @ <>= library("prefmod") student <- cemspc[c("ENG", "SEX")] student$ENG <- factor(student$ENG, levels = 1:2, labels = c("good", "poor")) student$SEX <- factor(student$SEX, levels = 1:2, labels = c("female", "male")) @ %def This data frame is put into a list, which will eventually hold all the necessary data. Then a \code{student} factor is created for indexing the student data to produce contest-level data. This is put in a new data frame that will hold the contest-specific data. @ <>= cems <- list(student = student) student <- gl(303, 1, 303 * 15) #303 students, 15 comparisons contest <- data.frame(student = student) @ %def Next the outcome data is converted to a binomial response, adjusted for ties. The result is added to the \code{contest} data frame. @ <>= win <- cemspc[, 1:15] == 0 lose <- cemspc[, 1:15] == 2 draw <- cemspc[, 1:15] == 1 contest$win.adj <- c(win + draw/2) contest$lose.adj <- c(lose + draw/2) @ %def Then two factors are created identifying the first and second school in each comparison. The comparisons are in the order 1 vs.\ 2, 1 vs.\ 3, 2 vs.\ 3, 1 vs.\ 4, \ldots, so the factors can be created as follows: @ <>= lab <- c("London", "Paris", "Milano", "St. Gallen", "Barcelona", "Stockholm") contest$school1 <- factor(sequence(1:5), levels = 1:6, labels = lab) contest$school2 <- factor(rep(2:6, 1:5), levels = 1:6, labels = lab) @ %def Note that both factors have exactly the same levels, even though only five of the six players are represented in each case. In other words, the numeric factor levels refer to the same players in each case, so that the player is unambiguously identified. This ensures that player-specific parameters and player-specific covariates are correctly specified. Finally the \code{contest} data frame is added to the main list: @ <>= cems$contest <- contest @ %def This creates a single data object that can be passed to the \code{data} argument of \code{BTm}. Of course, such a list could be created on-the-fly as in \code{data = list(contest, student)}, which may be more convenient in practice. \subsection[Converting data from the format required by the earlier BradleyTerry package]{Converting data from the format required by the earlier \pkg{BradleyTerry} package} The \pkg{BradleyTerry} package described in \citet{firt:05} required contest/comparison results to be in a data frame with columns named \code{winner}, \code{loser} and \code{Freq}. The following example shows how \code{xtabs} and \code{countsToBinomial} can be used to convert such data for use with the \code{BTm} function in \pkg{BradleyTerry2}: \begin{Sinput} > library("BradleyTerry") ## the /old/ BradleyTerry package > ## load data frame with columns "winner", "loser", "Freq" > data("citations", package = "BradleyTerry") > ## convert to 2-way table of counts > citations <- xtabs(Freq ~ winner + loser, citations) > ## convert to a data frame of binomial observations > citations.sf <- countsToBinomial(citations) \end{Sinput} The \code{citations.sf} data frame can then be used with \code{BTm} as shown in Section~\ref{citations}. \section[A list of the functions provided in BradleyTerry2]{A list of the functions provided in \pkg{BradleyTerry2}} \label{sec:functions} The standard \R\ help files provide the definitive reference. Here we simply list the main user-level functions and their arguments, as a convenient overview: @ <>= ## cf. prompt options(width = 55) for (fn in getNamespaceExports("BradleyTerry2")) { name <- as.name(fn) args <- formals(fn) n <- length(args) arg.names <- arg.n <- names(args) arg.n[arg.n == "..."] <- "\\dots" is.missing.arg <- function(arg) typeof(arg) == "symbol" && deparse(arg) == "" Call <- paste(name, "(", sep = "") for (i in seq_len(n)) { Call <- paste(Call, arg.names[i], if (!is.missing.arg(args[[i]])) paste(" = ", paste(deparse(args[[i]]), collapse = "\n"), sep = ""), sep = "") if (i != n) Call <- paste(Call, ", ", sep = "") } Call <- paste(Call, ")", sep = "") cat(deparse(parse(text = Call)[[1]], width.cutoff = 50), fill = TRUE) } options(width = 60) @ %def \section{Some final remarks} \label{sec:finalremarks} \subsection[A note on the treatment of ties]{A note on the treatment of ties} The present version of \BT\ provides no sophisticated facilities for handling tied contests/comparisons; the well-known models of \cite{rao:kupp:67} and \cite{davi:70} are not implemented here. At present the \code{BTm} function requires a binary or binomial response variable, the third (`tied') category of response is not allowed. In several of the data examples (e.g., \code{?CEMS}, \code{?springall}, \code{?sound.fields}), ties are handled by the crude but simple device of adding half of a `win' to the tally for each player involved; in each of the examples where this has been done it is found that the result is very similar, after a simple re-scaling, to the more sophisticated analyses that have appeared in the literature. Note that this device when used with \code{BTm} typically gives rise to warnings produced by the back-end \code{glm} function, about non-integer `binomial' counts; such warnings are of no consequence and can be safely ignored. It is likely that a future version of \BT\ will have a more general method for handling ties. \subsection{A note on `contest-specific' random effects} The current version of \BT\ provides facilities for fitting models with random effects in `player-specific' predictor functions, as illustrated in Section~\ref{sec:covariates}. For more general, `contest-specific' random-effect structures, such as random `judge' effects in psychological studies \citep[e.g.,][]{bock:01}, \BT\ provides (through \code{BTm}) the necessary user interface but as yet no back-end calculation. It is hoped that this important generalization can be made successfully in a future version of \BT. \section*{Acknowledgments} This work was supported by the UK Engineering and Physical Sciences Research Council. \begin{thebibliography}{22} \newcommand{\enquote}[1]{``#1''} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \providecommand{\urlprefix}{URL } \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi:\discretionary{}{}{}#1}\else \providecommand{\doi}{doi:\discretionary{}{}{}\begingroup \urlstyle{rm}\Url}\fi \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{Agresti(2002)}]{agre:02} Agresti A (2002). \newblock \emph{Categorical Data Analysis}. \newblock 2nd edition. John Wiley \& Sons. \bibitem[{Bates \emph{et~al.}(2011)Bates, M\"achler, and Bolker}]{bate:11} Bates D, M\"achler M, Bolker B (2011). \newblock \emph{\pkg{lme4}: Linear Mixed-Effects Models Using \proglang{S}4 Classes}. \newblock \proglang{R}~package version~0.999375-42, \urlprefix\url{http://CRAN.R-project.org/package=lme4}. \bibitem[{B\"ockenholt(2001)}]{bock:01} B\"ockenholt U (2001). \newblock \enquote{Hierarchical Modeling of Paired Comparison Data.} \newblock \emph{Psychological Methods}, \textbf{6}(1), 49--66. \bibitem[{Bradley(1984)}]{brad:84} Bradley RA (1984). \newblock \enquote{Paired Comparisons: Some Basic Procedures and Examples.} \newblock In PR~Krishnaiah, PK~Sen (eds.), \emph{Nonparametric Methods}, volume~4 of \emph{Handbook of Statistics}, pp. 299 -- 326. Elsevier. \bibitem[{Bradley and Terry(1952)}]{brad:terr:52} Bradley RA, Terry ME (1952). \newblock \enquote{Rank Analysis of Incomplete Block Designs {I}: {T}he Method of Paired Comparisons.} \newblock \emph{Biometrika}, \textbf{39}, 324--45. \bibitem[{Breslow and Clayton(1993)}]{bres:93} Breslow NE, Clayton DG (1993). \newblock \enquote{Approximate Inference in Generalized Linear Mixed Models.} \newblock \emph{Journal of the American Statistical Association}, \textbf{88}(421), 9--25. \bibitem[{Davidson(1970)}]{davi:70} Davidson RR (1970). \newblock \enquote{On Extending the {B}radley-{T}erry Model to Accommodate Ties in Paired Comparison Experiments.} \newblock \emph{Journal of the American Statistical Association}, \textbf{65}, 317--328. \bibitem[{Dittrich \emph{et~al.}(2001)Dittrich, Hatzinger, and Katzenbeisser}]{ditt:01} Dittrich R, Hatzinger R, Katzenbeisser W (2001). \newblock \enquote{Corrigendum: {M}odelling the Effect of Subject-Specific Covariates in Paired Comparison Studies with an Application to University Rankings.} \newblock \emph{Applied Statistics}, \textbf{50}, 247--249. \bibitem[{Firth(1993)}]{firt:93} Firth D (1993). \newblock \enquote{Bias Reduction of Maximum Likelihood Estimates.} \newblock \emph{Biometrika}, \textbf{80}, 27--38. \bibitem[{Firth(2005)}]{firt:05} Firth D (2005). \newblock \enquote{Bradley-Terry Models in \proglang{R}.} \newblock \emph{Journal of Statistical Software}, \textbf{12}(1), 1--12. \newblock \urlprefix\url{http://www.jstatsoft.org/v12/i01/}. \bibitem[{Firth(2010)}]{firt:10} Firth D (2010). \newblock \emph{\pkg{qvcalc}: Quasi-Variances for Factor Effects in Statistical Models}. \newblock \proglang{R}~package version~0.8-7, \urlprefix\url{http://CRAN.R-project.org/package=qvcalc}. \bibitem[{Firth and {de Menezes}(2004)}]{firt:04} Firth D, {de Menezes} RX (2004). \newblock \enquote{Quasi-Variances.} \newblock \emph{Biometrika}, \textbf{91}, 65--80. \bibitem[{Hatzinger and Dittrich(2012)}]{hatz:12} Hatzinger R, Dittrich R (2012). \newblock \enquote{\pkg{prefmod}: An \proglang{R} Package for Modeling Preferences Based on Paired Comparisons, Rankings, or Ratings.} \newblock \emph{Journal of Statistical Software}, \textbf{48}(10), 1--31. \newblock \urlprefix\url{http://www.jstatsoft.org/v48/i10/}. \bibitem[{Heinze and Schemper(2002)}]{hein:sche:02} Heinze G, Schemper M (2002). \newblock \enquote{A Solution to the Problem of Separation in Logistic Regression.} \newblock \emph{Statistics in Medicine}, \textbf{21}, 2409--2419. \bibitem[{Ihaka and Gentleman(1996)}]{ihak:gent:96} Ihaka R, Gentleman R (1996). \newblock \enquote{\proglang{R}: A Language for Data Analysis and Graphics.} \newblock \emph{Journal of Computational and Graphical Statistics}, \textbf{5}(3), 299--314. \bibitem[{Kosmidis(2007)}]{kosm:07} Kosmidis I (2007). \newblock \emph{\pkg{brglm}: Bias Reduction in Binary-Response GLMs}. \newblock \proglang{R}~package version~0.5-6, \urlprefix\url{http://www.ucl.ac.uk/~ucakiko/software.html}. \bibitem[{Rao and Kupper(1967)}]{rao:kupp:67} Rao PV, Kupper LL (1967). \newblock \enquote{Ties in Paired-Comparison Experiments: {A} Generalization of the {B}radley-{T}erry Model.} \newblock \emph{Journal of the American Statistical Association}, \textbf{62}, 194--204. \bibitem[{{\proglang{R} Development Core Team}(2012)}]{R} {\proglang{R} Development Core Team} (2012). \newblock \emph{\proglang{R}: A Language and Environment for Statistical Computing}. \newblock \proglang{R} Foundation for Statistical Computing, Vienna, Austria. \newblock {ISBN} 3-900051-07-0, \urlprefix\url{http://www.R-project.org/}. \bibitem[{Sham and Curtis(1995)}]{sham:curt:95} Sham PC, Curtis D (1995). \newblock \enquote{An Extended Transmission/Disequilibrium Test ({TDT}) for Multi-Allele Marker Loci.} \newblock \emph{Annals of Human Genetics}, \textbf{59}(3), 323--336. \bibitem[{Stigler(1994)}]{stig:94} Stigler S (1994). \newblock \enquote{Citation Patterns in the Journals of Statistics and Probability.} \newblock \emph{Statistical Science}, \textbf{9}, 94--108. \bibitem[{Turner and Firth(2012)}]{turn:12} Turner H, Firth D (2012). \newblock \enquote{Bradley-Terry Models in \proglang{R}: The \pkg{BradleyTerry2} Package.} \newblock \emph{Journal of Statistical Software}, \textbf{48}(9), 1--21. \newblock \urlprefix\url{http://www.jstatsoft.org/v48/i09/}. \bibitem[{Whiting \emph{et~al.}({2006})Whiting, Stuart-Fox, O'Connor, Firth, Bennett, and Blomberg}]{whit:06} Whiting MJ, Stuart-Fox DM, O'Connor D, Firth D, Bennett NC, Blomberg SP ({2006}). \newblock \enquote{{Ultraviolet Signals Ultra-Aggression in a Lizard}.} \newblock \emph{Animal Behaviour}, \textbf{{72}}, 353--363. \end{thebibliography} \end{document} BradleyTerry2/vignettes/BradleyTerry.bib0000744000176200001440000001705013157723064020056 0ustar liggesusers@Manual{r, title = {\proglang{R}: A Language and Environment for Statistical Computing}, author = {{\proglang{R} Development Core Team}}, organization = {\proglang{R} Foundation for Statistical Computing}, address = {Vienna, Austria}, year = {2012}, note = {{ISBN} 3-900051-07-0}, url = {http://www.R-project.org/}, } @Article{bock:01, author = {U B\"ockenholt}, year = {2001}, title = {Hierarchical Modeling of Paired Comparison Data}, journal = {Psychological Methods}, volume = {6}, number = {1}, pages = {49-66}, } @Book{agre:02, year = {2002}, title = {Categorical Data Analysis}, edition = {2nd}, publisher = {John Wiley \& Sons}, author = {A. Agresti}, } @InCollection{brad:84, title = {Paired Comparisons: Some Basic Procedures and Examples}, editor = {P. R. Krishnaiah and P. K. Sen}, booktitle = {Nonparametric Methods}, publisher = {Elsevier}, year = {1984}, volume = {4}, pages = {299 - 326}, series = {Handbook of Statistics}, author = {R. A. Bradley}, } @Article{brad:terr:52, journal = {Biometrika}, year = {1952}, title = {Rank Analysis of Incomplete Block Designs {I}: {T}he Method of Paired Comparisons}, pages = {324--45}, author = {R. A. Bradley and M. E. Terry}, volume = {39}, } @Article{sham:curt:95, journal = {Annals of Human Genetics}, year = {1995}, title = {An Extended Transmission/Disequilibrium Test ({TDT}) for Multi-Allele Marker Loci}, number = {3}, pages = {323--336}, author = {P. C. Sham and D. Curtis}, volume = {59}, } @Article{ihak:gent:96, journal = {Journal of Computational and Graphical Statistics}, year = {1996}, title = {\proglang{R}: A Language for Data Analysis and Graphics}, number = {3}, pages = {299--314}, author = {Ross Ihaka and Robert Gentleman}, volume = {5}, } @article{spring:73, Journal = {Applied Statistics}, Year = {1973}, Title = {Response Surface Fitting Using a Generalization of the {B}radley-{T}erry Paired Comparisons Model}, Pages = {59--68}, Author = {Springall, A}, Volume = {22}} @article{crit:flig:91, Journal = {Psychometrika}, Year = {1991}, Title = {Paired Comparison, Triple Comparison, and Ranking Experiments as Generalized Linear Models, and Their Implementation in {GLIM}}, Pages = {517--533}, Author = {Critchlow, D E and Fligner, M A}, Volume = {56}} @Article{firt:93, journal = {Biometrika}, year = {1993}, title = {Bias Reduction of Maximum Likelihood Estimates}, pages = {27--38}, author = {D Firth}, volume = {80}, } @Article{hein:sche:02, journal = {Statistics in Medicine}, year = {2002}, title = {A Solution to the Problem of Separation in Logistic Regression}, author = {G Heinze and M Schemper}, pages = {2409--2419}, volume = {21}, } @Article{stig:94, journal = {Statistical Science}, year = {1994}, title = {Citation Patterns in the Journals of Statistics and Probability}, pages = {94--108}, author = {S Stigler}, volume = {9}, } @Article{bres:93, journal = {Journal of the American Statistical Association}, year = {1993}, title = {Approximate Inference in Generalized Linear Mixed Models}, pages = {9--25}, author = {N E Breslow and D G Clayton}, volume = {88}, number = {421}, } @article{spri:73, author = {Springall, A}, title = {Response Surface Fitting Using a Generalization of the {B}radley-{T}erry Paired Comparison Model}, year = {1973}, journal = {Applied Statistics}, volume = {22}, pages = {59--68} } @article{ditt:98, author = {Dittrich, R and Hatzinger, R and Katzenbeisser, W}, title = {Modelling the Effect of Subject-specific Covariates in Paired Comparison Studies with an Application to University Rankings}, year = {1998}, journal = {Applied Statistics}, volume = {47}, pages = {511--525}, keywords = {Bradley-Terry model; Log-linear model} } @Article{ditt:01, author = {R Dittrich and R Hatzinger and W Katzenbeisser}, title = {Corrigendum: {M}odelling the Effect of Subject-Specific Covariates in Paired Comparison Studies with an Application to University Rankings}, year = {2001}, journal = {Applied Statistics}, volume = {50}, pages = {247--249}, } @Article{davi:70, author = {R. R. Davidson}, title = {On Extending the {B}radley-{T}erry Model to Accommodate Ties in Paired Comparison Experiments}, year = {1970}, journal = {Journal of the American Statistical Association}, volume = {65}, pages = {317--328}, } @Article{rao:kupp:67, author = {P. V. Rao and L. L. Kupper}, title = {Ties in Paired-Comparison Experiments: {A} Generalization of the {B}radley-{T}erry Model}, year = {1967}, journal = {Journal of the American Statistical Association}, volume = {62}, pages = {194--204}, } @Article{whit:06, author = {Martin J. Whiting and Devi M. Stuart-Fox and David O'Connor and David Firth and Nigel C. Bennett and Simon P. Blomberg}, title = {{Ultraviolet Signals Ultra-Aggression in a Lizard}}, journal = {Animal Behaviour}, year = {{2006}}, volume = {{72}}, pages = {353--363}, } @article{stua:06, Author = {Stuart-Fox, D M and Firth, D and Moussalli, A and Whiting, M J}, Title = {Multiple Signals in Chameleon Contests: Designing and Analysing Animal Contests as a Tournament}, Journal = {Animal Behaviour}, Year = {{2006}}, Volume = {{71}}, Pages = {1263--1271}, DOI = {10.1016/j.anbehav.2005.07.028} } @article{kous:84, author = {Kousgaard, N}, title = {Analysis of a Sound Field Experiment by a Model for Paired Comparisons with Explanatory Variables}, year = {1984}, journal = {Scandinavian Journal of Statistics}, volume = {11}, pages = {51--57}, keywords = {Bradley-Terry model} } @Article{firt:04, author = {D. Firth and R. X. {de Menezes}}, title = {Quasi-Variances}, journal = {Biometrika}, volume = {91}, year = {2004}, pages = {65--80}, } @Article{firt:05, author = {David Firth}, title = {Bradley-Terry Models in \proglang{R}}, journal = {Journal of Statistical Software}, year = {2005}, volume = {12}, number = {1}, pages = {1--12}, url = {http://www.jstatsoft.org/v12/i01/} } @Manual{kosm:07, title = {\pkg{brglm}: Bias Reduction in Binary-Response GLMs}, author = {Ioannis Kosmidis}, year = {2007}, note = {\proglang{R}~package version~0.5-6}, url = {http://www.ucl.ac.uk/~ucakiko/software.html}, } @Manual{bate:11, title = {\pkg{lme4}: Linear Mixed-Effects Models Using \proglang{S}4 Classes}, author = {Douglas Bates and Martin M\"achler and Ben Bolker}, year = {2011}, note = {\proglang{R}~package version~0.999375-42}, url = {http://CRAN.R-project.org/package=lme4}, } @Manual{firt:10, title = {\pkg{qvcalc}: Quasi-Variances for Factor Effects in Statistical Models}, author = {David Firth}, year = {2010}, note = {\proglang{R}~package version~0.8-7}, url = {http://CRAN.R-project.org/package=qvcalc}, } @Article{hatz:12, author = {Reinhold Hatzinger and Regina Dittrich}, title = {\pkg{prefmod}: An \proglang{R} Package for Modeling Preferences Based on Paired Comparisons, Rankings, or Ratings}, journal = {Journal of Statistical Software}, year = {2012}, volume = {48}, number = {10}, pages = {1--31}, url = {http://www.jstatsoft.org/v48/i10/} } @Article{turn:12, author = {Heather Turner and David Firth}, title = {Bradley-Terry Models in \proglang{R}: The \pkg{BradleyTerry2} Package}, journal = {Journal of Statistical Software}, year = {2012}, volume = {48}, number = {9}, pages = {1--21}, url = {http://www.jstatsoft.org/v48/i09/} }BradleyTerry2/README.md0000744000176200001440000000216013157761356014241 0ustar liggesusers BradleyTerry2 ============= [![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/BradleyTerry2)](https://cran.r-project.org/package=BradleyTerry2) [![Travis-CI Build Status](https://travis-ci.org/hturner/BradleyTerry2.svg?branch=master)](https://travis-ci.org/hturner/BradleyTerry2) [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/hturner/BradleyTerry2?branch=master&svg=true)](https://ci.appveyor.com/project/hturner/BradleyTerry2) Specify and fit the Bradley-Terry model, including structured versions in which the parameters are related to explanatory variables through a linear predictor and versions with contest-specific effects, such as a home advantage. Installation ------------ You can install **BradleyTerry2** from github with: ``` r # install.packages("devtools") devtools::install_github("hturner/BradleyTerry2") ``` Code of conduct --------------- Please note that this project is released with a [Contributor Code of Conduct](CONDUCT.md). By participating in this project you agree to abide by its terms. BradleyTerry2/MD50000644000176200001440000001170013160432062013250 0ustar liggesusers36f46da602af24d36b19222a09261cd6 *DESCRIPTION b4a60f3ba1f9224eddfa1ddec28cf5eb *NAMESPACE a7cf8ec955c18d0ecc29a75f9818d374 *NEWS.md 720caf0bde9fd1064026e48a55c580b7 *R/BTabilities.R acefe1e19fe2d4a1950f26117152fc93 *R/BTm.R 841d118be1d10cb6d25f660761496b44 *R/BTm.setup.R 2d7f9ab898266a2aad38fc2157902b09 *R/CEMS.R dfc69bdef7a2e6d089bbfc2de1d1ad72 *R/Diff.R 070b197dc1b4c515ee22b614d15b98f7 *R/GenDavidson.R 02c0e53cac09c2faaa218c44faaf808f *R/add1.BTm.R fa4c5b6999b6a6ae94a2eb2dd624f187 *R/anova.BTm.R 292a69b59c992946815641430b70ba93 *R/anova.BTmlist.R ff3dc98281b3017bf721a018da9a90ce *R/baseball.R 8961705a1bf27f5c125f7ce3b8f829fc *R/chameleons.R 030b0ae34e782887c2fdaa60678af4d4 *R/citations.R 5f8079c7380b21012dc7cb18bbb9b2e6 *R/countsToBinomial.R cb77a329168157872f1083cb8fe7bef1 *R/drop1.BTm.R 92ad0230331f66b9b3b3554eddfdc780 *R/flatlizards.R 7a5e66bf9864169a9b4f30eb4740c55c *R/football.R 4a6ead5401d25a3e6a2dfdf0536748e1 *R/formula.BTm.R b1772ff8fb75ec2cb6b9645077dca51b *R/glmmPQL.R 3fa9494ca70fc2767ec48e34d9c5be11 *R/glmmPQL.control.R 9be3697e0db9dc0b7d4fbf1590fe4e45 *R/glmmPQL.fit.R 2f98d51cc1d1037c4fa34e57c4b57bfe *R/icehockey.R ad1d3edac7e8613101ee16ad0493d223 *R/missToZero.R f2ffb551a8d3250eb7e83bd94ad9a0d1 *R/model.matrix.BTm.R af3b0525d78eddd70e5cfdca89b5365f *R/plotProportions.R fbaa955c63502af2acb6eb0844b72bd5 *R/predict.BTglmmPQL.R 019675148b6e829f26c9fbcbbf2c6f3f *R/predict.BTm.R 93e6029dccb86b3bebee46f628cf9d58 *R/print.BTglmmPQL.R 1b35359cd5a9cee5c8850a00793c5cf2 *R/print.BTm.R 88464469f88eb7fb76a1e6af47b8d4c2 *R/print.summary.glmmPQL.R 44b35381db1bebbecfd3acbe9ec919a0 *R/qvcalc.BTabilities.R 9f32c8a38dcb0c958ee1c6c3f66c2b37 *R/residuals.BTm.R 55edfaa680490009b1582200d3656842 *R/seeds.R d8fd164845fe69f2ccb71a848112e11d *R/sound.fields.R 7c33db37fe53efdd1c6906c57c23d7d3 *R/springall.R 093319abf6bad55fa3ff447908221b5a *R/summary.BTglmmPQL.R 694c54f87587c2d001e7aac9f76bc069 *R/vcov.BTglmmPQL.R d8af3d5741b0ae8269f5badf4dfaff83 *README.md 5d17104cf18c37cb78fb3d857016f1fe *build/vignette.rds 1fc50a457f1d470b3240c6ad95bf6395 *data/CEMS.R ee0852f01caba3acc47ed1a8267062e8 *data/baseball.RData 574871743191288d58e87fdfc59fd2ae *data/chameleons.R af6fdfda88e19552edeeca8dbde50e0a *data/citations.R a8402c784331c5e6a14ded61b6d615e3 *data/flatlizards.rda 8aba0268f583bf5e16c86f094e7747e4 *data/football.RData 2984f5baded99506531e620f3608707f *data/icehockey.R 014c49b99c7580d80c4d6166b79b6c70 *data/seeds.R 7a61b76e0536fe9783aa057f7500df57 *data/sound.fields.R 266b9521161753354473b71fcade4561 *data/springall.R 804ad272cf67ae373aaedc303cdbfd4f *inst/CITATION b63448ce61ee61668d947c716ba757ca *inst/doc/BradleyTerry.R 065021176a325aa27e9491e68538c913 *inst/doc/BradleyTerry.Rnw 88742a5f50a5cf511d5b71694d506811 *inst/doc/BradleyTerry.pdf 87ea788b9784b38e7103b167a4195b30 *man/BTabilities.Rd 60fd5b43ec654a565d971a7c7ae8adb7 *man/BTm.Rd 6b369e235c46252de75d98442cf96ff7 *man/CEMS.Rd 6aaf029a48e0cd4efac481afa46b1348 *man/GenDavidson.Rd a6415f8e83c772854109710ef7cfce24 *man/add1.BTm.Rd 5af0710a5615975e8c2b73b1583fb32c *man/anova.BTm.Rd 0b364da66e82cce953250099cbd1dadd *man/baseball.Rd df285e1a641418a9489f739d394a0eec *man/chameleons.Rd f228b8d7ef3ee5a1e399822a50c108d7 *man/citations.Rd b605365c6595a030d90ca10684c073a9 *man/countsToBinomial.Rd c18273e252428ed2cd567ace14d673f9 *man/flatlizards.Rd 52043fc0c44693b6cbf58301cd516cf2 *man/football.Rd 69ef426a9342cbe9481d5736470a03cc *man/glmmPQL.Rd 874fca5df890a4b3cb532cabf41007a0 *man/glmmPQL.control.Rd fb1862821aa36196fb99d169b664739c *man/icehockey.Rd 71f9d36d72c095e2bb3d32339206a6a7 *man/plotProportions.Rd f4211875abfc847ea6a31e248c2ea847 *man/predict.BTglmmPQL.Rd 9380fb00d884949587d5b831d07b8dfe *man/predict.BTm.Rd f3cd7175c696d10dac7a5161b3c26e7c *man/qvcalc.BTabilities.Rd be513ac5a3744266c73c7d5df116bf24 *man/reexports.Rd 3ffb467b83fd35063f3e0882a52952e7 *man/residuals.BTm.Rd 7a3869e0fd4b4dfadb2abb167c8e758e *man/seeds.Rd 658fa7af7ed9323db1412452fd9b5c10 *man/sound.fields.Rd 45608b05f3c1a9e7cbde661e92b70187 *man/springall.Rd 6368e1d235aebb16f5280c060d296f4c *tests/BTabilities.R 9141a0e555a267ee4a9b731943657066 *tests/BTabilities.Rout.save 86476628089a34f0ed51c68144d663df *tests/add1.R 92cae3349d02ab282b5b5b60b91d9acc *tests/add1.Rout.save 4cc0529e96d6c80aea02b3b4aa6237ee *tests/baseball.R 170e41c094e402f3114e34806936c071 *tests/baseball.Rout.save 751c24cf5cc1e2d49f57abed4b8950c3 *tests/countsToBinomial.R ca934c9a8d4e690c9c8867a331a7a455 *tests/countsToBinomial.Rout.save 332b541bd540d7275b808a23a938ca9b *tests/flatlizards.R e127b99e8b3fd36c0f30202b05b33cf6 *tests/flatlizards.Rout.save e8fe5bdb325fad20220be8d64a4a004d *tests/nested.R 76282d600b407791847314c4576d5129 *tests/nested.Rout.save 6b7d44dfdc4d6c0dca802eb4bf83618e *tests/predict.R 504de2a6dfdfdfcdc04dc1998bce3213 *tests/predict.Rout.save 065021176a325aa27e9491e68538c913 *vignettes/BradleyTerry.Rnw df33930c7d7ef359d3a320cc0f44c132 *vignettes/BradleyTerry.bib 7abfe1eb2816210b8d43fcc351468f07 *vignettes/baseball-qvplot.pdf 44843a3e7711c360bc0ad21c743b7674 *vignettes/residuals.pdf BradleyTerry2/build/0000755000176200001440000000000013160423701014040 5ustar liggesusersBradleyTerry2/build/vignette.rds0000644000176200001440000000045613160423701016404 0ustar liggesusers}RN0tZ 8@U/\PWކU]؉p)AHXZk홝}د Xʲ= 2.10) Imports: brglm, gtools, lme4 (>= 1.0), qvcalc, stats Suggests: prefmod Enhances: gnm License: GPL (>= 2) LazyData: yes Encoding: UTF-8 RoxygenNote: 6.0.1 NeedsCompilation: no Packaged: 2017-09-20 08:33:05 UTC; hturner Author: Heather Turner [aut, cre], David Firth [aut] Maintainer: Heather Turner Repository: CRAN Date/Publication: 2017-09-20 09:26:09 UTC BradleyTerry2/man/0000755000176200001440000000000013157761356013535 5ustar liggesusersBradleyTerry2/man/icehockey.Rd0000744000176200001440000001055613152515662015767 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/icehockey.R \docType{data} \name{icehockey} \alias{icehockey} \title{College Hockey Men's Division I 2009-10 results} \format{A data frame with 1083 observations on the following 6 variables. \describe{ \item{date}{a numeric vector} \item{visitor}{a factor with 58 levels \code{Alaska Anchorage} ... \code{Yale}} \item{v_goals}{a numeric vector} \item{opponent}{a factor with 58 levels \code{Alaska Anchorage} ... \code{Yale}} \item{o_goals}{a numeric vector} \item{conference}{a factor with levels \code{AH}, \code{CC}, \code{CH}, \code{EC}, \code{HE}, \code{NC}, \code{WC}} \item{result}{a numeric vector: 1 if visitor won, 0.5 for a draw and 0 if visitor lost} \item{home.ice}{a logical vector: 1 if opponent on home ice, 0 if game on neutral ground} }} \source{ http://www.collegehockeystats.net/0910/schedules/men } \usage{ icehockey } \description{ Game results from American College Hockey Men's Division I composite schedule 2009-2010. } \details{ The Division I ice hockey teams are arranged in six conferences: Atlantic Hockey, Central Collegiate Hockey Association, College Hockey America, ECAC Hockey, Hockey East and the Western Collegiate Hockey Association, all part of the National Collegiate Athletic Association. The composite schedule includes within conference games and between conference games. The data set here contains only games from the regular season, the results of which determine the teams that play in the NCAA national tournament. There are six automatic bids that go to the conference tournament champions, the remaining 10 teams are selected based upon ranking under the NCAA's system of pairwise comparisons (http://www.collegehockeynews.com/info/?d=pwcrpi). Some have argued that Bradley-Terry rankings would be fairer (http://www.collegehockeynews.com/info/?d=krach). } \examples{ ### Fit the standard Bradley-Terry model standardBT <- BTm(outcome = result, player1 = visitor, player2 = opponent, id = "team", data = icehockey) ## Bradley-Terry abilities abilities <- exp(BTabilities(standardBT)[,1]) ## Compute round-robin winning probability and KRACH ratings ## (scaled abilities such that KRACH = 100 for a team with ## round-robin winning probability of 0.5) rankings <- function(abilities){ probwin <- abilities/outer(abilities, abilities, "+") diag(probwin) <- 0 nteams <- ncol(probwin) RRWP <- rowSums(probwin)/(nteams - 1) low <- quantile(abilities, 0.45) high <- quantile(abilities, 0.55) middling <- uniroot(function(x) {sum(x/(x+abilities)) - 0.5*nteams}, lower = low, upper = high)$root KRACH <- abilities/middling*100 cbind(KRACH, RRWP) } ranks <- rankings(abilities) ## matches those produced by Joe Schlobotnik's Build Your Own Rankings head(signif(ranks, 4)[order(ranks[,1], decreasing = TRUE),]) ## At one point the NCAA rankings gave more credit for wins on ## neutral/opponent's ground. Home ice effects are easily ## incorporated into the Bradley-Terry model, comparing teams ## on a "level playing field" levelBT <- BTm(result, data.frame(team = visitor, home.ice = 0), data.frame(team = opponent, home.ice = home.ice), ~ team + home.ice, id = "team", data = icehockey) abilities <- exp(BTabilities(levelBT)[,1]) ranks2 <- rankings(abilities) ## Look at movement between the two rankings change <- factor(rank(ranks2[,1]) - rank(ranks[,1])) barplot(xtabs(~change), xlab = "Change in Rank", ylab = "No. Teams") ## Take out regional winners and look at top 10 regional <- c("RIT", "Alabama-Huntsville", "Michigan", "Cornell", "Boston College", "North Dakota") ranks <- ranks[!rownames(ranks) \%in\% regional] ranks2 <- ranks2[!rownames(ranks2) \%in\% regional] ## compare the 10 at-large selections under both rankings ## with those selected under NCAA rankings cbind(names(sort(ranks, decr = TRUE)[1:10]), names(sort(ranks2, decr = TRUE)[1:10]), c("Miami", "Denver", "Wisconsin", "St. Cloud State", "Bemidji State", "Yale", "Northern Michigan", "New Hampshire", "Alsaka", "Vermont")) } \references{ Schlobotnik, J. Build your own rankings. http://slack.net/~whelan/tbrw/2010/rankings.diy.shtml http://www.collegehockeynews.com Selections for 2010 NCAA tournament. http://sports.espn.go.com/ncaa/news/story?id=5012918 } \keyword{datasets} BradleyTerry2/man/GenDavidson.Rd0000744000176200001440000002134713157761356016235 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/GenDavidson.R \name{GenDavidson} \alias{GenDavidson} \title{Specify a Generalised Davidson Term in a gnm Model Formula} \usage{ GenDavidson(win, tie, loss, player1, player2, home.adv = NULL, tie.max = ~1, tie.mode = NULL, tie.scale = NULL, at.home1 = NULL, at.home2 = NULL) } \arguments{ \item{win}{a logical vector: \code{TRUE} if player1 wins, \code{FALSE} otherwise.} \item{tie}{a logical vector: \code{TRUE} if the outcome is a tie, \code{FALSE} otherwise.} \item{loss}{a logical vector: \code{TRUE} if player1 loses, \code{FALSE} otherwise.} \item{player1}{an ID factor specifying the first player in each contest, with the same set of levels as \code{player2}.} \item{player2}{an ID factor specifying the second player in each contest, with the same set of levels as \code{player2}.} \item{home.adv}{a formula for the parameter corresponding to the home advantage effect. If \code{NULL}, no home advantage effect is estimated.} \item{tie.max}{a formula for the parameter corresponding to the maximum tie probability.} \item{tie.mode}{a formula for the parameter corresponding to the location of maximum tie probability, in terms of the probability that \code{player1} wins, given the outcome is not a draw.} \item{tie.scale}{a formula for the parameter corresponding to the scale of dependence of the tie probability on the probability that \code{player1} wins, given the outcome is not a draw.} \item{at.home1}{a logical vector: \code{TRUE} if \code{player1} is at home, \code{FALSE} otherwise.} \item{at.home2}{a logical vector: \code{TRUE} if \code{player2} is at home, \code{FALSE} otherwise.} } \value{ A list with the anticipated components of a "nonlin" function: \item{ predictors }{ the formulae for the different parameters and the ID factors for player 1 and player 2. } \item{ variables }{ the outcome variables and the \dQuote{at home} variables, if specified. } \item{ common }{ an index to specify that common effects are to be estimated for the players. } \item{ term }{ a function to create a deparsed mathematical expression of the term, given labels for the predictors.} \item{ start }{ a function to generate starting values for the parameters.} } \description{ GenDavidson is a function of class \code{"nonlin"} to specify a generalised Davidson term in the formula argument to \code{\link[gnm:gnm]{gnm::gnm()}}, providing a model for paired comparison data where ties are a possible outcome. } \details{ \code{GenDavidson} specifies a generalisation of the Davidson model (1970) for paired comparisons where a tie is a possible outcome. It is designed for modelling trinomial counts corresponding to the win/draw/loss outcome for each contest, which are assumed Poisson conditional on the total count for each match. Since this total must be one, the expected counts are equivalently the probabilities for each possible outcome, which are modelled on the log scale: \deqn{\log(p(i \textrm{beats} j)_k) = \theta_{ijk} + \log(\mu\alpha_i}{log(p(i beats j)_k) = theta_{ijk} + log(mu * alpha_i)} \deqn{\log(p(draw)_k) = \theta_{ijk} + \delta + c + }{ log(p(draw)_k) = theta_{ijk} + log(delta) + c + sigma * (pi * log(mu * alpha_i) + (1 - pi) * log(alpha_j)) + (1 - sigma) * log(mu * alpha_i + alpha_j) }\deqn{ \sigma(\pi\log(\mu\alpha_i) - (1 - \pi)log(\alpha_j)) + }{ log(p(draw)_k) = theta_{ijk} + log(delta) + c + sigma * (pi * log(mu * alpha_i) + (1 - pi) * log(alpha_j)) + (1 - sigma) * log(mu * alpha_i + alpha_j) }\deqn{ (1 - \sigma)(\log(\mu\alpha_i + \alpha_j))}{ log(p(draw)_k) = theta_{ijk} + log(delta) + c + sigma * (pi * log(mu * alpha_i) + (1 - pi) * log(alpha_j)) + (1 - sigma) * log(mu * alpha_i + alpha_j) } \deqn{\log(p(j \textrm{beats} i)_k) = \theta_{ijk} + }{log(p(j beats i)_k) = theta_{ijk} + log(alpha_j)}\deqn{ log(\alpha_j)}{log(p(j beats i)_k) = theta_{ijk} + log(alpha_j)} Here \eqn{\theta_{ijk}}{theta_{ijk}} is a structural parameter to fix the trinomial totals; \eqn{\mu}{mu} is the home advantage parameter; \eqn{\alpha_i}{alpha_i} and \eqn{\alpha_j}{alpha_j} are the abilities of players \eqn{i} and \eqn{j} respectively; \eqn{c}{c} is a function of the parameters such that \eqn{\textrm{expit}(\delta)}{plogis(delta)} is the maximum probability of a tie, \eqn{\sigma}{sigma} scales the dependence of the probability of a tie on the relative abilities and \eqn{\pi}{pi} allows for asymmetry in this dependence. For parameters that must be positive (\eqn{\alpha_i, \sigma, \mu}{alpha, sigma, mu}), the log is estimated, while for parameters that must be between zero and one (\eqn{\delta, \pi}), the logit is estimated, as illustrated in the example. } \examples{ ### example requires gnm if (require(gnm)) { ### convert to trinomial counts football.tri <- expandCategorical(football, "result", idvar = "match") head(football.tri) ### add variable to indicate whether team playing at home football.tri$at.home <- !logical(nrow(football.tri)) ### fit shifted & scaled Davidson model ### - subset to first and last season for illustration shifScalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, tie.mode = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri, subset = season \%in\% c("2008-9", "2012-13")) ### look at coefs coef <- coef(shifScalDav) ## home advantage exp(coef["home.adv"]) ## max p(tie) plogis(coef["tie.max"]) ## mode p(tie) plogis(coef["tie.mode"]) ## scale relative to Davidson of dependence of p(tie) on p(win|not a draw) exp(coef["tie.scale"]) ### check model fit alpha <- names(coef[-(1:4)]) plotProportions(result == 1, result == 0, result == -1, home:season, away:season, abilities = coef[alpha], home.adv = coef["home.adv"], tie.max = coef["tie.max"], tie.scale = coef["tie.scale"], tie.mode = coef["tie.mode"], at.home1 = at.home, at.home2 = !at.home, data = football.tri, subset = count == 1) } ### analyse all five seasons ### - takes a little while to run, particularly likelihood ratio tests \dontrun{ ### fit Davidson model Dav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ### fit scaled Davidson model scalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ### fit shifted & scaled Davidson model shifScalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, tie.mode = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ### compare models anova(Dav, scalDav, shifScalDav, test = "Chisq") ### diagnostic plots main <- c("Davidson", "Scaled Davidson", "Shifted & Scaled Davidson") mod <- list(Dav, scalDav, shifScalDav) names(mod) <- main ## use football.tri data so that at.home can be found, ## but restrict to actual match results par(mfrow = c(2,2)) for (i in 1:3) { coef <- parameters(mod[[i]]) plotProportions(result == 1, result == 0, result == -1, home:season, away:season, abilities = coef[alpha], home.adv = coef["home.adv"], tie.max = coef["tie.max"], tie.scale = coef["tie.scale"], tie.mode = coef["tie.mode"], at.home1 = at.home, at.home2 = !at.home, main = main[i], data = football.tri, subset = count == 1) } } } \references{ Davidson, R. R. (1970). On extending the Bradley-Terry model to accommodate ties in paired comparison experiments. \emph{Journal of the American Statistical Association}, \strong{65}, 317--328. } \seealso{ \code{\link[=football]{football()}}, \code{\link[=plotProportions]{plotProportions()}} } \author{ Heather Turner } \keyword{models} \keyword{nonlinear} BradleyTerry2/man/BTm.Rd0000744000176200001440000002340613152515662014504 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/BTm.R \name{BTm} \alias{BTm} \title{Bradley-Terry Model and Extensions} \usage{ BTm(outcome = 1, player1, player2, formula = NULL, id = "..", separate.ability = NULL, refcat = NULL, family = "binomial", data = NULL, weights = NULL, subset = NULL, na.action = NULL, start = NULL, etastart = NULL, mustart = NULL, offset = NULL, br = FALSE, model = TRUE, x = FALSE, contrasts = NULL, ...) } \arguments{ \item{outcome}{the binomial response: either a numeric vector, a factor in which the first level denotes failure and all others success, or a two-column matrix with the columns giving the numbers of successes and failures.} \item{player1}{either an ID factor specifying the first player in each contest, or a data.frame containing such a factor and possibly other contest-level variables that are specific to the first player. If given in a data.frame, the ID factor must have the name given in the \code{id} argument. If a factor is specified it will be used to create such a data.frame.} \item{player2}{an object corresponding to that given in \code{player1} for the second player in each contest, with identical structure -- in particular factors must have identical levels.} \item{formula}{a formula with no left-hand-side, specifying the model for player ability. See details for more information.} \item{id}{the name of the ID factor.} \item{separate.ability}{(if \code{formula} does not include the ID factor as a separate term) a character vector giving the names of players whose abilities are to be modelled individually rather than using the specification given by \code{formula}.} \item{refcat}{(if \code{formula} includes the ID factor as a separate term) a character specifying which player to use as a reference, with the first level of the ID factor as the default. Overrides any other contrast specification for the ID factor.} \item{family}{a description of the error distribution and link function to be used in the model. Only the binomial family is implemented, with either\code{"logit"}, \code{"probit"} , or \code{"cauchit"} link. (See \code{\link[stats:family]{stats::family()}} for details of family functions.)} \item{data}{an optional object providing data required by the model. This may be a single data frame of contest-level data or a list of data frames. Names of data frames are ignored unless they refer to data frames specified by \code{player1} and \code{player2}. The rows of data frames that do not contain contest-level data must correspond to the levels of a factor used for indexing, i.e. row 1 corresponds to level 1, etc. Note any rownames are ignored. Objects are searched for first in the \code{data} object if provided, then in the environment of \code{formula}. If \code{data} is a list, the data frames are searched in the order given.} \item{weights}{an optional numeric vector of \sQuote{prior weights}.} \item{subset}{an optional logical or numeric vector specifying a subset of observations to be used in the fitting process.} \item{na.action}{a function which indicates what should happen when any contest-level variables contain \code{NA}s. The default is the \code{na.action} setting of \code{options}. See details for the handling of missing values in other variables.} \item{start}{a vector of starting values for the fixed effects.} \item{etastart}{a vector of starting values for the linear predictor.} \item{mustart}{a vector of starting values for the vector of means.} \item{offset}{an optional offset term in the model. A vector of length equal to the number of contests.} \item{br}{logical. If \code{TRUE} fitting will be by penalized maximum likelihood as in Firth (1992, 1993), using \code{\link[brglm:brglm]{brglm::brglm()}}, rather than maximum likelihood using \code{\link[=glm]{glm()}}, when abilities are modelled exactly or when the abilities are modelled by covariates and the variance of the random effects is estimated as zero.} \item{model}{logical: whether or not to return the model frame.} \item{x}{logical: whether or not to return the design matrix for the fixed effects.} \item{contrasts}{an optional list specifying contrasts for the factors in \code{formula}. See the \code{contrasts.arg} of \code{\link[=model.matrix]{model.matrix()}}.} \item{\dots}{other arguments for fitting function (currently either \code{\link[=glm]{glm()}}, \code{\link[brglm:brglm]{brglm::brglm()}}, or \code{\link[=glmmPQL]{glmmPQL()}})} } \value{ An object of class \code{c("BTm", "x")}, where \code{"x"} is the class of object returned by the model fitting function (e.g. \code{glm}). Components are as for objects of class \code{"x"}, with additionally \item{id}{the \code{id} argument.} \item{separate.ability}{the \code{separate.ability} argument.} \item{refcat}{the \code{refcat} argument.} \item{player1}{a data frame for the first player containing the ID factor and any player-specific contest-level variables.} \item{player2}{a data frame corresponding to that for \code{player1}.} \item{assign}{a numeric vector indicating which coefficients correspond to which terms in the model.} \item{term.labels}{labels for the model terms.} \item{random}{for models with random effects, the design matrix for the random effects. } } \description{ Fits Bradley-Terry models for pair comparison data, including models with structured scores, order effect and missing covariate data. Fits by either maximum likelihood or maximum penalized likelihood (with Jeffreys-prior penalty) when abilities are modelled exactly, or by penalized quasi-likelihood when abilities are modelled by covariates. } \details{ In each comparison to be modelled there is a 'first player' and a 'second player' and it is assumed that one player wins while the other loses (no allowance is made for tied comparisons). The \code{\link[=countsToBinomial]{countsToBinomial()}} function is provided to convert a contingency table of wins into a data frame of wins and losses for each pair of players. The \code{formula} argument specifies the model for player ability and applies to both the first player and the second player in each contest. If \code{NULL} a separate ability is estimated for each player, equivalent to setting \code{formula = reformulate(id)}. Contest-level variables can be specified in the formula in the usual manner, see \code{\link[=formula]{formula()}}. Player covariates should be included as variables indexed by \code{id}, see examples. Thus player covariates must be ordered according to the levels of the ID factor. If \code{formula} includes player covariates and there are players with missing values over these covariates, then a separate ability will be estimated for those players. When player abilities are modelled by covariates, then random player effects should be added to the model. These should be specified in the formula using the vertical bar notation of \code{\link[lme4:lmer]{lme4::lmer()}}, see examples. When specified, it is assumed that random player effects arise from a \eqn{N(0, }{N(0, sigma^2)}\eqn{ \sigma^2)}{N(0, sigma^2)} distribution and model parameters, including \eqn{\sigma}{sigma}, are estimated using PQL (Breslow and Clayton, 1993) as implemented in the \code{\link[=glmmPQL]{glmmPQL()}} function. } \examples{ ######################################################## ## Statistics journal citation data from Stigler (1994) ## -- see also Agresti (2002, p448) ######################################################## ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## First fit the "standard" Bradley-Terry model citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) ## Now the same thing with a different "reference" journal citeModel2 <- update(citeModel, refcat = "JASA") BTabilities(citeModel2) ################################################################## ## Now an example with an order effect -- see Agresti (2002) p438 ################################################################## data(baseball) # start with baseball data as provided by package ## Simple Bradley-Terry model, ignoring home advantage: baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") ## Now incorporate the "home advantage" effect baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) ## Compare the fit of these two models: anova(baseballModel1, baseballModel2) ## ## For a more elaborate example with both player-level and contest-level ## predictor variables, see help(chameleons). ## } \references{ Agresti, A. (2002) \emph{Categorical Data Analysis} (2nd ed). New York: Wiley. Firth, D. (1992) Bias reduction, the Jeffreys prior and GLIM. In \emph{Advances in GLIM and Statistical Modelling}, Eds. Fahrmeir, L., Francis, B. J., Gilchrist, R. and Tutz, G., pp91--100. New York: Springer. Firth, D. (1993) Bias reduction of maximum likelihood estimates. \emph{Biometrika} \strong{80}, 27--38. Firth, D. (2005) Bradley-Terry models in R. \emph{Journal of Statistical Software}, \strong{12}(1), 1--12. Stigler, S. (1994) Citation patterns in the journals of statistics and probability. \emph{Statistical Science} \strong{9}, 94--108. Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. } \seealso{ \code{\link[=countsToBinomial]{countsToBinomial()}}, \code{\link[=glmmPQL]{glmmPQL()}}, \code{\link[=BTabilities]{BTabilities()}}, \code{\link[=residuals.BTm]{residuals.BTm()}}, \code{\link[=add1.BTm]{add1.BTm()}}, \code{\link[=anova.BTm]{anova.BTm()}} } \author{ Heather Turner, David Firth } \keyword{models} BradleyTerry2/man/countsToBinomial.Rd0000744000176200001440000000231413152515662017306 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/countsToBinomial.R \name{countsToBinomial} \alias{countsToBinomial} \title{Convert Contingency Table of Wins to Binomial Counts} \usage{ countsToBinomial(xtab) } \arguments{ \item{xtab}{a contingency table of wins cross-classified by \dQuote{winner} and \dQuote{loser}} } \value{ A data frame with four columns \item{player1 }{ the first player in the contest. } \item{player2 }{ the second player in the contest. } \item{win1 }{ the number of times \code{player1} won. } \item{win2 }{ the number of times \code{player2} won. } } \description{ Convert a contingency table of wins to a four-column data frame containing the number of wins and losses for each pair of players. } \examples{ ######################################################## ## Statistics journal citation data from Stigler (1994) ## -- see also Agresti (2002, p448) ######################################################## citations ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") citations.sf } \seealso{ \code{\link[=BTm]{BTm()}} } \author{ Heather Turner } \keyword{models} BradleyTerry2/man/add1.BTm.Rd0000744000176200001440000000541413152515662015313 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/add1.BTm.R \name{add1.BTm} \alias{add1.BTm} \alias{drop1.BTm} \title{Add or Drop Single Terms to/from a Bradley Terry Model} \usage{ \method{add1}{BTm}(object, scope, scale = 0, test = c("none", "Chisq", "F"), x = NULL, ...) } \arguments{ \item{object}{a fitted object of class inheriting from \code{"BTm"}.} \item{scope}{a formula specifying the model including all terms to be considered for adding or dropping.} \item{scale}{an estimate of the dispersion. Not implemented for models with random effects.} \item{test}{should a p-value be returned? The F test is only appropriate for models with no random effects for which the dispersion has been estimated. The Chisq test is a likelihood ratio test for models with no random effects, otherwise a Wald test.} \item{x}{a model matrix containing columns for all terms in the scope. Useful if \code{add1} is to be called repeatedly. \strong{Warning:} no checks are done on its validity.} \item{\dots}{further arguments passed to \code{\link[=add1.glm]{add1.glm()}}.} } \value{ An object of class \code{"anova"} summarizing the differences in fit between the models. } \description{ Add or drop single terms within the limit specified by the \code{scope} argument. For models with no random effects, compute an analysis of deviance table, otherwise compute the Wald statistic of the parameters that have been added to or dropped from the model. } \details{ The hierarchy is respected when considering terms to be added or dropped: all main effects contained in a second-order interaction must remain, and so on. In a scope formula \samp{.} means \sQuote{what is already there}. For \code{drop1}, a missing \code{scope} is taken to mean that all terms in the model may be considered for dropping. If \code{scope} includes player covariates and there are players with missing values over these covariates, then a separate ability will be estimated for these players in \emph{all} fitted models. Similarly if there are missing values in any contest-level variables in \code{scope}, the corresponding contests will be omitted from all models. If \code{formula} includes random effects, the same random effects structure will apply to all models. } \examples{ result <- rep(1, nrow(flatlizards$contests)) BTmodel1 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + (1|..), data = flatlizards, tol = 1e-4, sigma = 2, trace = TRUE) drop1(BTmodel1) add1(BTmodel1, ~ . + head.length[..] + SVL[..], test = "Chisq") BTmodel2 <- update(BTmodel1, formula = ~ . + head.length[..]) drop1(BTmodel2, test = "Chisq") } \seealso{ \code{\link[=BTm]{BTm()}}, \code{\link[=anova.BTm]{anova.BTm()}} } \author{ Heather Turner } \keyword{models} BradleyTerry2/man/springall.Rd0000744000176200001440000000571413152515662016017 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/springall.R \docType{data} \name{springall} \alias{springall} \title{Springall (1973) Data on Subjective Evaluation of Flavour Strength} \format{A list containing two data frames, \code{springall$contests} and \code{springall$predictors}. The \code{springall$contests} data frame has 36 observations (one for each possible pairwise comparison of the 9 treatments) on the following 7 variables: \describe{ \item{row}{a factor with levels \code{1:9}, the row number in Springall's dataset} # \item{col}{a factor with levels \code{1:9}, the column number in Springall's dataset} \item{win}{integer, the number of wins for column treatment over row treatment} \item{loss}{integer, the number of wins for row treatment over column treatment} \item{tie}{integer, the number of ties between row and column treatments} \item{win.adj}{numeric, equal to \code{win + tie/2}} \item{loss.adj}{numeric, equal to \code{loss + tie/2}} } The \code{predictors} data frame has 9 observations (one for each treatment) on the following 5 variables: \describe{ \item{flav}{numeric, the flavour concentration} \item{gel}{numeric, the gel concentration} \item{flav.2}{numeric, equal to \code{flav^2}} \item{gel.2}{numeric, equal to \code{gel^2}} \item{flav.gel}{numeric, equal to \code{flav * gel}} }} \source{ Springall, A (1973) Response surface fitting using a generalization of the Bradley-Terry paired comparison method. \emph{Applied Statistics} \strong{22}, 59--68. } \usage{ springall } \description{ Data from Section 7 of the paper by Springall (1973) on Bradley-Terry response surface modelling. An experiment to assess the effects of gel and flavour concentrations on the subjective assessment of flavour strength by pair comparisons. } \details{ The variables \code{win.adj} and \code{loss.adj} are provided in order to allow a simple way of handling ties (in which a tie counts as half a win and half a loss), which is slightly different numerically from the Rao and Kupper (1967) model that Springall (1973) uses. } \examples{ ## ## Fit the same response-surface model as in section 7 of ## Springall (1973). ## ## Differences from Springall's fit are minor, arising from the ## different treatment of ties. ## ## Springall's model in the paper does not include the random effect. ## In this instance, however, that makes no difference: the random-effect ## variance is estimated as zero. ## summary(springall.model <- BTm(cbind(win.adj, loss.adj), col, row, ~ flav[..] + gel[..] + flav.2[..] + gel.2[..] + flav.gel[..] + (1 | ..), data = springall)) } \references{ Rao, P. V. and Kupper, L. L. (1967) Ties in paired-comparison experiments: a generalization of the Bradley-Terry model. \emph{Journal of the American Statistical Association}, \strong{63}, 194--204. } \author{ David Firth } \keyword{datasets} BradleyTerry2/man/anova.BTm.Rd0000744000176200001440000000555413152515662015613 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/anova.BTm.R \name{anova.BTm} \alias{anova.BTm} \title{Compare Nested Bradley Terry Models} \usage{ \method{anova}{BTm}(object, ..., dispersion = NULL, test = NULL) } \arguments{ \item{object}{a fitted object of class inheriting from \code{"BTm"}.} \item{...}{additional \code{"BTm"} objects.} \item{dispersion}{a value for the dispersion. Not implemented for models with random effects.} \item{test}{optional character string (partially) matching one of \code{"Chisq"}, \code{"F"} or \code{"Cp"} to specify that p-values should be returned. The Chisq test is a likelihood ratio test for models with no random effects, otherwise a Wald test. Options \code{"F"} and \code{"Cp"} are only applicable to models with no random effects, see \code{\link[=stat.anova]{stat.anova()}}.} } \value{ An object of class \code{"anova"} inheriting from class \code{"data.frame"}. } \description{ Compare nested models inheriting from class \code{"BTm"}. For models with no random effects, compute analysis of deviance table, otherwise compute Wald tests of additional terms. } \details{ For models with no random effects, an analysis of deviance table is computed using \code{\link[=anova.glm]{anova.glm()}}. Otherwise, Wald tests are computed as detailed here. If a single object is specified, terms are added sequentially and a Wald statistic is computed for the extra parameters. If the full model includes player covariates and there are players with missing values over these covariates, then the \code{NULL} model will include a separate ability for these players. If there are missing values in any contest-level variables in the full model, the corresponding contests will be omitted throughout. The random effects structure of the full model is assumed for all sub-models. For a list of objects, consecutive pairs of models are compared by computing a Wald statistic for the extra parameters in the larger of the two models. The Wald statistic is always based on the variance-covariance matrix of the larger of the two models being compared. } \section{Warning}{ The comparison between two or more models will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values and 's default of \code{na.action = na.omit} is used. An error will be returned in this case. The same problem will occur when separate abilities have been estimated for different subsets of players in the models being compared. However no warning is given in this case. } \examples{ result <- rep(1, nrow(flatlizards$contests)) BTmodel <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + (1|..), data = flatlizards, trace = TRUE) anova(BTmodel) } \seealso{ \code{\link[=BTm]{BTm()}}, \code{\link[=add1.BTm]{add1.BTm()}} } \author{ Heather Turner } \keyword{models} BradleyTerry2/man/CEMS.Rd0000744000176200001440000001566713152515662014563 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/CEMS.R \docType{data} \name{CEMS} \alias{CEMS} \title{Dittrich, Hatzinger and Katzenbeisser (1998, 2001) Data on Management School Preference in Europe} \format{A list containing three data frames, \code{CEMS$preferences}, \code{CEMS$students} and \code{CEMS$schools}. The \code{CEMS$preferences} data frame has \code{303 * 15 = 4505} observations (15 possible comparisons, for each of 303 students) on the following 8 variables: \describe{ \item{student}{a factor with levels \code{1:303}} \item{school1}{a factor with levels \code{c("Barcelona", "London", "Milano", "Paris", "St.Gallen", "Stockholm")}; the first management school in a comparison} \item{school2}{a factor with the same levels as \code{school1}; the second management school in a comparison} \item{win1}{integer (value 0 or 1) indicating whether \code{school1} was preferred to \code{school2}} \item{win2}{integer (value 0 or 1) indicating whether \code{school2} was preferred to \code{school1}} \item{tied}{integer (value 0 or 1) indicating whether no preference was expressed} \item{win1.adj}{numeric, equal to \code{win1 + tied/2}} \item{win2.adj}{numeric, equal to \code{win2 + tied/2}} } The \code{CEMS$students} data frame has 303 observations (one for each student) on the following 8 variables: \describe{ \item{STUD}{a factor with levels \code{c("other", "commerce")}, the student's main discipline of study} \item{ENG}{a factor with levels \code{c("good, poor")}, indicating the student's knowledge of English} \item{FRA}{a factor with levels \code{c("good, poor")}, indicating the student's knowledge of French} \item{SPA}{a factor with levels \code{c("good, poor")}, indicating the student's knowledge of Spanish} \item{ITA}{a factor with levels \code{c("good, poor")}, indicating the student's knowledge of Italian} \item{WOR}{a factor with levels \code{c("no", "yes")}, whether the student was in full-time employment while studying} \item{DEG}{a factor with levels \code{c("no", "yes")}, whether the student intended to take an international degree} \item{SEX}{a factor with levels \code{c("female", "male")} } } The \code{CEMS$schools} data frame has 6 observations (one for each management school) on the following 7 variables: \describe{ \item{Barcelona}{numeric (value 0 or 1)} \item{London}{numeric (value 0 or 1)} \item{Milano}{numeric (value 0 or 1)} \item{Paris}{numeric (value 0 or 1)} \item{St.Gallen}{numeric (value 0 or 1)} \item{Stockholm}{numeric (value 0 or 1)} \item{LAT}{numeric (value 0 or 1) indicating a 'Latin' city} }} \source{ Royal Statistical Society datasets website, at http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1467-9876/homepage/47_4.htm. } \usage{ CEMS } \description{ \emph{Community of European management schools} (CEMS) data as used in the paper by Dittrich et al. (1998, 2001), re-formatted for use with \code{\link[=BTm]{BTm()}} } \details{ The variables \code{win1.adj} and \code{win2.adj} are provided in order to allow a simple way of handling ties (in which a tie counts as half a win and half a loss), which is slightly different numerically from the Davidson (1970) method that is used by Dittrich et al. (1998): see the examples. } \examples{ ## ## Fit the standard Bradley-Terry model, using the simple 'add 0.5' ## method to handle ties: ## table3.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~.. , refcat = "Stockholm", data = CEMS) ## The results in Table 3 of Dittrich et al (2001) are reproduced ## approximately by a simple re-scaling of the estimates: table3 <- summary(table3.model)$coef[, 1:2]/1.75 print(table3) ## ## Now fit the 'final model' from Table 6 of Dittrich et al.: ## table6.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * Paris[..] + WOR[student] * Milano[..] + WOR[student] * Barcelona[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) ## ## Again re-scale to reproduce approximately Table 6 of Dittrich et ## al. (2001): ## table6 <- summary(table6.model)$coef[, 1:2]/1.75 print(table6) ## \dontrun{ ## Now the slightly simplified model of Table 8 of Dittrich et al. (2001): ## table8.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * LAT[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) table8 <- summary(table8.model)$coef[, 1:2]/1.75 ## ## Notice some larger than expected discrepancies here (the coefficients ## named "..Barcelona", "..Milano" and "..Paris") from the results in ## Dittrich et al. (2001). Apparently a mistake was made in Table 8 of ## the published Corrigendum note (R. Dittrich personal communication, ## February 2010). ## print(table8) } } \references{ Davidson, R. R. (1970) Extending the Bradley-Terry model to accommodate ties in paired comparison experiments. \emph{Journal of the American Statistical Association} \strong{65}, 317--328. Dittrich, R., Hatzinger, R. and Katzenbeisser, W. (1998) Modelling the effect of subject-specific covariates in paired comparison studies with an application to university rankings. \emph{Applied Statistics} \strong{47}, 511--525. Dittrich, R., Hatzinger, R. and Katzenbeisser, W. (2001) Corrigendum: Modelling the effect of subject-specific covariates in paired comparison studies with an application to university rankings. \emph{Applied Statistics} \strong{50}, 247--249. Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. } \author{ David Firth } \keyword{datasets} BradleyTerry2/man/citations.Rd0000744000176200001440000000327613157761356016032 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/citations.R \docType{data} \name{citations} \alias{citations} \title{Statistics Journal Citation Data from Stigler (1994)} \format{A 4 by 4 contingency table of citations, cross-classified by the factors \code{cited} and \code{citing} each with levels \code{Biometrika}, \code{Comm Statist}, \code{JASA}, and \code{JRSS-B}.} \source{ Agresti, A. (2002) \emph{Categorical Data Analysis} (2nd ed). New York: Wiley. } \usage{ citations } \description{ Extracted from a larger table in Stigler (1994). Inter-journal citation counts for four journals, \dQuote{Biometrika}, \dQuote{Comm Statist.}, \dQuote{JASA} and \dQuote{JRSS-B}, as used on p448 of Agresti (2002). } \details{ In the context of paired comparisons, the \sQuote{winner} is the cited journal and the \sQuote{loser} is the one doing the citing. } \examples{ ## Data as a square table, as in Agresti p448 citations ## ## Convert frequencies to success/failure data: ## citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## Standard Bradley-Terry model fitted to these data citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) } \references{ Firth, D. (2005) Bradley-Terry models in R. \emph{Journal of Statistical Software} \strong{12}(1), 1--12. Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. Stigler, S. (1994) Citation patterns in the journals of statistics and probability. \emph{Statistical Science} \strong{9}, 94--108. } \seealso{ \code{\link[=BTm]{BTm()}} } \keyword{datasets} BradleyTerry2/man/predict.BTglmmPQL.Rd0000744000176200001440000000725613152515662017217 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/predict.BTglmmPQL.R \name{predict.BTglmmPQL} \alias{predict.BTglmmPQL} \title{Predict Method for BTglmmPQL Objects} \usage{ \method{predict}{BTglmmPQL}(object, newdata = NULL, newrandom = NULL, level = ifelse(object$sigma == 0, 0, 1), type = c("link", "response", "terms"), se.fit = FALSE, terms = NULL, na.action = na.pass, ...) } \arguments{ \item{object}{a fitted object of class \code{"BTglmmPQL"}} \item{newdata}{(optional) a data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used.} \item{newrandom}{if \code{newdata} is provided, a corresponding design matrix for the random effects, will columns corresponding to the random effects estimated in the original model.} \item{level}{an integer vector giving the level(s) at which predictions are required. Level zero corresponds to population-level predictions (fixed effects only), whilst level one corresponds to the individual-level predictions (full model) which are NA for contests involving individuals not in the original data. By default \code{level = 0} if the model converged to a fixed effects model, \code{1} otherwise.} \item{type}{the type of prediction required. The default is on the scale of the linear predictors; the alternative \code{"response"} is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and \code{type = "response"} gives the predicted probabilities. The \code{"terms"} option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale (fixed effects only).} \item{se.fit}{logical switch indicating if standard errors are required.} \item{terms}{with \code{type ="terms"} by default all terms are returned. A character vector specifies which terms are to be returned.} \item{na.action}{function determining what should be done with missing values in \code{newdata}. The default is to predict \code{NA}.} \item{\dots}{further arguments passed to or from other methods.} } \value{ If \code{se.fit = FALSE}, a vector or matrix of predictions. If \code{se = TRUE}, a list with components \item{fit }{Predictions} \item{se.fit }{Estimated standard errors} } \description{ Obtain predictions and optionally standard errors of those predictions from a \code{"BTglmmPQL"} object. } \details{ If \code{newdata} is omitted the predictions are based on the data used for the fit. In that case how cases with missing values in the original fit are treated is determined by the \code{na.action} argument of that fit. If \code{na.action = na.omit} omitted cases will not appear in the residuals, whereas if \code{na.action = na.exclude} they will appear (in predictions and standard errors), with residual value \code{NA}. See also \code{napredict}. Standard errors for the predictions are approximated assuming the variance of the random effects is known, see Booth and Hobert (1998). } \examples{ seedsModel <- glmmPQL(cbind(r, n - r) ~ seed + extract, random = diag(nrow(seeds)), family = binomial, data = seeds) pred <- predict(seedsModel, level = 0) predTerms <- predict(seedsModel, type = "terms") all.equal(pred, rowSums(predTerms) + attr(predTerms, "constant")) } \references{ Booth, J. G. and Hobert, J. P. (1998). Standard errors of prediction in Generalized Linear Mixed Models. \emph{Journal of the American Statistical Association} \strong{93}(441), 262 -- 272. } \seealso{ \code{\link[=predict.glm]{predict.glm()}}, \code{\link[=predict.BTm]{predict.BTm()}} } \author{ Heather Turner } \keyword{models} BradleyTerry2/man/plotProportions.Rd0000744000176200001440000002050413152515662017253 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/plotProportions.R \name{plotProportions} \alias{plotProportions} \title{Plot Proportions of Tied Matches and Non-tied Matches Won} \usage{ plotProportions(win, tie = NULL, loss, player1, player2, abilities = NULL, home.adv = NULL, tie.max = NULL, tie.scale = NULL, tie.mode = NULL, at.home1 = NULL, at.home2 = NULL, data = NULL, subset = NULL, bin.size = 20, xlab = "P(player1 wins | not a tie)", ylab = "Proportion", legend = NULL, col = 1:2, ...) } \arguments{ \item{win}{a logical vector: \code{TRUE} if player1 wins, \code{FALSE} otherwise.} \item{tie}{a logical vector: \code{TRUE} if the outcome is a tie, \code{FALSE} otherwise (\code{NULL} if there are no ties).} \item{loss}{a logical vector: \code{TRUE} if player1 loses, \code{FALSE} otherwise.} \item{player1}{an ID factor specifying the first player in each contest, with the same set of levels as \code{player2}.} \item{player2}{an ID factor specifying the second player in each contest, with the same set of levels as \code{player2}.} \item{abilities}{the fitted abilities from a generalized Davidson model (or a Bradley-Terry model).} \item{home.adv}{if applicable, the fitted home advantage parameter from a generalized Davidson model (or a Bradley-Terry model).} \item{tie.max}{the fitted parameter from a generalized Davidson model corresponding to the maximum tie probability.} \item{tie.scale}{if applicable, the fitted parameter from a generalized Davidson model corresponding to the scale of dependence of the tie probability on the probability that \code{player1} wins, given the outcome is not a draw.} \item{tie.mode}{if applicable, the fitted parameter from a generalized Davidson model corresponding to the location of maximum tie probability, in terms of the probability that \code{player1} wins, given the outcome is not a draw.} \item{at.home1}{a logical vector: \code{TRUE} if \code{player1} is at home, \code{FALSE} otherwise.} \item{at.home2}{a logical vector: \code{TRUE} if \code{player2} is at home, \code{FALSE} otherwise.} \item{data}{an optional data frame providing variables required by the model, with one observation per match.} \item{subset}{an optional logical or numeric vector specifying a subset of observations to include in the plot.} \item{bin.size}{the approximate number of matches in each bin.} \item{xlab}{the label to use for the x-axis.} \item{ylab}{the label to use for the y-axis.} \item{legend}{text to use for the legend.} \item{col}{a vector specifying colours to use for the proportion of non-tied matches won and the proportion of tied matches.} \item{\dots}{further arguments passed to plot.} } \value{ A list of data frames: \item{win}{ a data frame comprising \code{prop.win}, the proportion of non-tied matches won by the first player in each bin and \code{bin.win}, the mid-point of each bin. } \item{tie}{ (when ties are present) a data frame comprising \code{prop.tie}, the proportion of tied matches in each bin and \code{bin.tie}, the mid-point of each bin. } } \description{ Plot proportions of tied matches and non-tied matches won by the first player, within matches binned by the relative player ability, as expressed by the probability that the first player wins, given the match is not a tie. Add fitted lines for each set of matches, as given by the generalized Davidson model. } \details{ If \code{home.adv} is specified, the results are re-ordered if necessary so that the home player comes first; any matches played on neutral ground are omitted. First the probability that the first player wins given that the match is not a tie is computed: \deqn{expit(home.adv + abilities[player1] - abilities[player2])} where \code{home.adv} and \code{abilities} are parameters from a generalized Davidson model that have been estimated on the log scale. The matches are then binned according to this probability, grouping together matches with similar relative ability between the first player and the second player. Within each bin, the proportion of tied matches is computed and these proportions are plotted against the mid-point of the bin. Then the bins are re-computed omitting the tied games and the proportion of non-tied matches won by the first player is found and plotted against the new mid-point. Finally curves are added for the probability of a tie and the conditional probability of win given the match is not a tie, under a generalized Davidson model with parameters as specified by \code{tie.max}, \code{tie.scale} and \code{tie.mode}. The function can also be used to plot the proportions of wins along with the fitted probability of a win under the Bradley-Terry model. } \note{ This function is designed for single match outcomes, therefore data aggregated over player pairs will need to be expanded. } \examples{ #### A Bradley-Terry example using icehockey data ## Fit the standard Bradley-Terry model, ignoring home advantage standardBT <- BTm(outcome = result, player1 = visitor, player2 = opponent, id = "team", data = icehockey) ## comparing teams on a "level playing field" levelBT <- BTm(result, data.frame(team = visitor, home.ice = 0), data.frame(team = opponent, home.ice = home.ice), ~ team + home.ice, id = "team", data = icehockey) ## compare fit to observed proportion won ## exclude tied matches as not explicitly modelled here par(mfrow = c(1, 2)) plotProportions(win = result == 1, loss = result == 0, player1 = visitor, player2 = opponent, abilities = BTabilities(standardBT)[,1], data = icehockey, subset = result != 0.5, main = "Without home advantage") plotProportions(win = result == 1, loss = result == 0, player1 = visitor, player2 = opponent, home.adv = coef(levelBT)["home.ice"], at.home1 = 0, at.home2 = home.ice, abilities = BTabilities(levelBT)[,1], data = icehockey, subset = result != 0.5, main = "With home advantage") #### A generalized Davidson example using football data if (require(gnm)) { ## subset to first and last season for illustration football <- subset(football, season \%in\% c("2008-9", "2012-13")) ## convert to trinomial counts football.tri <- expandCategorical(football, "result", idvar = "match") ## add variable to indicate whether team playing at home football.tri$at.home <- !logical(nrow(football.tri)) ## fit Davidson model Dav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ## fit shifted & scaled Davidson model shifScalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, tie.mode = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ## diagnostic plots main <- c("Davidson", "Shifted & Scaled Davidson") mod <- list(Dav, shifScalDav) names(mod) <- main alpha <- names(coef(Dav)[-(1:2)]) ## use football.tri data so that at.home can be found, ## but restrict to actual match results par(mfrow = c(1,2)) for (i in 1:2) { coef <- parameters(mod[[i]]) plotProportions(result == 1, result == 0, result == -1, home:season, away:season, abilities = coef[alpha], home.adv = coef["home.adv"], tie.max = coef["tie.max"], tie.scale = coef["tie.scale"], tie.mode = coef["tie.mode"], at.home1 = at.home, at.home2 = !at.home, main = main[i], data = football.tri, subset = count == 1) } } } \seealso{ \code{\link[=GenDavidson]{GenDavidson()}}, \code{\link[=BTm]{BTm()}} } \author{ Heather Turner } \keyword{models} \keyword{nonlinear} BradleyTerry2/man/seeds.Rd0000744000176200001440000000223013152515662015115 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/seeds.R \docType{data} \name{seeds} \alias{seeds} \title{Seed Germination Data from Crowder (1978)} \format{A data frame with 21 observations on the following 4 variables. \describe{ \item{r}{the number of germinated seeds.} \item{n}{the total number of seeds.} \item{seed}{the seed variety.} \item{extract}{the type of root extract.} }} \source{ Crowder, M. (1978) Beta-Binomial ANOVA for proportions. \emph{Applied Statistics}, \strong{27}, 34--37. } \usage{ seeds } \description{ Data from Crowder(1978) giving the proportion of seeds germinated for 21 plates that were arranged according to a 2x2 factorial layout by seed variety and type of root extract. } \examples{ summary(glmmPQL(cbind(r, n - r) ~ seed + extract, random = diag(nrow(seeds)), family = binomial, data = seeds)) } \references{ Breslow, N. E. and Clayton, D. G. (1993) Approximate inference in Generalized Linear Mixed Models. \emph{Journal of the American Statistical Association}, \strong{88}(421), 9--25. } \seealso{ \code{\link[=glmmPQL]{glmmPQL()}} } \keyword{datasets} BradleyTerry2/man/baseball.Rd0000744000176200001440000000370413152515662015566 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/baseball.R \docType{data} \name{baseball} \alias{baseball} \title{Baseball Data from Agresti (2002)} \format{A data frame with 42 observations on the following 4 variables. \describe{ \item{home.team}{a factor with levels \code{Baltimore}, \code{Boston}, \code{Cleveland}, \code{Detroit}, \code{Milwaukee}, \code{New York}, \code{Toronto}.} \item{away.team}{a factor with levels \code{Baltimore}, \code{Boston}, \code{Cleveland}, \code{Detroit}, \code{Milwaukee}, \code{New York}, \code{Toronto}.} \item{home.wins}{a numeric vector.} \item{away.wins}{a numeric vector.} }} \source{ Page 438 of Agresti, A. (2002) \emph{Categorical Data Analysis} (2nd Edn.). New York: Wiley. } \usage{ baseball } \description{ Baseball results for games in the 1987 season between 7 teams in the Eastern Division of the American League. } \note{ This dataset is in a simpler format than the one described in Firth (2005). } \examples{ ## This reproduces the analysis in Sec 10.6 of Agresti (2002). data(baseball) # start with baseball data as provided by package ## Simple Bradley-Terry model, ignoring home advantage: baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") ## Now incorporate the "home advantage" effect baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) ## Compare the fit of these two models: anova(baseballModel1, baseballModel2) } \references{ Firth, D. (2005) Bradley-Terry models in R. \emph{Journal of Statistical Software}, \strong{12}(1), 1--12. Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. } \seealso{ \code{\link[=BTm]{BTm()}} } \keyword{datasets} BradleyTerry2/man/flatlizards.Rd0000744000176200001440000001465013152515662016342 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/flatlizards.R \docType{data} \name{flatlizards} \alias{flatlizards} \title{Augrabies Male Flat Lizards: Contest Results and Predictor Variables} \format{This dataset is a list containing two data frames: \code{flatlizards$contests} and \code{flatlizards$predictors}. The \code{flatlizards$contests} data frame has 100 observations on the following 2 variables: \describe{ \item{winner}{a factor with 77 levels \code{lizard003} ... \code{lizard189}.} \item{loser}{a factor with the same 77 levels \code{lizard003} ... \code{lizard189}.} } The \code{flatlizards$predictors} data frame has 77 observations (one for each of the 77 lizards) on the following 18 variables: \describe{ \item{id}{factor with 77 levels (3 5 6 ... 189), the lizard identifiers.} \item{throat.PC1}{numeric, the first principal component of the throat spectrum.} \item{throat.PC2}{numeric, the second principal component of the throat spectrum.} \item{throat.PC3}{numeric, the third principal component of the throat spectrum.} \item{frontleg.PC1}{numeric, the first principal component of the front-leg spectrum.} \item{frontleg.PC2}{numeric, the second principal component of the front-leg spectrum.} \item{frontleg.PC3}{numeric, the third principal component of the front-leg spectrum.} \item{badge.PC1}{numeric, the first principal component of the ventral colour patch spectrum.} \item{badge.PC2}{numeric, the second principal component of the ventral colour patch spectrum.} \item{badge.PC3}{numeric, the third principal component of the ventral colour patch spectrum.} \item{badge.size}{numeric, a measure of the area of the ventral colour patch.} \item{testosterone}{numeric, a measure of blood testosterone concentration.} \item{SVL}{numeric, the snout-vent length of the lizard.} \item{head.length}{numeric, head length.} \item{head.width}{numeric, head width.} \item{head.height}{numeric, head height.} \item{condition}{numeric, a measure of body condition.} \item{repro.tactic}{a factor indicating reproductive tactic; levels are \code{resident} and \code{floater}.} }} \source{ The data were collected by Dr Martin Whiting, http://whitinglab.com/?page_id=3380, and they appear here with his kind permission. } \usage{ flatlizards } \description{ Data collected at Augrabies Falls National Park (South Africa) in September-October 2002, on the contest performance and background attributes of 77 male flat lizards (\emph{Platysaurus broadleyi}). The results of exactly 100 contests were recorded, along with various measurements made on each lizard. Full details of the study are in Whiting et al. (2006). } \details{ There were no duplicate contests (no pair of lizards was seen fighting more than once), and there were no tied contests (the result of each contest was clear). The variables \code{head.length}, \code{head.width}, \code{head.height} and \code{condition} were all computed as residuals (of directly measured head length, head width, head height and body mass index, respectively) from simple least-squares regressions on \code{SVL}. Values of some predictors are missing (\code{NA}) for some lizards, \sQuote{at random}, because of instrument problems unconnected with the value of the measurement being made. } \examples{ ## ## Fit the standard Bradley-Terry model, using the bias-reduced ## maximum likelihood method: ## result <- rep(1, nrow(flatlizards$contests)) BTmodel <- BTm(result, winner, loser, br = TRUE, data = flatlizards$contests) summary(BTmodel) ## ## That's fairly useless, though, because of the rather small ## amount of data on each lizard. And really the scientific ## interest is not in the abilities of these particular 77 ## lizards, but in the relationship between ability and the ## measured predictor variables. ## ## So next fit (by maximum likelihood) a "structured" B-T model in ## which abilities are determined by a linear predictor. ## ## This reproduces results reported in Table 1 of Whiting et al. (2006): ## Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..], data = flatlizards) summary(Whiting.model) ## ## Equivalently, fit the same model using glmmPQL: ## Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), sigma = 0, sigma.fixed = TRUE, data = flatlizards) summary(Whiting.model) ## ## But that analysis assumes that the linear predictor formula for ## abilities is _perfect_, i.e., that there is no error in the linear ## predictor. This will always be unrealistic. ## ## So now fit the same predictor but with a normally distributed error ## term --- a generalized linear mixed model --- by using the BTm ## function instead of glm. ## Whiting.model2 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards, trace = TRUE) summary(Whiting.model2) ## ## The estimated coefficients (of throat.PC1, throat.PC3, ## head.length and SVL are not changed substantially by ## the recognition of an error term in the model; but the estimated ## standard errors are larger, as expected. The main conclusions from ## Whiting et al. (2006) are unaffected. ## ## With the normally distributed random error included, it is perhaps ## at least as natural to use probit rather than logit as the link ## function: ## require(stats) Whiting.model3 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) summary(Whiting.model3) BTabilities(Whiting.model3) ## Note the "separate" attribute here, identifying two lizards with ## missing values of at least one predictor variable ## ## Modulo the usual scale change between logit and probit, the results ## are (as expected) very similar to Whiting.model2. } \references{ Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. Whiting, M. J., Stuart-Fox, D. M., O'Connor, D., Firth, D., Bennett, N. C. and Blomberg, S. P. (2006). Ultraviolet signals ultra-aggression in a lizard. \emph{Animal Behaviour} \strong{72}, 353--363. } \seealso{ \code{\link[=BTm]{BTm()}} } \keyword{datasets} BradleyTerry2/man/reexports.Rd0000744000176200001440000000062513152515662016053 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/qvcalc.BTabilities.R \docType{import} \name{reexports} \alias{reexports} \alias{qvcalc} \title{Objects exported from other packages} \keyword{internal} \description{ These objects are imported from other packages. Follow the links below to see their documentation. \describe{ \item{qvcalc}{\code{\link[qvcalc]{qvcalc}}} }} BradleyTerry2/man/glmmPQL.Rd0000744000176200001440000001615413152515662015335 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/glmmPQL.R \name{glmmPQL} \alias{glmmPQL} \title{PQL Estimation of Generalized Linear Mixed Models} \usage{ glmmPQL(fixed, random = NULL, family = "binomial", data = NULL, subset = NULL, weights = NULL, offset = NULL, na.action = NULL, start = NULL, etastart = NULL, mustart = NULL, control = glmmPQL.control(...), sigma = 0.1, sigma.fixed = FALSE, model = TRUE, x = FALSE, contrasts = NULL, ...) } \arguments{ \item{fixed}{a formula for the fixed effects.} \item{random}{a design matrix for the random effects, with number of rows equal to the length of variables in \code{formula}.} \item{family}{a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. (See \code{\link[=family]{family()}} for details of family functions.)} \item{data}{an optional data frame, list or environment (or object coercible by \code{\link[=as.data.frame]{as.data.frame()}} to a data frame) containing the variables in the model. If not found in \code{data}, the variables are taken from \code{environment(formula)}, typically the environment from which \code{glmmPQL} called.} \item{subset}{an optional logical or numeric vector specifying a subset of observations to be used in the fitting process.} \item{weights}{an optional vector of \sQuote{prior weights} to be used in the fitting process.} \item{offset}{an optional numeric vector to be added to the linear predictor during fitting. One or more \code{offset} terms can be included in the formula instead or as well, and if more than one is specified their sum is used. See \code{\link[=model.offset]{model.offset()}}.} \item{na.action}{a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[=options]{options()}}, and is \code{\link[=na.fail]{na.fail()}} if that is unset.} \item{start}{starting values for the parameters in the linear predictor.} \item{etastart}{starting values for the linear predictor.} \item{mustart}{starting values for the vector of means.} \item{control}{a list of parameters for controlling the fitting process. See the \code{\link[=glmmPQL.control]{glmmPQL.control()}} for details.} \item{sigma}{a starting value for the standard deviation of the random effects.} \item{sigma.fixed}{logical: whether or not the standard deviation of the random effects should be fixed at its starting value.} \item{model}{logical: whether or not the model frame should be returned.} \item{x}{logical: whether or not the design matrix for the fixed effects should be returned.} \item{contrasts}{an optional list. See the \code{contrasts.arg} argument of \code{\link[=model.matrix]{model.matrix()}}.} \item{\dots}{arguments to be passed to \code{\link[=glmmPQL.control]{glmmPQL.control()}}.} } \value{ An object of class \code{"BTglmmPQL"} which inherits from \code{"glm"} and \code{"lm"}: \item{coefficients}{ a named vector of coefficients, with a \code{"random"} attribute giving the estimated random effects.} \item{residuals}{ the working residuals from the final iteration of the IWLS loop.} \item{random}{the design matrix for the random effects.} \item{fitted.values}{ the fitted mean values, obtained by transforming the linear predictors by the inverse of the link function.} \item{rank}{the numeric rank of the fitted linear model.} \item{family}{the \code{family} object used.} \item{linear.predictors}{the linear fit on link scale.} \item{deviance}{up to a constant, minus twice the maximized log-likelihood.} \item{aic}{a version of Akaike's \emph{An Information Criterion}, minus twice the maximized log-likelihood plus twice the number of parameters, computed by the \code{aic} component of the family.} \item{null.deviance}{the deviance for the null model, comparable with \code{deviance}.} \item{iter}{the numer of iterations of the PQL algorithm.} \item{weights}{the working weights, that is the weights in the final iteration of the IWLS loop.} \item{prior.weights}{the weights initially supplied, a vector of \code{1}'s if none were.} \item{df.residual}{the residual degrees of freedom.} \item{df.null}{the residual degrees of freedom for the null model.} \item{y}{if requested (the default) the \code{y} vector used. (It is a vector even for a binomial model.)} \item{x}{if requested, the model matrix.} \item{model}{if requested (the default), the model frame.} \item{converged}{logical. Was the PQL algorithm judged to have converged?} \item{call}{the matched call.} \item{formula}{the formula supplied.} \item{terms}{the \code{terms} object used.} \item{data}{the \code{data} argument used.} \item{offset}{the offset vector used.} \item{control}{the value of the \code{control} argument used.} \item{contrasts}{(where relevant) the contrasts used.} \item{xlevels}{(where relevant) a record of the levels of the factors used in fitting.} \item{na.action}{(where relevant) information returned by \code{model.frame} on the special handling of \code{NA}s.} \item{sigma}{the estimated standard deviation of the random effects} \item{sigma.fixed}{logical: whether or not \code{sigma} was fixed} \item{varFix}{the variance-covariance matrix of the fixed effects} \item{varSigma}{the variance of \code{sigma}} } \description{ Fits GLMMs with simple random effects structure via Breslow and Clayton's PQL algorithm. The GLMM is assumed to be of the form \ifelse{html}{\out{g(μ) = + Ze}}{\deqn{g(\boldsymbol{\mu}) = \boldsymbol{X\beta} + \boldsymbol{Ze}}{ g(mu) = X * beta + Z * e}} where \eqn{g} is the link function, \ifelse{html}{\out{μ}}{\eqn{\boldsymbol{\mu}}{mu}} is the vector of means and \ifelse{html}{\out{X, Z}}{\eqn{\boldsymbol{X}, \boldsymbol{Z}}{X,Z}} are design matrices for the fixed effects \ifelse{html}{\out{β}}{\eqn{\boldsymbol{\beta}}{beta}} and random effects \ifelse{html}{\out{e}}{\eqn{\boldsymbol{e}}{e}} respectively. Furthermore the random effects are assumed to be i.i.d. \ifelse{html}{\out{N(0, σ2)}}{\eqn{N(0, \sigma^2)}{ N(0, sigma^2)}}. } \examples{ ############################################### ## Crowder seeds example from Breslow & Clayton ############################################### summary(glmmPQL(cbind(r, n - r) ~ seed + extract, random = diag(nrow(seeds)), family = "binomial", data = seeds)) summary(glmmPQL(cbind(r, n - r) ~ seed*extract, random = diag(nrow(seeds)), family = "binomial", data = seeds)) } \references{ Breslow, N. E. and Clayton, D. G. (1993) Approximate inference in Generalized Linear Mixed Models. \emph{Journal of the American Statistical Association} \strong{88}(421), 9--25. Harville, D. A. (1977) Maximum likelihood approaches to variance component estimation and to related problems. \emph{Journal of the American Statistical Association} \strong{72}(358), 320--338. } \seealso{ \code{\link[=predict.BTglmmPQL]{predict.BTglmmPQL()}},\code{\link[=glmmPQL.control]{glmmPQL.control()}},\code{\link[=BTm]{BTm()}} } \author{ Heather Turner } \keyword{models} BradleyTerry2/man/qvcalc.BTabilities.Rd0000744000176200001440000000647513157761356017504 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/qvcalc.BTabilities.R \name{qvcalc.BTabilities} \alias{qvcalc.BTabilities} \title{Quasi Variances for Estimated Abilities} \usage{ \method{qvcalc}{BTabilities}(object, ...) } \arguments{ \item{object}{a \code{"BTabilities"} object as returned by \code{\link[=BTabilities]{BTabilities()}}.} \item{...}{additional arguments, currently ignored.} } \value{ A list of class \code{"qv"}, with components \item{covmat}{The full variance-covariance matrix for the estimated abilities.} \item{qvframe}{A data frame with variables \code{estimate}, \code{SE}, \code{quasiSE} and \code{quasiVar}, the last two being a quasi standard error and quasi-variance for each ability.} \item{dispersion}{\code{NULL} (dispersion is fixed to 1).} \item{relerrs}{Relative errors for approximating the standard errors of all simple contrasts.} \item{factorname}{The name of the ID factor identifying players in the \code{BTm} formula.} \item{coef.indices}{\code{NULL} (no required for this method).} \item{modelcall}{The call to \code{BTm} to fit the Bradley-Terry model from which the abilities were estimated.} } \description{ A method for \code{\link[qvcalc:qvcalc]{qvcalc::qvcalc()}} to compute a set of quasi variances (and corresponding quasi standard errors) for estimated abilities from a Bradley-Terry model as returned by \code{\link[=BTabilities]{BTabilities()}}. } \details{ For details of the method see Firth (2000), Firth (2003) or Firth and de Menezes (2004). Quasi variances generalize and improve the accuracy of \dQuote{floating absolute risk} (Easton et al., 1991). This device for economical model summary was first suggested by Ridout (1989). Ordinarily the quasi variances are positive and so their square roots (the quasi standard errors) exist and can be used in plots, etc. } \examples{ example(baseball) baseball.qv <- qvcalc(BTabilities(baseballModel2)) print(baseball.qv) plot(baseball.qv, xlab = "team", levelNames = c("Bal", "Bos", "Cle", "Det", "Mil", "NY", "Tor")) } \references{ Easton, D. F, Peto, J. and Babiker, A. G. A. G. (1991) Floating absolute risk: an alternative to relative risk in survival and case-control analysis avoiding an arbitrary reference group. \emph{Statistics in Medicine} \strong{10}, 1025--1035. Firth, D. (2000) Quasi-variances in Xlisp-Stat and on the web. \emph{Journal of Statistical Software} \strong{5.4}, 1--13. At http://www.jstatsoft.org Firth, D. (2003) Overcoming the reference category problem in the presentation of statistical models. \emph{Sociological Methodology} \strong{33}, 1--18. Firth, D. and de Menezes, R. X. (2004) Quasi-variances. \emph{Biometrika} \strong{91}, 65--80. Menezes, R. X. de (1999) More useful standard errors for group and factor effects in generalized linear models. \emph{D.Phil. Thesis}, Department of Statistics, University of Oxford. Ridout, M.S. (1989). Summarizing the results of fitting generalized linear models to data from designed experiments. In: \emph{Statistical Modelling: Proceedings of GLIM89 and the 4th International Workshop on Statistical Modelling held in Trento, Italy, July 17--21, 1989} (A. Decarli et al., eds.), pp 262--269. New York: Springer. } \seealso{ \code{\link[qvcalc:worstErrors]{qvcalc::worstErrors()}}, \code{\link[qvcalc:plot.qv]{qvcalc::plot.qv()}}. } \author{ David Firth } BradleyTerry2/man/chameleons.Rd0000744000176200001440000000714313152515662016140 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/chameleons.R \docType{data} \name{chameleons} \alias{chameleons} \title{Male Cape Dwarf Chameleons: Measured Traits and Contest Outcomes} \format{A list containing three data frames: \code{chameleons$winner}, \code{chameleons$loser} and \code{chameleons$predictors}. The \code{chameleons$winner} and \code{chameleons$loser} data frames each have 106 observations (one per contest) on the following 4 variables: \describe{ \item{ID}{a factor with 35 levels \code{C01}, \code{C02}, ... , \code{C43}, the identity of the winning (or losing) male in each contest} \item{prev.wins.1}{integer (values 0 or 1), did the winner/loser of this contest win in an immediately previous contest?} \item{prev.wins.2}{integer (values 0, 1 or 2), how many of his (maximum) previous 2 contests did each male win?} \item{prev.wins.all}{integer, how many previous contests has each male won?} } The \code{chameleons$predictors} data frame has 35 observations, one for each male involved in the contests, on the following 7 variables: \describe{ \item{ch.res}{numeric, residuals of casque height regression on \code{SVL}, i.e. relative height of the bony part on the top of the chameleons' heads} \item{jl.res}{numeric, residuals of jaw length regression on \code{SVL}} \item{tl.res}{numeric, residuals of tail length regression on \code{SVL}} \item{mass.res}{numeric, residuals of body mass regression on \code{SVL} (body condition)} \item{SVL}{numeric, snout-vent length (body size)} \item{prop.main}{numeric, proportion (arcsin transformed) of area of the flank occupied by the main pink patch on the flank} \item{prop.patch}{numeric, proportion (arcsin transformed) of area of the flank occupied by the entire flank patch} }} \source{ The data were obtained by Dr Devi Stuart-Fox, http://www.zoology.unimelb.edu.au/research/groups/animal/labs/stuart-fox/index.php, and they are reproduced here with her kind permission. These are the same data that were used in Stuart-Fox, D. M., Firth, D., Moussalli, A. and Whiting, M. J. (2006) Multiple signals in chameleon contests: designing and analysing animal contests as a tournament. \emph{Animal Behaviour} \strong{71}, 1263--1271. } \usage{ chameleons } \description{ Data as used in the study by Stuart-Fox et al. (2006). Physical measurements made on 35 male Cape dwarf chameleons, and the results of 106 inter-male contests. } \details{ The published paper mentions 107 contests, but only 106 contests are included here. Contest number 16 was deleted from the data used to fit the models, because it involved a male whose predictor-variables were incomplete (and it was the only contest involving that lizard, so it is uninformative). } \examples{ ## ## Reproduce Table 3 from page 1268 of the above paper: ## summary(chameleon.model <- BTm(player1 = winner, player2 = loser, formula = ~ prev.wins.2 + ch.res[ID] + prop.main[ID] + (1|ID), id = "ID", data = chameleons)) head(BTabilities(chameleon.model)) ## ## Note that, although a per-chameleon random effect is specified as in the ## above [the term "+ (1|ID)"], the estimated variance for that random ## effect turns out to be zero in this case. The "prior experience" ## effect ["+ prev.wins.2"] in this analysis has explained most of the ## variation, leaving little for the ID-specific predictors to do. ## Despite that, two of the ID-specific predictors do emerge as ## significant. ## ## Test whether any of the other ID-specific predictors has an effect: ## add1(chameleon.model, ~ . + jl.res[ID] + tl.res[ID] + mass.res[ID] + SVL[ID] + prop.patch[ID]) } \author{ David Firth } \keyword{datasets} BradleyTerry2/man/residuals.BTm.Rd0000744000176200001440000000514613152515662016477 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/residuals.BTm.R \name{residuals.BTm} \alias{residuals.BTm} \title{Residuals from a Bradley-Terry Model} \usage{ \method{residuals}{BTm}(object, type = c("deviance", "pearson", "working", "response", "partial", "grouped"), by = object$id, ...) } \arguments{ \item{object}{a model object for which \code{inherits(model, "BTm")} is \code{TRUE}.} \item{type}{the type of residuals which should be returned. The alternatives are: \code{"deviance"} (default), \code{"pearson"}, \code{"working"}, \code{"response"}, and \code{"partial"}.} \item{by}{the grouping factor to use when \code{type = "grouped"}.} \item{...}{arguments to pass on other methods.} } \value{ A numeric vector of length equal to the number of players, with a \code{"weights"} attribute. } \description{ Computes residuals from a model object of class \code{"BTm"}. In additional to the usual options for objects inheriting from class \code{"glm"}, a \code{"grouped"} option is implemented to compute player-specific residuals suitable for diagnostic checking of a predictor involving player-level covariates. } \details{ For \code{type} other than \code{"grouped"} see \code{\link[=residuals.glm]{residuals.glm()}}. For \code{type = "grouped"} the residuals returned are weighted means of working residuals, with weights equal to the binomial denominators in the fitted model. These are suitable for diagnostic model checking, for example plotting against candidate predictors. } \examples{ ## ## See ?springall ## springall.model <- BTm(cbind(win.adj, loss.adj), col, row, ~ flav[..] + gel[..] + flav.2[..] + gel.2[..] + flav.gel[..] + (1 | ..), data = springall) res <- residuals(springall.model, type = "grouped") with(springall$predictors, plot(flav, res)) with(springall$predictors, plot(gel, res)) ## Weighted least-squares regression of these residuals on any variable ## already included in the model yields slope coefficient zero: lm(res ~ flav, weights = attr(res, "weights"), data = springall$predictors) lm(res ~ gel, weights = attr(res, "weights"), data = springall$predictors) } \references{ Firth, D. (2005) Bradley-Terry models in R. \emph{Journal of Statistical Software} \strong{12}(1), 1--12. Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. } \seealso{ \code{\link[=BTm]{BTm()}}, \code{\link[=BTabilities]{BTabilities()}} } \author{ David Firth and Heather Turner } \keyword{models} BradleyTerry2/man/sound.fields.Rd0000744000176200001440000000730113152515662016413 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/sound.fields.R \docType{data} \name{sound.fields} \alias{sound.fields} \title{Kousgaard (1984) Data on Pair Comparisons of Sound Fields} \format{A list containing two data frames, \code{sound.fields$comparisons}, and \code{sound.fields$design}. The \code{sound.fields$comparisons} data frame has 84 observations on the following 8 variables: \describe{ \item{field1}{a factor with levels \code{c("000", "001", "010", "011", "100", "101", "110", "111")}, the first sound field in a comparison} \item{field2}{a factor with the same levels as \code{field1}; the second sound field in a comparison} \item{win1}{integer, the number of times that \code{field1} was preferred to \code{field2}} \item{tie}{integer, the number of times that no preference was expressed when comparing \code{field1} and \code{field2}} \item{win2}{integer, the number of times that \code{field2} was preferred to \code{field1}} \item{win1.adj}{numeric, equal to \code{win1 + tie/2}} \item{win2.adj}{numeric, equal to \code{win2 + tie/2}} \item{instrument}{a factor with 3 levels, \code{c("cello", "flute", "violin")}} } The \code{sound.fields$design} data frame has 8 observations (one for each of the sound fields compared in the experiment) on the following 3 variables: \describe{ \item{a")}{a factor with levels \code{c("0", "1")}, the \emph{direct sound} factor (0 for \emph{obstructed sight line}, 1 for \emph{free sight line}); contrasts are sum contrasts} \item{b}{a factor with levels \code{c("0", "1")}, the \emph{reflection} factor (0 for \emph{-26dB}, 1 for \emph{-20dB}); contrasts are sum contrasts} \item{c}{a factor with levels \code{c("0", "1")}, the \emph{reverberation} factor (0 for \emph{-24dB}, 1 for \emph{-20dB}); contrasts are sum contrasts} }} \source{ Kousgaard, N. (1984) Analysis of a Sound Field Experiment by a Model for Paired Comparisons with Explanatory Variables. \emph{Scandinavian Journal of Statistics} \strong{11}, 51--57. } \usage{ sound.fields } \description{ The results of a series of factorial subjective room acoustic experiments carried out at the Technical University of Denmark by A C Gade. } \details{ The variables \code{win1.adj} and \code{win2.adj} are provided in order to allow a simple way of handling ties (in which a tie counts as half a win and half a loss), which is slightly different numerically from the Davidson (1970) method that is used by Kousgaard (1984): see the examples. } \examples{ ## ## Fit the Bradley-Terry model to data for flutes, using the simple 'add 0.5' ## method to handle ties: ## flutes.model <- BTm(cbind(win1.adj, win2.adj), field1, field2, ~ field, id = "field", subset = (instrument == "flute"), data = sound.fields) ## ## This agrees (after re-scaling) quite closely with the estimates given ## in Table 3 of Kousgaard (1984): ## table3.flutes <- c(-0.581, -1.039, 0.347, 0.205, 0.276, 0.347, 0.311, 0.135) plot(c(0, coef(flutes.model)), table3.flutes) abline(lm(table3.flutes ~ c(0, coef(flutes.model)))) ## ## Now re-parameterise that model in terms of the factorial effects, as ## in Table 5 of Kousgaard (1984): ## flutes.model.reparam <- update(flutes.model, formula = ~ a[field] * b[field] * c[field] ) table5.flutes <- c(.267, .250, -.088, -.294, .062, .009, -0.070) plot(coef(flutes.model.reparam), table5.flutes) abline(lm(table5.flutes ~ coef(flutes.model.reparam))) } \references{ Davidson, R. R. (1970) Extending the Bradley-Terry model to accommodate ties in paired comparison experiments. \emph{Journal of the American Statistical Association} \strong{65}, 317--328. } \author{ David Firth } \keyword{datasets} BradleyTerry2/man/BTabilities.Rd0000744000176200001440000000615713152515662016221 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/BTabilities.R \name{BTabilities} \alias{BTabilities} \alias{print.BTabilities} \alias{coef.BTabilities} \alias{vcov.BTabilities} \title{Estimated Abilities from a Bradley-Terry Model} \usage{ BTabilities(model) } \arguments{ \item{model}{a model object for which \code{inherits(model, "BTm")} is \code{TRUE}} } \value{ A two-column numeric matrix of class \code{c("BTabilities", "matrix")}, with columns named \code{"ability"} and \code{"se"}; has one row for each player; has attributes named \code{"vcov"}, \code{"modelcall"}, \code{"factorname"} and (sometimes --- see below) \code{"separate"}. The first three attributes are not printed by the method \code{print.BTabilities}. } \description{ Computes the (baseline) ability of each player from a model object of class \code{"BTm"}. } \details{ The player abilities are either directly estimated by the model, in which case the appropriate parameter estimates are returned, otherwise the abilities are computed from the terms of the fitted model that involve player covariates only (those indexed by \code{model$id} in the model formula). Thus parameters in any other terms are assumed to be zero. If one player has been set as the reference, then \code{predict.BTm()} can be used to obtain ability estimates with non-player covariates set to other values, see examples for \code{\link[=predict.BTm]{predict.BTm()}}. If the abilities are structured according to a linear predictor, and if there are player covariates with missing values, the abilities for the corresponding players are estimated as separate parameters. In this event the resultant matrix has an attribute, named \code{"separate"}, which identifies those players whose ability was estimated separately. For an example, see \code{\link[=flatlizards]{flatlizards()}}. } \examples{ ### citations example ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## Fit the "standard" Bradley-Terry model citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) BTabilities(citeModel) ### baseball example data(baseball) # start with baseball data as provided by package ## Fit mode with home advantage baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- BTm(cbind(home.wins, away.wins), home.team, away.team, formula = ~ team + at.home, id = "team", data = baseball) ## Estimate abilities for each team, relative to Baltimore, when ## playing away from home: BTabilities(baseballModel2) } \references{ Firth, D. (2005) Bradley-Terry models in R. \emph{Journal of Statistical Software}, \strong{12}(1), 1--12. Turner, H. and Firth, D. (2012) Bradley-Terry models in R: The BradleyTerry2 package. \emph{Journal of Statistical Software}, \strong{48}(9), 1--21. } \seealso{ \code{\link[=BTm]{BTm()}}, \code{\link[=residuals.BTm]{residuals.BTm()}} } \author{ David Firth and Heather Turner } \keyword{models} BradleyTerry2/man/glmmPQL.control.Rd0000744000176200001440000000441313152515662017007 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/glmmPQL.control.R \name{glmmPQL.control} \alias{glmmPQL.control} \title{Control Aspects of the glmmPQL Algorithm} \usage{ glmmPQL.control(maxiter = 50, IWLSiter = 10, tol = 1e-06, trace = FALSE) } \arguments{ \item{maxiter}{the maximum number of outer iterations.} \item{IWLSiter}{the maximum number of iterated weighted least squares iterations used to estimate the fixed effects, given the standard deviation of the random effects.} \item{tol}{the tolerance used to determine convergence in the IWLS iterations and over all (see details).} \item{trace}{logical: whether or not to print the score for the random effects variance at the end of each iteration.} } \value{ A list with the arguments as components. } \description{ Set control variables for the glmmPQL algorithm. } \details{ This function provides an interface to control the PQL algorithm used by \code{\link[=BTm]{BTm()}} for fitting Bradley Terry models with random effects. The algorithm iterates between a series of iterated weighted least squares iterations to update the fixed effects and a single Fisher scoring iteration to update the standard deviation of the random effects. Convergence of both the inner and outer iterations are judged by comparing the squared components of the relevant score vector with corresponding elements of the diagonal of the Fisher information matrix. If, for all components of the relevant score vector, the ratio is less than \code{tolerance^2}, or the corresponding diagonal element of the Fisher information matrix is less than 1e-20, iterations cease. } \examples{ ## Variation on example(flatlizards) result <- rep(1, nrow(flatlizards$contests)) ## BTm passes arguments on to glmmPQL.control() args(BTm) BTmodel <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards, tol = 1e-3, trace = TRUE) summary(BTmodel) } \references{ Breslow, N. E. and Clayton, D. G. (1993), Approximate inference in Generalized Linear Mixed Models. \emph{Journal of the American Statistical Association} \strong{88}(421), 9--25. } \seealso{ \code{\link[=glmmPQL]{glmmPQL()}}, \code{\link[=BTm]{BTm()}} } \author{ Heather Turner } \keyword{models} BradleyTerry2/man/football.Rd0000744000176200001440000000443113152515662015621 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/football.R \docType{data} \name{football} \alias{football} \title{English Premier League Football Results 2008/9 to 2012/13} \format{A data frame with 1881 observations on the following 4 variables. \describe{ \item{season}{a factor with levels \code{2008-9}, \code{2009-10}, \code{2010-11}, \code{2011-12}, \code{2012-13}} \item{home}{a factor specifying the home team, with 29 levels \code{Ars} (Arsenal), ... , \code{Wol} (Wolverhampton)} \item{away}{a factor specifying the away team, with the same levels as \code{home}.} \item{result}{a numeric vector giving the result for the home team: 1 for a win, 0 for a draw, -1 for a loss.} }} \source{ These data were downloaded from http://soccernet.espn.go.com in 2013. The site has since moved and the new site does not appear to have an equivalent source. } \usage{ football } \description{ The win/lose/draw results for five seasons of the English Premier League football results, from 2008/9 to 2012/13 } \details{ In each season, there are 20 teams, each of which plays one home game and one away game against all the other teams in the league. The results in 380 games per season. } \examples{ ### example requires gnm if (require(gnm)) { ### convert to trinomial counts football.tri <- expandCategorical(football, "result", idvar = "match") head(football.tri) ### add variable to indicate whether team playing at home football.tri$at.home <- !logical(nrow(football.tri)) ### fit Davidson model for ties ### - subset to first and last season for illustration Davidson <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri, subset = season \%in\% c("2008-9", "2012-13")) ### see ?GenDavidson for further analysis } } \references{ Davidson, R. R. (1970). On extending the Bradley-Terry model to accommodate ties in paired comparison experiments. \emph{Journal of the American Statistical Association}, \strong{65}, 317--328. } \seealso{ \code{\link[=GenDavidson]{GenDavidson()}} } \keyword{datasets} BradleyTerry2/man/predict.BTm.Rd0000744000176200001440000001765313152515662016144 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/predict.BTm.R \name{predict.BTm} \alias{predict.BTm} \title{Predict Method for Bradley-Terry Models} \usage{ \method{predict}{BTm}(object, newdata = NULL, level = ifelse(is.null(object$random), 0, 1), type = c("link", "response", "terms"), se.fit = FALSE, dispersion = NULL, terms = NULL, na.action = na.pass, ...) } \arguments{ \item{object}{a fitted object of class \code{"BTm"}} \item{newdata}{(optional) a data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used.} \item{level}{for models with random effects: an integer vector giving the level(s) at which predictions are required. Level zero corresponds to population-level predictions (fixed effects only), whilst level one corresponds to the player-level predictions (full model) which are NA for contests involving players not in the original data. By default, \code{level = 0} for a fixed effects model, \code{1} otherwise.} \item{type}{the type of prediction required. The default is on the scale of the linear predictors; the alternative \code{"response"} is on the scale of the response variable. Thus for a default Bradley-Terry model the default predictions are of log-odds (probabilities on logit scale) and \code{type = "response"`` gives the predicted probabilities. The}"terms"` option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale (fixed effects only).} \item{se.fit}{logical switch indicating if standard errors are required.} \item{dispersion}{a value for the dispersion, not used for models with random effects. If omitted, that returned by \code{summary} applied to the object is used, where applicable.} \item{terms}{with \code{type ="terms"} by default all terms are returned. A character vector specifies which terms are to be returned.} \item{na.action}{function determining what should be done with missing values in \code{newdata}. The default is to predict \code{NA}.} \item{\dots}{further arguments passed to or from other methods.} } \value{ If \code{se.fit = FALSE}, a vector or matrix of predictions. If \code{se = TRUE}, a list with components \item{fit }{Predictions} \item{se.fit }{Estimated standard errors} } \description{ Obtain predictions and optionally standard errors of those predictions from a fitted Bradley-Terry model. } \details{ If \code{newdata} is omitted the predictions are based on the data used for the fit. In that case how cases with missing values in the original fit are treated is determined by the \code{na.action} argument of that fit. If \code{na.action = na.omit} omitted cases will not appear in the residuals, whereas if \code{na.action = na.exclude} they will appear (in predictions and standard errors), with residual value \code{NA}. See also \code{napredict}. } \examples{ ## The final model in example(flatlizards) result <- rep(1, nrow(flatlizards$contests)) Whiting.model3 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) ## `new' data for contests between four of the original lizards ## factor levels must correspond to original levels, but unused levels ## can be dropped - levels must match rows of predictors newdata <- list(contests = data.frame( winner = factor(c("lizard048", "lizard060"), levels = c("lizard006", "lizard011", "lizard048", "lizard060")), loser = factor(c("lizard006", "lizard011"), levels = c("lizard006", "lizard011", "lizard048", "lizard060")) ), predictors = flatlizards$predictors[c(3, 6, 27, 33), ]) predict(Whiting.model3, level = 1, newdata = newdata) ## same as predict(Whiting.model3, level = 1)[1:2] ## introducing a new lizard newpred <- rbind(flatlizards$predictors[c(3, 6, 27), c("throat.PC1","throat.PC3", "SVL", "head.length")], c(-5, 1.5, 1, 0.1)) rownames(newpred)[4] <- "lizard059" newdata <- list(contests = data.frame( winner = factor(c("lizard048", "lizard059"), levels = c("lizard006", "lizard011", "lizard048", "lizard059")), loser = factor(c("lizard006", "lizard011"), levels = c("lizard006", "lizard011", "lizard048", "lizard059")) ), predictors = newpred) ## can only predict at population level for contest with new lizard predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata) ## predicting at specific levels of covariates ## consider a model from example(CEMS) table6.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * Paris[..] + WOR[student] * Milano[..] + WOR[student] * Barcelona[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) ## estimate abilities for a combination not seen in the original data ## same schools schools <- levels(CEMS$preferences$school1) ## new student data students <- data.frame(STUD = "other", ENG = "good", FRA = "good", SPA = "good", ITA = "good", WOR = "yes", DEG = "no", SEX = "female", stringsAsFactors = FALSE) ## set levels to be the same as original data for (i in seq_len(ncol(students))){ students[,i] <- factor(students[,i], levels(CEMS$students[,i])) } newdata <- list(preferences = data.frame(student = factor(500), # new id matching with `students[1,]` school1 = factor("London", levels = schools), school2 = factor("Paris", levels = schools)), students = students, schools = CEMS$schools) ## warning can be ignored as model specification was over-parameterized predict(table6.model, newdata = newdata) ## if treatment contrasts are use (i.e. one player is set as the reference ## category), then predicting the outcome of contests against the reference ## is equivalent to estimating abilities with specific covariate values ## add student with all values at reference levels students <- rbind(students, data.frame(STUD = "other", ENG = "good", FRA = "good", SPA = "good", ITA = "good", WOR = "no", DEG = "no", SEX = "female", stringsAsFactors = FALSE)) ## set levels to be the same as original data for (i in seq_len(ncol(students))){ students[,i] <- factor(students[,i], levels(CEMS$students[,i])) } newdata <- list(preferences = data.frame(student = factor(rep(c(500, 502), each = 6)), school1 = factor(schools, levels = schools), school2 = factor("Stockholm", levels = schools)), students = students, schools = CEMS$schools) predict(table6.model, newdata = newdata, se.fit = TRUE) ## the second set of predictions (elements 7-12) are equivalent to the output ## of BTabilities; the first set are adjust for `WOR` being equal to "yes" BTabilities(table6.model) } \seealso{ \code{\link[=predict.glm]{predict.glm()}}, \code{\link[=predict.glmmPQL]{predict.glmmPQL()}} } \author{ Heather Turner } \keyword{models}