Hello folks,
I have MiKTeX 2.7 installed on my PC at home, which works to perfection, and a 2.8 version installed on my office PC (Windows 2000), which gives me a constant headache. Virtually any time I call a non-trivial package, the file refuses to compile, even though I have configured the system to install new packages automatically. I am a mere user and am not familiar with the internal mechanism. I will provide any file needed for you to guide me through this.
Cheers,
Ofir
Re: What is wrong with my MikTex 2.8?
I'm new to MiKTeX and I was having problems with documents where MiKTeX was supposed to install packages automatically as needed. Mine would also fail to compile when it ran across one of these packages. I don't have the web link now, but I think my error was "Windows API Error 87", and the problem was apparently caused by MiKTeX somehow having two instances of itself running at the same time. The fix (or workaround) was to reboot the computer. I still occasionally have the problem, but a reboot seems to fix it.
What is wrong with my MikTex 2.8?
Thanks for taking the trouble to reply!
Unfortunately, things don't seem to be as simple for me.
Below you can find my LaTeX code.
I will attach the log shortly.
Code: Select all
\documentclass[12pt]{article}
\usepackage[cp1255]{inputenc}
\usepackage{amsmath,amssymb, amsthm}
\usepackage{ifpdf}
\usepackage{color}
\usepackage{graphicx}
\usepackage{epsfig}
\usepackage[bookmarks=false,colorlinks=true,linkcolor={blue},pdfstartview={XYZ null null 1}]{hyperref}
\usepackage{vector}
\usepackage{listings}
\usepackage{verbatim}
\usepackage{html,makeidx}
\usepackage{booktabs}
\usepackage{subfig}
\usepackage{multirow}
\usepackage{array}
\usepackage{multicol}
\DeclareMathOperator{\Sp}{Sp}
\newcommand{\spc}{\hspace{2 mm}}
\newcommand{\spcc}{\hspace{5 mm}}
\newcommand{\real}{\mathbb{R}}
\newcommand{\comp}{\mathbb{C}}
\newcommand{\nat}{\mathbb{N}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\Spc}{\hspace{10 mm}}
\newcommand{\f}{\frac}
\newcommand{\N}{\mathbb{N}}
\newcommand{\F}{\mathbb{F}}
\newcommand{\lmp}{\langle}
\newcommand{\rmp}{\rangle}
\renewcommand{\to}{\longrightarrow}
\renewcommand{\iff}{\Leftrightarrow}
\newcommand{\Aro}{\Rightarrow}
\newcommand{\aro}{\rightarrow}
\newcommand{\vo}{\{\overline{0}\}}
\newcommand{\bv}[1]{\overline{#1}}
\renewcommand{\a}{\alpha}
\renewcommand{\b}{\beta}
\newcommand{\ff}{\varphi}
\newcommand{\bpm}{\begin{bmatrix}}
\newcommand{\epm}{\end{bmatrix}}
\newcommand{\bit}{\begin{itemize}}
\newcommand{\eit}{\end{itemize}}
\newcommand{\bn}[2]{\binom{#1}{#2}}
\newcommand{\sm}[2]{\sum_{k=#1}^{#2}}
\newcommand{\lr}[1]{\left(#1\right)}
\newcommand{\str}[2]{\{^#1_#2\}}
\newcommand{\cnt}[1]{\begin{center}#1\end{center}}
\newcommand{\ld}{\ldots}
\newcommand{\ve}{\varepsilon}
\newcommand{\ub}{\uvec{\beta}}
\newcommand{\uve}{\uvec{\varepsilon}}
\newcommand{\e}{\text{\large{e}}}
\newcommand{\di}{\mathrm{d}}
\newcommand{\slfrac}[2]{\left.#1\middle/#2\right.}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\begin{document}
\begin{flushleft}
{\small
Ofir Harari \spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spc Generalized Linear Models \\
ID 036335099 \spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spcc\spc Spring Semester 2010}
\end{flushleft}
\vspace{1em}
\begin{center}
\underline{
\textbf{\Large{$\cal{Final\spcc PROJECT}$}}
}
\end{center}
\vspace{0.5em}
\begin{enumerate}
\item
A hospital planned to carry out a medical study on a large sample of patients to investigate a possible association between a certain disease D and patients' characteristics x (e.g. age, sex, smoking status, etc.). However, due to budget cuts it was decided to select a smaller sample from the original one. Let
\begin{itemize}
\item
$D_i=1$ if the $i$-th patient has the disease, $D_i=0$ otherwise
\item
$x_i$ -- the vector of covariate values for the $i$-th patient (fixed and known)
\item
$S_i=1$ if the $i$-th patient is selected to a smaller sample for the study, $S_i=0$ otherwise.
\end{itemize}
For the selected patients, a logistic regression model has been fitted:
\begin{align*}
\log\frac{P\left(D_i=1\big|x_i,S_i=1\right)}{P\left(D_i=0\big|x_i,S_i=1\right)} = a+b'x_i\spc.
\end{align*}
Were it not for the budget limitations, one would naturally be interested in fitting the logistic regression to the whole data set:
\begin{align*}
\log\frac{P\left(D_i=1\big|x_i\right)}{P\left(D_i=0\big|x_i\right)} = a^*+{b^*}'x_i\spc.
\end{align*}
\begin{enumerate}
\item
Suppose the proportion of chosen patients was the same among both groups (say, $r$). What is the connection between the coefficients in both models, i.e. between $a$, $b$ and $a^*$, $b^*$ ?
\item
Repeat the previous paragraph for the case where the proportions of selected patients are different for patients with and without the disease (say, $P\left(S_i=1\big|D_i=1\right)=r_1$, while $P\left(S_i=1\big|D_i=0\right)=r_0$).
\item
Comment on the results and draw conclusions. How will the decreased sample size affect the model fit?
\end{enumerate}
{\bf\underline{Solution}:}
\begin{enumerate}
\item
Here
\begin{align*}
P\left(D_i=1\big|x_i,S_i=1\right)& = \frac{P\left(S_i=1\big|x_i,D_i=1\right)P\left(D_i=1\big|x_i\right)}{P\left(S_i=1\big|x_i\right)} = \\[1em]
& = \frac{rP\left(D_i=1\big|x_i\right)}{r} = P\left(D_i=1\big|x_i\right)\spc.
\end{align*}
Similarly,\spc $P\left(D_i=0\big|x_i,S_i=1\right) = P\left(D_i=0\big|x_i\right)$ \spc and therefore, assuming the model is true
\begin{align*}
a+b'x_i &= \log\frac{P\left(D_i=1\big|x_i,S_i=1\right)}{P\left(D_i=0\big|x_i,S_i=1\right)} = \log\frac{P\left(D_i=1\big|x_i\right)}{P\left(D_i=0\big|x_i\right)} =\\[1em] & = a^*+{b^*}'x_i\spc,
\end{align*}
hence
\begin{align*}
X
\left[
\begin{array}{c}
a - a^*\\
b_1-b_1^*\\
\vdots\\
b_p-b_p^*
\end{array}
\right]
=\underline{0}
\end{align*}
meaning (assuming $X$ is full-rank) $a=a^*$ and $b=b^*$ .
\item
Here
\begin{align*}
P\left(D_i=1\big|x_i,S_i=1\right)& = \frac{P\left(S_i=1\big|x_i,D_i=1\right)P\left(D_i=1\big|x_i\right)}{P\left(S_i=1\big|x_i\right)} = \\[1em]
& = \frac{r_1P\left(D_i=1\big|x_i\right)}{r_0P\left(D_i=0\big|x_i\right)+r_1P\left(D_i=1\big|x_i\right)}
\end{align*}
and similarly
\begin{align*}
P\left(D_i=0\big|x_i,S_i=1\right) = \frac{r_0P\left(D_i=0\big|x_i\right)}{r_0P\left(D_i=0\big|x_i\right)+r_1P\left(D_i=1\big|x_i\right)}
\end{align*}
hence
\begin{align*}
a+b'x_i &= \log\frac{P\left(D_i=1\big|x_i,S_i=1\right)}{P\left(D_i=0\big|x_i,S_i=1\right)} = \log\frac{r_1P\left(D_i=1\big|x_i\right)}{r_0P\left(D_i=0\big|x_i\right)} =\\[1em]&= \log\frac{r_1}{r_0} + \log\frac{P\left(D_i=1\big|x_i\right)}{P\left(D_i=0\big|x_i\right)} = \log\frac{r_1}{r_0} + a^*+{b^*}'x_i\spc,
\end{align*}
and thus
\begin{align*}
X
\left[
\begin{array}{c}
a - a^* - \log\dfrac{r_1}{r_0}\\
b_1-b_1^*\\
\vdots\\
b_p-b_p^*
\end{array}
\right]
=\underline{0}\spc,
\end{align*}
and again, going by the assumption that $X$ is full-rank, we have
\begin{align*}
a=a^*+\log\dfrac{r_1}{r_0}\spcc \text{and}\spcc b=b^* \spc.
\end{align*}
\item
From the previous paragraphs it is clear that if our objective is to identify the major factors affecting the probability of becoming diseased, sampling makes no difference. The same goes for the odds ratio, seeing as the intercepts cancel each other out.
If, however, we aim to predict the probability for a particular subject, uneven sampling would result in larger estimates in favor of the over-sampled group.
Figure \ref{1.Simulation} shows the results of a simulation we performed, which contained an independent variable $x$ ($500$ draws from a $\mathrm{U}(0,1)$ distribution) and a dependent variable
\begin{align*}
D\sim \text{{\large $\mathrm{Binom}$}}\left(1,\dfrac{\exp\left\{0.3x-0.5\right\}}{1+\exp\left\{0.3x-0.5\right\}}\right)\spcc.
\end{align*}
The logistic model is then fitted to the complete data set, and thereafter refitted to the partial data, first with $r_0=r_1=0.8$ and then with $r_0 = 0.9$ and $r_1 = 0.65$.
In Figure \ref{1.Simulation} one can see how $\hat{\beta}_1 - \hat{\beta}_1^*$ is centered at $0$ regardless of the sampling ratios, while the uneven sampling shifts $\hat{\beta}_0 - \hat{\beta}_0^*$ by about $\log\dfrac{0.9}{0.65} = 0.325$.
The simulation code follows:
{\scriptsize
\begin{lstlisting}
n <- 500
X <- runif(n)
D <- rep(0,n)
p <- exp(0.3*X - 0.5)/(1+exp(0.3*X-0.5))
for(i in 1:n)
{
D[i] <- rbinom(1,1,p[i])
}
Coeff.Compar <- function(k,r0,r1)
{
  d0 <- rep(0,k)
  d1 <- rep(0,k)
  # fit on the full data (the "true" coefficients; identical in every replicate)
  true <- glm(D~X, family=binomial)
  beta.true <- true$coeff
  for(j in 1:k)
  {
    # select a fraction r0 of the controls (D==0)
    rand <- 1:length(D[D==0])
    index <- sample(rand, round(length(D[D==0])*r0))
    Select <- rep(0,length(D[D==0]))
    Select[index] <- 1
    samp0 <- data.frame(cbind(X=X[D==0][Select==1],
                              D=D[D==0][Select==1]))
    names(samp0) <- c("X","D")
    # select a fraction r1 of the cases (D==1)
    rand <- 1:length(D[D==1])
    index <- sample(rand, round(length(D[D==1])*r1))
    Select <- rep(0, length(D[D==1]))
    Select[index] <- 1
    samp1 <- data.frame(cbind(X=X[D==1][Select==1],
                              D=D[D==1][Select==1]))
    names(samp1) <- c("X", "D")
    samp <- rbind(samp0,samp1)
    # refit on the subsample and store the coefficient differences
    sampled <- glm(samp$D ~ samp$X, family = binomial)
    beta.sampled <- sampled$coeff
    d0[j] <- beta.true[1] - beta.sampled[1]
    d1[j] <- beta.true[2] - beta.sampled[2]
  }
  return(list(d0 = d0, d1 = d1))   # named list, so that sim1$d0 etc. work below
}
sim1 <- Coeff.Compar(1000,0.8,0.8)
sim2 <- Coeff.Compar(1000,0.9,0.65)
windows(record=TRUE, width=30, height=20)
par(mfrow = c(2,2))
hist(sim1$d0, main="", xlab="")
title(main=expression(beta[0]-beta[0]^"*"), cex.main=1.8,
sub=expression(r[0]==r[1]~"=0.8"),cex.sub=1.8)
hist(sim1$d1, main="", xlab="")
title(main=expression(beta[1]-beta[1]^"*"), cex.main=1.8,
sub=expression(r[0]==r[1]~"=0.8"), cex.sub=1.8)
hist(sim2$d0, main="", xlab="")
title(main=expression(beta[0]-beta[0]^"*"), cex.main=1.8,
sub=expression(r[0]==0.9~" , "~r[1]==0.65), cex.sub=1.8)
hist(sim2$d1, main="", xlab="")
title(main=expression(beta[1]-beta[1]^"*"), cex.main=1.8,
sub=expression(r[0]==0.9~" , "~r[1]==0.65), cex.sub=1.8)
\end{lstlisting}
}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{1.Simulation.eps}
\end{center}
\caption{Simulation results for $\beta_0 - \beta_0^*$ and $\beta_1 - \beta_1^*$ both when $r_0=r_1$ and when $r_0\ne r_1$}
\label{1.Simulation}
\end{figure}
\clearpage
\end{enumerate}
\item
The data below are the number of cases of lung cancer and the number of ``man-years at risk'' in a very large British study of smoking among men and its effect on lung cancer. The table is classified by number of years of smoking in five-year intervals, beginning at $15$-$19$ and going up to $55$-$59$, and equivalent number of cigarettes smoked per day, in intervals as shown in the table below. The data are in the form $r/n$, where $r$ is the number of lung cancer cases and $n$ the number of men at risk.
\begin{center}
{\footnotesize
\begin{tabular}{lccccccc}
&Years smoking &\multicolumn{6}{c}{Cigs/day} \\
\midrule
& &$1$-$9$ &$10$-$14$ &$15$-$19$ &$20$-$24$ &$25$-$34$ &$35+$\\
\midrule
&$15$-$19$ &0/3121 &0/3577 &0/4317 &0/5683 &0/3042 &0/670\\
\midrule
&$20$-$24$ &0/2937 &1/3286 &0/4214 &1/6385 &1/4050 &0/1166\\
\midrule
&$25$-$29$ &0/2288 &1/2546 &0/3185 &1/5483 &4/4290 &0/1482\\
\midrule
&$30$-$34$ &0/2015 &1/2219 &4/2560 &6/4687 &9/4268 &4/1580\\
\midrule
&$35$-$39$ &1/1648 &0/1826 &0/1893 &5/3646 &9/3529 &6/1336\\
\midrule
&$40$-$44$ &2/1310 &1/1886 &2/1334 &12/2411 &11/2424 &10/924\\
\midrule
&$45$-$49$ &0/927 &2/988 &2/849 &9/1567 &10/1409 &7/556\\
\midrule
&$50$-$54$ &3/710 &4/684 &2/470 &7/857 &5/663 &4/255\\
\midrule
&$55$-$59$ &0/606 &3/449 &5/280 &7/416 &3/284 &1/104\\
\midrule
\end{tabular}
}
\end{center}
\begin{enumerate}
\item
For these data, find a well-fitting parsimonious model relating the proportion suffering from lung cancer to smoking rate and years of smoking. Give the interpretation of your model in terms of the risk of developing lung cancer.
\item
What are the chances of developing lung cancer for a man smoking 20 cigarettes per day for the last 40 years? (give a pointwise estimate and the corresponding confidence interval).
\end{enumerate}
{\bf\underline{Solution}:}
\begin{enumerate}
\item
All model selection methods lead to the Main Effect binomial model:
\begin{align*}
\log\frac{p_i}{1-p_i} = \beta_0 + \beta_{1,\ldots,9}\text{YearsSmok}_i + \beta_{10,\ldots,15}\text{CigsPerDay}_i\spc.
\end{align*}
The residuals of this fit can be seen in Figure \ref{2.ME.Resid}, and the ANOVA table is
{\scriptsize
\begin{lstlisting}
Df Deviance Resid. Df Resid. Dev P(>|Chi|)
NULL 53 358.77
Years_Smk 8 258.78 45 99.99 2.372e-51
Cigs_per_Day 5 60.53 40 39.46 9.448e-12
\end{lstlisting}
}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{2.ME.Resid.eps}
\end{center}
\caption{Residuals for the Main-Effect Model}
\label{2.ME.Resid}
\end{figure}
and the goodness-of-fit test yields $\text{p-value} = 0.494$ for $D=39.465$ on $\mathrm{df} = 40$. All in all a lovely fit, except for its lack of interpretability and the statistical insignificance of most of the coefficients of the individual levels as stand-alone variables.\\[0.5em]
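For reference, a minimal sketch of how this fit and its deviance table can be reproduced in R (the variable names are the ones appearing in the model summaries further below):
{\scriptsize
\begin{lstlisting}
# a sketch of the main-effects fit and its deviance table
fit.ME <- glm(cbind(Cases, At_Risk - Cases) ~ Years_Smk + Cigs_per_Day,
              family = binomial)
anova(fit.ME, test = "Chisq")                        # sequential deviance table
1 - pchisq(deviance(fit.ME), df.residual(fit.ME))    # goodness-of-fit p-value
\end{lstlisting}
}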
To take care of these issues, we proceed to treat both independent variables as continuous. This is done by
{\scriptsize
\begin{lstlisting}
Yrs_Smoke <- Years_Smk
levels(Yrs_Smoke) <- c("0","1","2","3","4","5","6","7","8")
Yrs_Smk <- as.numeric(as.character(Yrs_Smoke))
Cigs_Day <- Cigs_per_Day
levels(Cigs_Day) <- c("0","1","2","3","4","5")
Cig_Day <- as.numeric(as.character(Cigs_Day))
\end{lstlisting}
}
and we can therefore fit
\begin{align*}
\log\frac{p_i}{1-p_i} = \beta_0 + \beta_1\text{YrsSmk}_i + \beta_2\text{CigDay}_i\spc.
\end{align*}
This idea is motivated by Figure \ref{2.Cont.Logits}, where both continuous predictors are shown against $p$ in the log-scale.
The results of this fit can be seen in
{\scriptsize
\begin{lstlisting}
Call:
glm(formula = cbind(Cases, At_Risk - Cases) ~ Yrs_Smk + Cig_Day,
family = binomial)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.03960 0.29838 -33.647 < 2e-16 ***
Yrs_Smk 0.56570 0.03847 14.704 < 2e-16 ***
Cig_Day 0.43653 0.05764 7.573 3.64e-14 ***
Null deviance: 358.771 on 53 degrees of freedom
Residual deviance: 59.035 on 51 degrees of freedom
\end{lstlisting}
}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{2.Cont.Logits.eps}
\end{center}
\caption{Years of Smoking and Cigarettes per Day in the logit scale}
\label{2.Cont.Logits}
\end{figure}
Comparing the main-effect model with the current one, we have
{\scriptsize
\begin{lstlisting}
Model 1: cbind(Cases, At_Risk-Cases)~Yrs_Smk+Cig_Day
Model 2: cbind(Cases, At_Risk-Cases)~Years_Smk+Cigs_per_Day
Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1 51 59.035
2 40 39.465 11 19.570 0.052
\end{lstlisting}
}
which is borderline significant, but we will give ourselves the benefit of the doubt, seeing as the goodness of fit is adequate, with $\text{p-value} = 0.205$ for $D=59.035$ on $\mathrm{df} = 51$.\\[0.5em]
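For completeness, a sketch of how this comparison can be reproduced, reusing the main-effect fit from the earlier sketch and the recoded variables defined above:
{\scriptsize
\begin{lstlisting}
fit.cont <- glm(cbind(Cases, At_Risk - Cases) ~ Yrs_Smk + Cig_Day,
                family = binomial)
anova(fit.cont, fit.ME, test = "Chisq")    # continuous vs. main-effect model
\end{lstlisting}
}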
Taking it one step further, we would like to test
\begin{align*}
{\cal{H}}:\beta_0 = -10\spc;\spc \beta_1 = \beta_2 = \frac{1}{2}\spcc.
\end{align*}
We do that by denoting \spc$\mathrm{Score}_i = \mathrm{YrsSmk}_i + \mathrm{CigDay}_i$\spc and comparing the deterministic model
\begin{align*}
\log\frac{p_i}{1-p_i} = \frac{1}{2}\text{Score}_i - 10
\end{align*}
to the previous model. Doing so, we learn that
{\scriptsize
\begin{lstlisting}
Model 1: cbind(Cases, At_Risk-Cases)~offset(-10+0.5*score)-1
Model 2: cbind(Cases, At_Risk-Cases)~Yrs_Smk+Cig_Day
Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1 54 64.465
2 51 59.035 3 5.430 0.143
\end{lstlisting}
}
and the goodness-of-fit test also yields $\text{p-value} = 0.156$ with $\mathrm{df}=54$ and $D=64.46$, and so we are quite happy with the model we ended up with.\\[0.5em]
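A sketch of how the offset model can be fitted and compared (the formula is the one shown in the output above):
{\scriptsize
\begin{lstlisting}
score <- Yrs_Smk + Cig_Day
fit.score <- glm(cbind(Cases, At_Risk - Cases) ~ offset(-10 + 0.5*score) - 1,
                 family = binomial)
anova(fit.score, fit.cont, test = "Chisq")
1 - pchisq(deviance(fit.score), df.residual(fit.score))   # goodness of fit
\end{lstlisting}
}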
Figure \ref{2.Score.Logit} displays the newly-introduced Score variable against the logit of the empirical probabilities. This hints that opting for a univariate Score model cannot be completely ruled out, although it is far from perfect, as implied by the residuals plotted in Figure \ref{2.Score.Resid}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.65\textwidth]{2.Score.Logit.eps}
\end{center}
\caption{The Score variable in the logit scale}
\label{2.Score.Logit}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\textwidth]{2.Score.Resid.eps}
\end{center}
\caption{Residuals for the Score model}
\label{2.Score.Resid}
\end{figure}
The Score model is very easy to interpret if we consider years of smoking and smoking rate as contributing equally to the risk of lung cancer. Taking $1/(1+\e^{10})$ to be the probability that a non-smoker gets lung cancer, the odds are multiplied by $\e^{0.5}$ whenever one climbs one step up the scale in either factor.
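Explicitly, moving one step up the Score scale multiplies the fitted odds by
\begin{align*}
\frac{\exp\left\{\frac{1}{2}(s+1)-10\right\}}{\exp\left\{\frac{1}{2}s-10\right\}} = \e^{0.5}\approx 1.65\spc.
\end{align*}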
\item
As aesthetically pleasing as our model is, it contains no stochastic elements, since both coefficients were fixed at the end of the previous paragraph. We therefore accept the Score model as a qualitative instrument, giving us a clear insight into the nature of the influence of smoking on lung cancer, but retreat to the main-effect model for predictions, confidence intervals etc.\\[0.5em]
Using R's \textit{predict.glm($\cdot$)} function we can obtain, for example, all point estimates and $95\%$ confidence intervals for men who have been smoking for $40$-$44$ years:
{\footnotesize
\begin{lstlisting}
Cig_Day score P.Obs P.Exp P.Low P.Up
1-9 5 0.00153 0.00079 0.00033 0.00186
10-14 6 0.00053 0.00177 0.00098 0.00321
15-19 7 0.00150 0.00233 0.00130 0.00417
20-24 8 0.00498 0.00420 0.00281 0.00627
25-34 9 0.00454 0.00519 0.00353 0.00763
35+ 10 0.01082 0.00845 0.00544 0.01311
\end{lstlisting}
}
Therefore, for a $20$-cigarettes-a-day smoker who has had this bad habit for the last $40$ years, the point estimate for the probability of having lung cancer is $0.42\%$, with a $95\%$ confidence interval of $(0.28\%,0.63\%)$.
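A sketch of how such interval estimates can be produced with \textit{predict.glm($\cdot$)} on the main-effect fit; the factor level labels and the choice to build the intervals on the logit scale are assumptions about the original computation:
{\scriptsize
\begin{lstlisting}
newdat <- data.frame(Years_Smk = "40-44",              # assumed level label
                     Cigs_per_Day = levels(Cigs_per_Day))
pr <- predict(fit.ME, newdata = newdat, type = "link", se.fit = TRUE)
P.Exp <- plogis(pr$fit)                                # point estimates
P.Low <- plogis(pr$fit - qnorm(0.975)*pr$se.fit)       # lower 95% bound
P.Up  <- plogis(pr$fit + qnorm(0.975)*pr$se.fit)       # upper 95% bound
\end{lstlisting}
}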
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{2.ME.Predict.eps}
\end{center}
\caption{Confidence bands for the probability for lung cancer conditioned on each of the independent variables in the Main-Effect model}
\label{2.ME.Predict}
\end{figure}
An illustration of the point estimates and the confidence bands for the relevant sector, projected on both predictors, is displayed in Figure \ref{2.ME.Predict}.
\begin{comment}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{2.Exp.Obs.eps}
\end{center}
\caption{Predictions vs. empirical probabilities along Score for the Main-Effect and final Score model}
\label{2.Exp.Obs}
\end{figure}
\end{comment}
\end{enumerate}
\clearpage
\item
In a study, male and female drivers were interviewed about the importance to them of various vehicle safety features when buying a car. The table below shows the ratings for air conditioning according to the sex and age of the driver.
\begin{center}
{\footnotesize
\begin{tabular}{lccccc}
&Sex &Age &No or little Importance &Important &Very Important\\
\midrule
&\multirow{3}{*}{Women} &18-23 &$26$ &$12$ &$7$\\
& &24-40 &9 &21 &15\\
& &$>40$ &5 &14 &41\\
\midrule
&\multirow{3}{*}{Men} &18-23 &40 &17 &8\\
& &24-40 &17 &15 &12\\
& &$>40$ &8 &15 &18\\
\midrule
&Total & &105 &94 &101\\
\midrule
\end{tabular}
}
\end{center}
\begin{enumerate}
\item
Look at the data and try to make some preliminary conclusions (conjectures?).
\item
Fit an appropriate model for these data. Do the ratings change with age similarly in both sex groups? Does sex have any influence at all? Can you say that the ratings do not change with age?
\item
In fact, the response variable for these data is an \textit{ordinal} categorical variable. Exploit this fact and fit the corresponding proportional odds logistic model. Is it an adequate model for the data?
\item
Return to all the questions from the second paragraph.
\item
Compare the results from both models. In particular, compare the estimated probabilities with the observed proportions. Make final conclusions and comment on the results of the study.
\end{enumerate}
{\bf\underline{Solution}:}
\begin{enumerate}
\item
Looking at Figure \ref{3.Prob.Crude}, it is quite clear that the rating probabilities (for males and females alike) follow different paths across the age groups, suggesting we cannot bin together any two groups. It is also of note that the trend in the ``Important'' group is different for men and women, meaning the ``Sex'' factor is likely to be statistically significant.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\textwidth]{3.Prob.Crude.eps}
\end{center}
\caption{Early examination of the data}
\label{3.Prob.Crude}
\end{figure}
\item
Fitting the multinomial model
\begin{align*}
\log\left(\frac{P_{{ji}}}{P_{{\text{NotImp},i}}}\right) = \beta_{0j} + \beta_{1j}\mathrm{Sex}_i + \beta_{2j}\mathrm{Age}_i\spc;\spc j = \text{Imp},\text{VeryImp}
\end{align*}
we have
{\scriptsize
\begin{lstlisting}
Call:
multinom(formula = cbind(No_Imp, Imp, Very_Imp) ~ Sex + Age)
Coefficients:
(Intercept) SexM Age18-23 Age24-40
Imp 0.996906 -0.3881235 -1.587705 -0.4594416
Very_Imp 1.877669 -0.8130098 -2.916737 -1.4386386
Residual Deviance: 580.7022
AIC: 596.7022
\end{lstlisting}
}
and knowing the shortcomings of R's deviance calculation in this kind of model, we use our own routine
{\scriptsize
\begin{lstlisting}
Obs <- cbind(No_Imp, Imp, Very_Imp)               # observed counts
Exp <- Total*model1$fitted.values                 # expected counts under the fit
Chi <- sum((Obs - Exp)^2/Exp)                     # Pearson chi-square
d.first <- ifelse(Obs == 0, 0, Obs*log(Obs/Exp))  # cell contributions (0*log(0) := 0)
dev <- 2*sum(d.first)                             # deviance
df <- length(Sex)*2 - model1$edf                  # residual degrees of freedom
1 - pchisq(dev, df)                               # goodness-of-fit p-value
\end{lstlisting}
}
to obtain $\text{p-value}=0.414$ for $D=3.939$ on $\mathrm{df}=4$. A visualization of the multinomial fit is displayed in Figure \ref{3.Multinom}, along with the empirical probabilities for the different groups.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{3.Multinom.eps}
\end{center}
\caption{Observed probabilities along with the fitted ones (thin solid lines) for the Multinomial model}
\label{3.Multinom}
\end{figure}
From the summary, the ``Not Important'' initial rate is higher among men than among women, whereas the opposite is true for the other groups. This pattern is maintained in the ``Not Important'' and ``Very Important'' groups. In the ``Important'' group, however, there is a reversal of trends, and older men belong to this group more often than their female counterparts (obviously, women tend to suffer heat waves midway through their $40$'s, making ``Very important'' a more obvious choice).\\[0.5em]
To make the importance of ``Sex'' official, let us compare our model with a nested one, containing no ``Sex'' factor:
{\scriptsize
\begin{lstlisting}
Likelihood ratio tests of Multinomial Models
Response: cbind(No_Imp, Imp, Very_Imp)
Model Resid.df Resid.Dev Test Df LR stat. Pr(Chi)
Age 6 587.2074
Sex + Age 4 580.7022 1 vs 2 4 26.505178 0.03867396
\end{lstlisting}
}
and we see ``Sex'' is rather influential.\\[0.5em]
Repeating for ``Age'':
{\scriptsize
\begin{lstlisting}
Likelihood ratio tests of Multinomial Models
Response: cbind(No_Imp, Imp, Very_Imp)
Model Resid.df Resid.Dev Test Df LR stat. Pr(Chi)
Sex 8 646.2812
Sex + Age 4 580.7022 1 vs 2 4 65.579 1.942890e-13
\end{lstlisting}
}
\item
Fitting the Proportional Odds model
\begin{align*}
\log\frac{P\left(y\leq y_j\big|\text{Sex}_i,\text{Age}_i\right)}{P\left(y > y_j\big|\text{Sex}_i,\text{Age}_i\right)} = \beta_{0j} + \beta_{1}\mathrm{Sex}_i + \beta_{2}\mathrm{Age}_i
\end{align*}
we have Figure \ref{3.Prop.Odds} and
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{3.Prop.Odds.eps}
\end{center}
\caption{Observed probabilities along with the fitted ones (thin solid lines) for the Proportional Odds model}
\label{3.Prop.Odds}
\end{figure}
{\scriptsize
\begin{lstlisting}
Call:
polr(formula = factor(Rating) ~ Age1 + Sex1)
Coefficients:
Value Std. Error t value
Age1 1.1164159 0.1457415 7.660246
Sex1 -0.5769928 0.2261076 -2.551850
Intercepts:
Value Std. Error t value
A|B 0.5725 0.4662 1.2281
B|C 2.1838 0.4862 4.4920
Residual Deviance: 581.3124
AIC: 589.3124
\end{lstlisting}
}
The true deviance for this model is $D=4.549$ on $\mathrm{df}=12$, giving $\text{p-value}=0.97$ for a terrific goodness of fit.
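For reference, a sketch of how this model and the likelihood-ratio tests below can be produced, assuming the $6\times 3$ table has been expanded to one row per respondent ($300$ in total) with numeric scores Age1 and Sex1 (the exact codings are an assumption):
{\scriptsize
\begin{lstlisting}
library(MASS)                                   # for polr()
model2  <- polr(factor(Rating) ~ Age1 + Sex1)   # proportional odds fit
model2a <- polr(factor(Rating) ~ Age1)          # nested model without Sex1
anova(model2a, model2)                          # LR test for Sex1
\end{lstlisting}
}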
\item
From the t-values in the summary table of the proportional-odds model it looks like both factors are significant again. To be on the safe side, we check this again by
{\scriptsize
\begin{lstlisting}
Likelihood ratio tests of ordinal regression models
Response: factor(Rating)
Model Resid.df Resid.Dev Test Df LR stat. Pr(Chi)
Age1 297 587.8542
Age1 + Sex1 296 581.3124 1 vs 2 1 6.541726 0.01053730
\end{lstlisting}
}
and
{\scriptsize
\begin{lstlisting}
Model Resid.df Resid.Dev Test Df LR stat. Pr(Chi)
Sex1 297 646.2848
Age1 + Sex1 296 581.3124 1 vs 2 1 64.97237 7.771561e-16
\end{lstlisting}
}
and we are left with the exact same conclusions.
\item
Judging from Figures \ref{3.Multinom} and \ref{3.Prop.Odds}, it is hard to tell which model is better: the multinomial fit seems to suit the female group better, while the proportional odds model is perhaps slightly better for the males. We give the edge to the proportional odds model for its simplicity (fewer parameters to estimate) and exceptional goodness of fit. The increase with age in the proportion of men in the ``Important'' group is not reflected in either model, but the proportional-odds model seems to do this phenomenon a little more justice.
\end{enumerate}
\clearpage
\item
The data set {\color{blue}{\textit{kyphosis}}} is available in the library \textit{rpart} and contains data on $81$ children who have had corrective spinal surgery. The binary outcome \textit{Kyphosis} indicates the presence or absence of a postoperative deformity called kyphosis. The three explanatory variables are age in months (\textit{Age}), the number of vertebrae involved (\textit{Number}) and the number of the first (topmost) vertebra operated on (\textit{Start}).
\begin{enumerate}
\item
Fit the logit main effects model. Does it seem adequate to the data? What can you suggest to improve the model?
\item
Add quadratic terms, try to add interactions (if necessary) and remove insignificant terms. Are all the explanatory variables significant? Comment on the resulting model and compare it with the main effects one.
\item
Fit the nonparametric additive model (gam) using the explanatory variables you have found significant in the previous steps. Plot the resulting curves and comment on the results.
\item
Make final (?) conclusions.
\end{enumerate}
{\bf\underline{Solution}:}
\begin{enumerate}
\item
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{4.ME.Resid.eps}
\end{center}
\caption{Residuals for the Main-Effect logit binary model}
\label{4.ME.Resid}
\end{figure}
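A sketch of the main-effects logit fit whose residuals are shown above (the kyphosis data frame and its variables come with the \textit{rpart} library mentioned in the problem):
{\scriptsize
\begin{lstlisting}
library(rpart)                     # provides the kyphosis data frame
fit.kyph <- glm(Kyphosis ~ Age + Number + Start,
                family = binomial, data = kyphosis)
summary(fit.kyph)
\end{lstlisting}
}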
\item
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{4.Step.Resid.eps}
\end{center}
\caption{Residuals for the logit binary model obtained by applying Stepwise Regression}
\label{4.Step.Resid}
\end{figure}
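A sketch of one way the stepwise search behind the figure above might be set up; the exact scope of quadratic and interaction terms is an assumption:
{\scriptsize
\begin{lstlisting}
fit.full <- glm(Kyphosis ~ (Age + Number + Start)^2
                + I(Age^2) + I(Number^2) + I(Start^2),
                family = binomial, data = kyphosis)
fit.step <- step(fit.full, direction = "both")   # AIC-based stepwise selection
summary(fit.step)
\end{lstlisting}
}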
\item
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.83\textwidth]{4.Spline.eps}
\end{center}
\caption{Plots for the general additive nonparametric model}
\label{4.Spline}
\end{figure}
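A sketch of a nonparametric additive fit like the one plotted above, here using gam() from the mgcv library (whether mgcv or the gam library was used originally, and which terms were kept, is not shown here):
{\scriptsize
\begin{lstlisting}
library(mgcv)
fit.gam <- gam(Kyphosis ~ s(Age) + s(Number) + s(Start),
               family = binomial, data = kyphosis)
plot(fit.gam, pages = 1)           # the fitted smooth curves
\end{lstlisting}
}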
\item
\end{enumerate}
\item
In an experiment to investigate the social behavior of hornets, different numbers of hornets were placed in boxes, and the number of cells built by the hornets was counted. Below are given the data from $38$ boxes: the number of cells built for each number of hornets.
\begin{center}
{\small
\begin{tabular}{lcl}
&$\#$ Hornets &\makebox[1.5in]{$\#$ Cells}\\
\midrule
&1 &0, 1, 2, 2, 4, 4, 5, 10, 11, 18\\
&2 &0, 4, 5, 7, 8, 13, 18, 29\\
&5 &7, 8, 17, 18, 19\\
&6 &17\\
&10 &12, 17, 18, 23, 25, 32\\
&16 &12\\
&19 &23\\
&20 &21, 23, 30, 31\\
&41 &30
\end{tabular}
}
\end{center}
\begin{enumerate}
\item
Assuming a normal model with constant variance, find the appropriate transformation of $\#\text{Cells}$ using $\#\text{Hornets}$ or $\log(\#\text{Hornets})$ as explanatory variable. Analyse the results of fitting and point out problems you have found (if any).
\item
Consider the previous model but allow heterogeneity of variance, assuming that it is also a function of $\#\text{Hornets}$ or $\log(\#\text{Hornets})$ respectively. Compare your final model with that from the previous paragraph. Is the assumption of equal variances reasonable?
\item
Assume the Poisson model for $\#\text{Cells}$ and fit the corresponding regression model for $\#\text{Hornets}$ or $\log(\#\text{Hornets})$. Comment on the fit of the Poisson model, and compare the results with those from previous paragraphs. Is there overdispersion? If ``yes'', modify your original Poisson model. Make final conclusions.
\end{enumerate}
{\bf\underline{Solution}:}
\begin{enumerate}
\item
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\textwidth]{5.Initial.eps}
\end{center}
\caption{Early examination of the data}
\label{5.Initial}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.Boxcox.eps}
\end{center}
\caption{Box-Cox log-likelihood plots for both No. Hornets and $\log(\text{No. Hornets})$ as independent variables}
\label{5.Boxcox}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.Lin.Resid.eps}
\end{center}
\caption{Residuals for the homogeneous variance model with No. Hornets as the predictor}
\label{5.Lin.Resid}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.log.Resid.eps}
\end{center}
\caption{Residuals for the homogeneous variance model with $\log(\text{No. Hornets})$ as the predictor}
\label{5.log.Resid}
\end{figure}
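A sketch of the Box-Cox search behind the plots above, assuming the table has been entered into vectors hornets and cells (one entry per box); the $+1$ offset only deals with the zero counts and is an assumption:
{\scriptsize
\begin{lstlisting}
library(MASS)                          # for boxcox()
horn <- data.frame(hornets, cells)     # one row per box
boxcox(lm(cells + 1 ~ hornets,      data = horn))   # No. Hornets as predictor
boxcox(lm(cells + 1 ~ log(hornets), data = horn))   # log(No. Hornets) as predictor
\end{lstlisting}
}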
\item
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.Lin.Hetero.Resid.eps}
\end{center}
\caption{Residuals for the heterogeneous variance model with $\text{No. Hornets}$ as the predictor}
\label{5.Lin.Hetero.Resid}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.log.Hetero.Resid.eps}
\end{center}
\caption{Residuals for the heterogeneous variance model with $\log(\text{No. Hornets})$ as the predictor}
\label{5.log.Hetero.Resid}
\end{figure}
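A sketch of a heterogeneous-variance fit like the ones whose residuals are shown above, using gls() from the nlme library; the square-root response is only a placeholder for whatever transformation part (a) suggests:
{\scriptsize
\begin{lstlisting}
library(nlme)
fit.hom <- gls(sqrt(cells) ~ log(hornets), data = horn, method = "ML")
fit.het <- gls(sqrt(cells) ~ log(hornets), data = horn, method = "ML",
               weights = varPower(form = ~ hornets))  # variance grows with No. Hornets
anova(fit.hom, fit.het)                # is the extra variance parameter worthwhile?
\end{lstlisting}
}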
\item
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.Poisson.Resid.eps}
\end{center}
\caption{Residuals for the Poisson model, both for No. Hornets and $\log(\text{No. Hornets})$ as the predictor}
\label{5.Poisson.Resid}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.NB.Resid.eps}
\end{center}
\caption{Residuals for the Negative Binomial model, both for No. Hornets and $\log(\text{No. Hornets})$ as the predictor}
\label{5.NB.Resid}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\textwidth]{5.Overdisp.eps}
\end{center}
\caption{Within-cell variance vs. within-cell mean. Dashed line: no overdispersion. Solid line: de-facto estimates.}
\label{5.Overdisp.Resid}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.Quasi.Resid.eps}
\end{center}
\caption{Residuals for the Quasi-Likelihood model, both for No. Hornets and $\log(\text{No. Hornets})$ as the predictor}
\label{5.Quasi.Resid}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{5.Curve.eps}
\end{center}
\caption{$\lambda$ estimates according to the heterogeneous-variance linear model, Poisson model and Quasi-Likelihood model, all in the log scale.}
\label{5.Curve}
\end{figure}
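A sketch of the Poisson, Negative Binomial and Quasi-Likelihood fits referred to in the captions above, together with a crude overdispersion check (reusing the data frame from the earlier sketch):
{\scriptsize
\begin{lstlisting}
fit.pois  <- glm(cells ~ log(hornets), family = poisson, data = horn)
deviance(fit.pois)/df.residual(fit.pois)   # values well above 1 hint at overdispersion
library(MASS)                              # for glm.nb()
fit.nb    <- glm.nb(cells ~ log(hornets), data = horn)
fit.quasi <- glm(cells ~ log(hornets), family = quasipoisson, data = horn)
\end{lstlisting}
}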
\end{enumerate}
\end{enumerate}
\end{document}
What is wrong with my MikTex 2.8?
There we go.
I will appreciate any kind of help.
At the moment the file does not even compile, and no warnings or errors are to be found.
Code: Select all
This is pdfTeX, Version 3.1415926-1.40.10 (MiKTeX 2.8) (preloaded format=latex 2010.6.20) 20 JUN 2010 14:35
entering extended mode
**Exam.tex
("C:\Documents and Settings\Ofir\My Documents\Courses\Generalized Linear Models
\Final Project\Exam.tex"
LaTeX2e <2009/09/24>
Babel <v3.8l> and hyphenation patterns for english, dumylang, nohyphenation, ge
rman, ngerman, german-x-2009-06-19, ngerman-x-2009-06-19, french, loaded.
("C:\Program Files\MiKTeX 2.8\tex\latex\base\article.cls"
Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
("C:\Program Files\MiKTeX 2.8\tex\latex\base\size12.clo"
File: size12.clo 2007/10/19 v1.4h Standard LaTeX file (size option)
)
\c@part=\count79
\c@section=\count80
\c@subsection=\count81
\c@subsubsection=\count82
\c@paragraph=\count83
\c@subparagraph=\count84
\c@figure=\count85
\c@table=\count86
\abovecaptionskip=\skip41
\belowcaptionskip=\skip42
\bibindent=\dimen102
)
("C:\Program Files\MiKTeX 2.8\tex\latex\base\inputenc.sty"
Package: inputenc 2008/03/30 v1.1d Input encoding file
\inpenc@prehook=\toks14
\inpenc@posthook=\toks15
("C:\Program Files\MiKTeX 2.8\tex\generic\babel\cp1255.def"
File: cp1255.def 2004/02/20 v1.1b Hebrew input encoding file
))
("C:\Program Files\MiKTeX 2.8\tex\latex\ams\math\amsmath.sty"
Package: amsmath 2000/07/18 v2.13 AMS math features
\@mathmargin=\skip43
For additional information on amsmath, use the `?' option.
("C:\Program Files\MiKTeX 2.8\tex\latex\ams\math\amstext.sty"
Package: amstext 2000/06/29 v2.01
("C:\Program Files\MiKTeX 2.8\tex\latex\ams\math\amsgen.sty"
File: amsgen.sty 1999/11/30 v2.0
\@emptytoks=\toks16
\ex@=\dimen103
))
("C:\Program Files\MiKTeX 2.8\tex\latex\ams\math\amsbsy.sty"
Package: amsbsy 1999/11/29 v1.2d
\pmbraise@=\dimen104
)
("C:\Program Files\MiKTeX 2.8\tex\latex\ams\math\amsopn.sty"
Package: amsopn 1999/12/14 v2.01 operator names
)
\inf@bad=\count87
LaTeX Info: Redefining \frac on input line 211.
\uproot@=\count88
\leftroot@=\count89
LaTeX Info: Redefining \overline on input line 307.
\classnum@=\count90
\DOTSCASE@=\count91
LaTeX Info: Redefining \ldots on input line 379.
LaTeX Info: Redefining \dots on input line 382.
LaTeX Info: Redefining \cdots on input line 467.
\Mathstrutbox@=\box26
\strutbox@=\box27
\big@size=\dimen105
LaTeX Font Info: Redeclaring font encoding OML on input line 567.
LaTeX Font Info: Redeclaring font encoding OMS on input line 568.
\macc@depth=\count92
\c@MaxMatrixCols=\count93
\dotsspace@=\muskip10
\c@parentequation=\count94
\dspbrk@lvl=\count95
\tag@help=\toks17
\row@=\count96
\column@=\count97
\maxfields@=\count98
\andhelp@=\toks18
\eqnshift@=\dimen106
\alignsep@=\dimen107
\tagshift@=\dimen108
\tagwidth@=\dimen109
\totwidth@=\dimen110
\lineht@=\dimen111
\@envbody=\toks19
\multlinegap=\skip44
\multlinetaggap=\skip45
\mathdisplay@stack=\toks20
LaTeX Info: Redefining \[ on input line 2666.
LaTeX Info: Redefining \] on input line 2667.
)
("C:\Program Files\MiKTeX 2.8\tex\latex\amsfonts\amssymb.sty"
Package: amssymb 2009/06/22 v3.00
("C:\Program Files\MiKTeX 2.8\tex\latex\amsfonts\amsfonts.sty"
Package: amsfonts 2009/06/22 v3.00 Basic AMSFonts support
\symAMSa=\mathgroup4
\symAMSb=\mathgroup5
LaTeX Font Info: Overwriting math alphabet `\mathfrak' in version `bold'
(Font) U/euf/m/n --> U/euf/b/n on input line 96.
))
("C:\Program Files\MiKTeX 2.8\tex\latex\ams\classes\amsthm.sty"
Package: amsthm 2004/08/06 v2.20
\thm@style=\toks21
\thm@bodyfont=\toks22
\thm@headfont=\toks23
\thm@notefont=\toks24
\thm@headpunct=\toks25
\thm@preskip=\skip46
\thm@postskip=\skip47
\thm@headsep=\skip48
\dth@everypar=\toks26
)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\ifpdf.sty"
Package: ifpdf 2010/01/28 v2.1 Provides the ifpdf switch (HO)
Package ifpdf Info: pdfTeX in pdf mode not detected.
)
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\color.sty"
Package: color 2005/11/14 v1.0j Standard LaTeX Color (DPC)
("C:\Program Files\MiKTeX 2.8\tex\latex\00miktex\color.cfg"
File: color.cfg 2007/01/18 v1.5 color configuration of teTeX/TeXLive
)
Package color Info: Driver file: dvips.def on input line 130.
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\dvips.def"
File: dvips.def 1999/02/16 v3.0i Driver-dependant file (DPC,SPQR)
)
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\dvipsnam.def"
File: dvipsnam.def 1999/02/16 v3.0i Driver-dependant file (DPC,SPQR)
))
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\graphicx.sty"
Package: graphicx 1999/02/16 v1.0f Enhanced LaTeX Graphics (DPC,SPQR)
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\keyval.sty"
Package: keyval 1999/03/16 v1.13 key=value parser (DPC)
\KV@toks@=\toks27
)
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\graphics.sty"
Package: graphics 2009/02/05 v1.0o Standard LaTeX Graphics (DPC,SPQR)
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\trig.sty"
Package: trig 1999/03/16 v1.09 sin cos tan (DPC)
)
("C:\Program Files\MiKTeX 2.8\tex\latex\00miktex\graphics.cfg"
File: graphics.cfg 2007/01/18 v1.5 graphics configuration of teTeX/TeXLive
)
Package graphics Info: Driver file: dvips.def on input line 91.
)
\Gin@req@height=\dimen112
\Gin@req@width=\dimen113
)
("C:\Program Files\MiKTeX 2.8\tex\latex\graphics\epsfig.sty"
Package: epsfig 1999/02/16 v1.7a (e)psfig emulation (SPQR)
\epsfxsize=\dimen114
\epsfysize=\dimen115
)
("C:\Program Files\MiKTeX 2.8\tex\latex\hyperref\hyperref.sty"
Package: hyperref 2010/01/25 v6.80d Hypertext links for LaTeX
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\kvsetkeys.sty"
Package: kvsetkeys 2010/01/28 v1.8 Key value parser (HO)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\infwarerr.sty"
Package: infwarerr 2007/09/09 v1.2 Providing info/warning/message (HO)
)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\etexcmds.sty"
Package: etexcmds 2010/01/28 v1.3 Prefix for e-TeX command names (HO)
Package etexcmds Info: Could not find \expanded.
(etexcmds) That can mean that you are not using pdfTeX 1.50 or
(etexcmds) that some package has redefined \expanded.
(etexcmds) In the latter case, load this package earlier.
))
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\pdfescape.sty"
Package: pdfescape 2007/11/11 v1.8 Provides hex, PDF name and string conversion
s (HO)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\pdftexcmds.sty"
Package: pdftexcmds 2009/12/12 v0.7 Utility functions of pdfTeX for LuaTeX (HO)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\ifluatex.sty"
Package: ifluatex 2009/04/17 v1.2 Provides the ifluatex switch (HO)
Package ifluatex Info: LuaTeX not detected.
)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\ltxcmds.sty"
Package: ltxcmds 2010/01/28 v1.2 LaTeX kernel commands for general use (HO)
)
Package pdftexcmds Info: LuaTeX not detected.
Package pdftexcmds Info: \pdf@primitive is available.
Package pdftexcmds Info: \pdf@ifprimitive is available.
))
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\ifvtex.sty"
Package: ifvtex 2008/11/04 v1.4 Switches for detecting VTeX and its modes (HO)
Package ifvtex Info: VTeX not detected.
)
("C:\Program Files\MiKTeX 2.8\tex\latex\ifxetex\ifxetex.sty"
Package: ifxetex 2009/01/23 v0.5 Provides ifxetex conditional
)
("C:\Program Files\MiKTeX 2.8\tex\latex\oberdiek\hycolor.sty"
Package: hycolor 2009/12/12 v1.6 Color options of hyperref/bookmark (HO)
("C:\Program Files\MiKTeX 2.8\tex\latex\oberdiek\xcolor-patch.sty"
Package: xcolor-patch 2009/12/12 xcolor patch
))
\@linkdim=\dimen116
\Hy@linkcounter=\count99
\Hy@pagecounter=\count100
("C:\Program Files\MiKTeX 2.8\tex\latex\hyperref\pd1enc.def"
File: pd1enc.def 2010/01/25 v6.80d Hyperref: PDFDocEncoding definition (HO)
)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\intcalc.sty"
Package: intcalc 2007/09/27 v1.1 Expandable integer calculations (HO)
)
("C:\Program Files\MiKTeX 2.8\tex\latex\00miktex\hyperref.cfg"
File: hyperref.cfg 2002/06/06 v1.2 hyperref configuration of TeXLive
)
("C:\Program Files\MiKTeX 2.8\tex\latex\oberdiek\kvoptions.sty"
Package: kvoptions 2009/12/08 v3.6 Keyval support for LaTeX options (HO)
)
Package hyperref Info: Option `bookmarks' set `false' on input line 3214.
Package hyperref Info: Option `colorlinks' set `true' on input line 3214.
Package hyperref Info: Hyper figures OFF on input line 3295.
Package hyperref Info: Link nesting OFF on input line 3300.
Package hyperref Info: Hyper index ON on input line 3303.
Package hyperref Info: Plain pages OFF on input line 3310.
Package hyperref Info: Backreferencing OFF on input line 3315.
Implicit mode ON; LaTeX internals redefined
Package hyperref Info: Bookmarks OFF on input line 3517.
("C:\Program Files\MiKTeX 2.8\tex\latex\ltxmisc\url.sty"
\Urlmuskip=\muskip11
Package: url 2006/04/12 ver 3.3 Verb mode for urls, etc.
)
LaTeX Info: Redefining \url on input line 3748.
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\bitset.sty"
Package: bitset 2007/09/28 v1.0 Data type bit set (HO)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\bigintcalc.sty"
Package: bigintcalc 2007/11/11 v1.1 Expandable big integer calculations (HO)
))
\Fld@menulength=\count101
\Field@Width=\dimen117
\Fld@charsize=\dimen118
\Field@toks=\toks28
Package hyperref Info: Hyper figures OFF on input line 4707.
Package hyperref Info: Link nesting OFF on input line 4712.
Package hyperref Info: Hyper index ON on input line 4715.
Package hyperref Info: backreferencing OFF on input line 4722.
Package hyperref Info: Link coloring ON on input line 4725.
Package hyperref Info: Link coloring with OCG OFF on input line 4732.
Package hyperref Info: PDF/A mode OFF on input line 4737.
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\atbegshi.sty"
Package: atbegshi 2009/12/02 v1.10 At begin shipout hook (HO)
)
\Hy@abspage=\count102
\c@Item=\count103
\c@Hfootnote=\count104
)
*hyperref using default driver hdvips*
("C:\Program Files\MiKTeX 2.8\tex\latex\hyperref\hdvips.def"
File: hdvips.def 2010/01/25 v6.80d Hyperref driver for dvips
("C:\Program Files\MiKTeX 2.8\tex\latex\hyperref\pdfmark.def"
File: pdfmark.def 2010/01/25 v6.80d Hyperref definitions for pdfmark specials
\pdf@docset=\toks29
\pdf@box=\box28
\pdf@toks=\toks30
\pdf@defaulttoks=\toks31
\Fld@listcount=\count105
\c@bookmark@seq@number=\count106
("C:\Program Files\MiKTeX 2.8\tex\latex\oberdiek\rerunfilecheck.sty"
Package: rerunfilecheck 2010/01/25 v1.3 Rerun checks for auxiliary files (HO)
("C:\Program Files\MiKTeX 2.8\tex\latex\oberdiek\atveryend.sty"
Package: atveryend 2010/01/25 v1.4 Hooks at very end of document (HO)
Package atveryend Info: \enddocument detected (standard).
)
("C:\Program Files\MiKTeX 2.8\tex\generic\oberdiek\uniquecounter.sty"
Package: uniquecounter 2009/12/18 v1.1 Provides unlimited unique counter (HO)
)
Package uniquecounter Info: New unique counter `rerunfilecheck' on input line 2
70.
)
\Hy@SectionHShift=\skip49
))