\documentclass{article}
\usepackage{natbib}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{bm}
\usepackage{grffile}
\usepackage{graphicx}
\usepackage[%
font=small,
labelfont=bf,
figurewithin=section,
tablewithin=section,
tableposition=top
]{caption}
\numberwithin{equation}{section}
\makeatletter
\def\env@matrix{\hskip -\arraycolsep
\let\@ifnextchar\new@ifnextchar
\array{*\c@MaxMatrixCols l}}
\makeatother
\begin{document}
\parskip = 1pc %change spacing between paragraphs
\parindent = 0pc %change paragraph indentation
\section{Introduction}
\section{LMM}
\subsection{Introduction}
The Black model was already well established in the interest rate market. It allowed traders to price caps and swaptions individually, i.e.\ each in its own specific measure. However, there was no framework to price caps, swaptions or any other LIBOR products of different maturities (and hence measures) consistently. The seminal insight of Heath, Jarrow and Morton (1992) was that the no-arbitrage evolution of the state variables (e.g.\ forward rates) could be expressed as a function of the volatilities of, and correlations between, the state variables themselves. HJM was originally cast in terms of instantaneous forward rates, which do not actually trade in the market; moreover, the HJM paper pointed out that in the continuous-time limit, for truly instantaneous and lognormal forward rates, the process explodes with positive probability. This led to early implementations of the approach that tried to steer clear of lognormality: this was inconsistent with the already widely accepted Black approach and ultimately proved to be a dead end.
A new approach, based on HJM and first described in the papers by Brace et al.\ (1996), Jamshidian (1997) and Musiela and Rutkowski (1997), appeared in the mid-1990s. Its key features were:
\begin{itemize}
\item The yield curve was recast in terms of a market-observable discrete set of forward rates.
\item The no-arbitrage drifts for the forwards were translated from the continuous-time HJM setting to the new discrete setting.
\item A numeraire had to be chosen (in early attempts a discretely compounded money market account was invented, but forward and swap measures soon followed).
\item The lognormal distributional assumption for forward rates was introduced.
\end{itemize}
The LMM (like its HJM forebear) is not a model per se but rather a set of no-arbitrage conditions among forward rates. There are various incarnations of the model: it can be formulated in terms of forward rates or swap rates; these can be normally, lognormally or otherwise distributed; and the associated numeraire can be a zero-coupon bond, a swap annuity or the money market account. These choices, along with the specific forms of the instantaneous volatility and correlation functions used, fully specify the model. The simultaneous specification of these time-dependent volatilities and correlations became \textit{the} problem in the specification of the LMM.
When the forwards are used as the state variables and lognormality is assumed, caplet volatilities can be recovered exactly and in a very straightforward manner. Indeed, the lognormal forward market model prices caps using Black's formula, so calibration is very straightforward. This is one of the reasons the model has become so popular: it uses the market prices of the underlying instruments as building blocks, hence the name `market' model.
It should be noted that the LIBOR forward model and the LIBOR swap model are theoretically inconsistent, but this is not a concern in practice as excellent approximations exist; see Brigo (2007) and Rebonato (2002).
As we will see, the LMM lends itself well to Monte Carlo simulation and to the pricing of path-dependent options in a multi-factor framework. However, because the forward process implied by the LMM is in general non-Markovian, it does not lend itself easily to recombining lattice pricing techniques; consequently, dealing with early-exercise features can be problematic. Carr and Yang (1997) provide an example of how to approximate a Bermudan price using a Markov chain, as does Andersen (1999), who approximates the early-exercise boundary as a function of intrinsic value and `still alive' nested European swaptions. Further, a general method for combining backward induction with Monte Carlo simulation has been proposed by Longstaff and Schwartz (2001).
\subsection{Model Set-Up}
The presentation below follows Rebonato et al.\ (2009) \nocite{*}.
A discrete set of default-free discount bonds $P_t^i$ is assumed to trade in the economy. The spanning forward rates are denoted by:
\begin{equation}
f(t,T_i,T_{i+1}) = f_t^i \qquad i=1,2,\dots,N
\end{equation}
The instantaneous volatilities of the forward rates are denoted:
\begin{equation}
\sigma(t,T_i)=\sigma_t^i \qquad i=1,2,\dots,N
\end{equation}
The instantaneous correlation between forward rates $i$ and $j$ is denoted:
\begin{equation}
\rho(t,T_i,T_j)=\rho_{i,j}^t \qquad i,j=1,2,\dots,N
\end{equation}
We choose as numeraire the discount bond $P_t^i=P(t,T_i)$. The forward rate can be expressed in terms of discount bonds as follows:
\begin{equation}
f_t^i= \left(\frac{P_t^i}{P_t^{i+1}}-1\right)\frac{1}{\tau_i}
\end{equation}
The description of the discrete yield curve is then completed by providing the value of the spot rate, i.e.\ the rate for lending/borrowing from spot time to $T_1$, given by:
\begin{equation}
r_0= \left(\frac{1}{P_0^{1}}-1\right)\frac{1}{\tau_1}
\end{equation}
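The two bond relationships above are easy to check numerically. A minimal sketch in Python, with a purely illustrative discount curve:

```python
# Forward rates and the spot rate recovered from discount bond prices,
# as in the two equations above.  The discount factors and accrual
# fractions below are purely illustrative.

P = [1.00, 0.97, 0.94, 0.90]   # P[i] = P(0, T_i), with T_0 = spot, P[0] = 1
tau = [0.5, 0.5, 0.5]          # accrual fractions tau_1..tau_3

# f^i = (P^i / P^{i+1} - 1) / tau_i
forwards = [(P[i] / P[i + 1] - 1.0) / tau[i] for i in range(len(tau))]

# r_0 = (1 / P(0, T_1) - 1) / tau_1
r0 = (1.0 / P[1] - 1.0) / tau[0]

# the first "forward" spans spot to T_1, so it coincides with r_0
assert forwards[0] == r0
```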
Thus in the deterministic-volatility LMM the evolution of the forward rates is given by:
\begin{equation}
\frac{df_t^i}{f_t^i}= \mu^i(\{\textbf{f}_{t}\},\{ \bm{\sigma}_t\}, \bm{\rho},t)dt + \sigma^i(t,T_i)dz_t^i
\end{equation}
with
\begin{equation}
\mathbb{E}[dz_t^idz_t^j] = \rho(t,T_i,T_j)dt
\end{equation}
where $\textbf{f}_{t}$ is the vector of spanning forward rates, $\bm{\sigma}_t$ the vector of associated volatilities, and $\bm{\rho}$ the correlation matrix.
The following result is taken from Brigo et al.\ (2006): under the lognormal assumption, the dynamics of $f_k$ under the forward-adjusted measure $Q^i$ fall into the three cases $i<k$, $i=k$ and $i>k$.
\begin{equation}
df_k(t) =
\begin{cases}
\sigma_k(t)f_k(t) \displaystyle\sum_{j=i+1}^k \frac{\rho_{k,j}\tau_j \sigma_j(t)f_j(t)}
{1+\tau_jf_j(t)}dt +\sigma_k(t)f_k(t)dZ_k(t) &\text{if } i<k,\ t \leq T_i,\\
\sigma_k(t)f_k(t)dZ_k(t) &\text{if } i=k,\ t \leq T_i,\\
-\sigma_k(t)f_k(t) \displaystyle\sum_{j=k+1}^i \frac{\rho_{k,j}\tau_j \sigma_j(t)f_j(t)}
{1+\tau_jf_j(t)}dt +\sigma_k(t)f_k(t)dZ_k(t) &\text{if } i > k,\ t \leq T_i.
\end{cases}
\end{equation}
where $Z=Z^i$ is a Brownian motion under $Q^i$. All of the equations above admit a unique strong solution if the coefficients $\sigma(\cdot)$ are bounded.
Going back to equation (2.6), it can be rewritten as:
\begin{equation}
\frac{df_t^i}{f_t^i}= \mu^i(\{\textbf{f}_{t}\},\{ \bm{\sigma}_t\}, \bm{\rho},t)dt + \sum_{k=1}^{m} \sigma_{ik}dz_k
\end{equation}
where we assume that we are dealing with $m$ ($m \leq N$) factors and that the Brownian motions are independent. The quantities $\sigma_{ik}$ can be interpreted as the loading of the $i$th forward rate onto the $k$th factor. Hence:
\begin{equation}
\sigma_i(t)= \sqrt{\sum_{k=1}^m \sigma_{ik}^2(t)}
\end{equation}
If the function has been chosen such that
\begin{equation}
\int_0^{T_i} \sigma_i(t)^2 dt = \hat{\sigma}_i^2 T_i
\end{equation}
holds true, then the market caplets will be correctly priced; $\hat{\sigma}_i$ denotes the Black implied volatility.
If each loading $\sigma_{ik}$ is now multiplied and divided by the volatility $\sigma_i$ of the $i$th forward rate:
\begin{equation}
\frac{df_t^i}{f_t^i}= \mu^i(\{\textbf{f}_{t}\},\{ \bm{\sigma}_t\}, \bm{\rho},t)dt + \sigma_{i} \sum_{k=1}^{m} \frac{\sigma_{ik}}{\sigma_i}dz_k
\end{equation}
Using (2.10) this can be re-written as
\begin{equation}
\frac{df_t^i}{f_t^i}= \mu^i(\{\textbf{f}_{t}\},\{ \bm{\sigma}_t\}, \bm{\rho},t)dt + \sigma_{i} \sum_{k=1}^{m} \frac{\sigma_{ik}}{\sqrt{\sum_{k'=1}^{m}\sigma_{ik'}^2}}dz_k
\end{equation}
Defining $b_{ik}$ as
\begin{equation}
b_{ik} \equiv \frac{\sigma_{ik}}{\sqrt{\sum_{k'=1}^m \sigma_{ik'}^2}}
\end{equation}
thus (2.13) can be expressed as
\begin{equation}
\frac{df_t^i}{f_t^i}= \mu^i(\{\textbf{f}_{t}\},\{ \bm{\sigma}_t\}, \bm{\rho},t)dt + \sigma_{i} \sum_{k=1}^{m} b_{ik} dz_k
\end{equation}
where $\bm{b}$ is the $[N \times m]$ matrix of elements $b_{ik}$. It can readily be shown that the correlation matrix is
\begin{equation}
\bm{b}\bm{b}^T= \bm{\rho}
\end{equation}
Equation (2.15) allows the stochastic part to be decomposed into a pure volatility component, which can easily be calibrated to caplet prices, and a correlation component $\bm{b}$, which can, for instance, be fitted to historical correlation matrices.
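In the full-rank case ($m=N$) one valid choice of loading matrix is the Cholesky factor of the correlation matrix. A minimal sketch, using an illustrative exponential correlation matrix:

```python
import numpy as np

# Recover a loading matrix b with b @ b.T == rho (full-rank case, m = N).
# The exponential correlation matrix below is illustrative only.
N, beta = 4, 0.1
T = np.arange(1.0, N + 1)                    # expiries T_1..T_N
rho = np.exp(-beta * np.abs(T[:, None] - T[None, :]))

b = np.linalg.cholesky(rho)                  # one valid choice of loadings
assert np.allclose(b @ b.T, rho)

# each row of b has unit length, as required for b_{ik} = sigma_{ik}/sigma_i
assert np.allclose(np.sum(b**2, axis=1), 1.0)
```

Any orthogonal rotation of $\bm{b}$ gives another valid decomposition; the Cholesky factor is simply the most convenient one to compute.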
\subsection{Volatility Function}
As stated earlier, a given implementation of the LMM depends largely on the calibration of the volatility and correlation functions.
There are various choices, but again following Rebonato et al.\ (2009), I show an example of a volatility function for the deterministic case. This is both for pedagogical reasons and because this function is used and extended by Rebonato et al.\ when they apply the SABR model. The functional form has been shown to be a good empirical match in Rebonato (2002) and (2004).
\begin{equation}
\sigma_t^T=[a+b\tau]e^{-c\tau}+d
\end{equation}
where $\tau \equiv T-t$ is the time to expiry of the forward rate.
\begin{itemize}
\item As shown empirically in Rebonato (2002) and (2004), the humped shape is prevalent in `normal' trading environments, while the monotonically decreasing shape is prevalent in `excited' trading environments after major market dislocations.
\item It is time homogeneous, which, as shown in Rebonato (2002) and (2004), is a very desirable property matching the empirical behaviour of the instantaneous volatility curve.
\item It is square integrable.
\item It is very intuitive to use: $a+d$ is the value of the instantaneous volatility as the time to expiry reaches zero; $d$ is the instantaneous volatility for long maturities; and the position of the hump (assuming there is one) is given by $\hat{\tau}=\frac{1}{c}- \frac{a}{b}$.
\item When fitted to ATM swaptions or caplets, natural fits are obtained, although some fine tuning is often necessary; see below.
\end{itemize}
In general an extra parameter is required to attain a perfect fit to caplet prices. This parameter is maturity specific and so not time homogeneous. However, it is constrained during the optimisation process so that it is as close as possible to one. Rebonato himself confesses to this being a fudge factor, but it ends up playing an important role in the SABR version of the model.
\begin{equation}
k_{T_i}^2=\frac{\hat{\sigma}_{T_i}^2 T_i}{\int_0^{T_i}\left([a+b\tau]e^{-c\tau}+d\right)^2 d\tau}
\end{equation}
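For concreteness, a small numerical sketch of the abcd volatility function, the hump position, and the maturity-specific correction factor computed by midpoint integration (all parameter values are illustrative, not calibrated):

```python
import math

# Rebonato's abcd instantaneous volatility and the maturity-specific
# correction factor k_{T_i}; the parameter values are illustrative only.
a, b, c, d = 0.02, 0.10, 0.60, 0.14

def sigma_inst(tau):
    """Instantaneous volatility as a function of time to expiry tau."""
    return (a + b * tau) * math.exp(-c * tau) + d

# position of the hump (when one exists): tau_hat = 1/c - a/b
tau_hat = 1.0 / c - a / b

def k_factor(T_i, sigma_hat, n=10_000):
    """k_{T_i}: ratio of implied caplet variance to integrated model variance."""
    dt = T_i / n
    integral = sum(sigma_inst((j + 0.5) * dt) ** 2 * dt for j in range(n))
    return math.sqrt(sigma_hat**2 * T_i / integral)
```

When the implied volatility already equals the root-mean-square of the instantaneous volatility, $k_{T_i}$ is one by construction, which is exactly the value the optimisation tries to stay close to.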
\begin{figure}[h]
\centering
\includegraphics[scale=0.5, angle=-1]{volcurve.eps}
\caption{Possible shapes of the volatility function in equation (2.17)}\label{fig:2.1}
\end{figure}
\subsection{Correlation Function}
\subsubsection{Simple Exponential Function}
\begin{equation}
\rho(t,T_i,T_j) = e^{-\beta|T_i-T_j|} \qquad t \leq \min(T_i,T_j)
\end{equation}
\begin{itemize}
\item Assuming $\beta$ is positive, the further apart two forward rates are, the more decorrelated they will be, which matches empirical observation.
\item For any positive $\beta$ the corresponding matrix $\bm{\rho}$ will always be a real symmetric matrix with positive eigenvalues.
\item However, it gives the same decorrelation between forwards at 1y and 1y1m as between forwards at 10y and 10y1m, which does not match empirical observation. For instruments like swaptions, where it is only the net level of correlation that matters rather than the shape of the correlation matrix, this makes very little difference in practice. For products like CMS spread options, however, it would be a very serious flaw.
\end{itemize}
A further advantage is that this simple correlation function has no dependence on the time variable in:
\begin{equation}
C(i,j,k) = \int_{T_k}^{T_{k+1}} \sigma_u^i \sigma_u^j \rho(u,T_i,T_j) du
\end{equation}
and therefore one can write
\begin{equation}
C(i,j,k) = \rho_{i,j} \int_{T_k}^{T_{k+1}} \sigma_u^i \sigma_u^j du
\end{equation}
which reduces the computational burden.
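Both properties of the simple exponential function can be verified numerically. A sketch with illustrative parameters, checking positive eigenvalues and the factorisation of the covariance integral:

```python
import numpy as np

# Two quick checks on the simple exponential correlation function
# (all parameter values are illustrative):
# (1) rho_ij = exp(-beta|T_i - T_j|) has strictly positive eigenvalues;
# (2) being time-independent, rho factors out of the covariance integral.
beta = 0.1
T = np.arange(1.0, 11.0)                         # expiries T_1..T_10
rho = np.exp(-beta * np.abs(T[:, None] - T[None, :]))
assert np.all(np.linalg.eigvalsh(rho) > 0)       # a valid correlation matrix

def sigma(t, Ti):                                # an illustrative inst. vol
    return 0.15 + 0.05 * np.exp(-0.5 * (Ti - t))

# midpoint rule for C(i,j,k) over the interval [1, 2]
i, j, n = 3, 7, 2000
u = 1.0 + (np.arange(n) + 0.5) / n               # midpoints of [1, 2]
C = np.sum(sigma(u, T[i]) * sigma(u, T[j]) * rho[i, j]) / n
C_factored = rho[i, j] * np.sum(sigma(u, T[i]) * sigma(u, T[j])) / n
assert abs(C - C_factored) < 1e-15
```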
\subsubsection{Doust Correlation Function}
We would therefore like to introduce a dependence of the correlation parameter $\beta$ on the expiries of the forward rates, $\beta=\beta(T_i,T_j)$, in such a way that the resulting correlation matrix remains positive definite.
Assume we take a $5 \times 5$ real symmetric matrix, where we restrict ourselves to specifying only $(5-1)$ quantities, $a_1, a_2, a_3$ and $a_4$, keeping them between $-1$ and $1$. Then Doust (2007) (as reported in Rebonato et al.\ 2009) showed that the resulting matrix is always positive definite and so a possible correlation matrix.
\begin{equation}
\begin{bmatrix}
1 & a_1 & a_1a_2 & a_1a_2a_3 & a_1a_2a_3a_4 \\
a_1 & 1 & a_2 & a_2a_3 & a_2a_3a_4 \\
a_1a_2 & a_2 & 1 & a_3 & a_3a_4 \\
a_1a_2a_3 & a_2a_3 & a_3 & 1 & a_4 \\
a_1a_2a_3a_4 & a_2a_3a_4 & a_3a_4 & a_4 & 1 \end{bmatrix}
\end{equation}
The resulting matrix admits a Cholesky decomposition:
\begin{equation}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
a_1 & \sqrt{1-a_1^2} & 0 & 0 & 0 \\
a_1a_2 & a_2 \sqrt{1-a_1^2} & \sqrt{1-a_2^2} & 0 & 0 \\
a_1a_2a_3 & a_2a_3\sqrt{1-a_1^2} & a_3\sqrt{1-a_2^2} & \sqrt{1-a_3^2} & 0 \\
a_1a_2a_3a_4 & a_2a_3a_4\sqrt{1-a_1^2} & a_3a_4\sqrt{1-a_2^2} & a_4\sqrt{1-a_3^2} & \sqrt{1-a_4^2} \end{bmatrix}
\end{equation}
The condition that ensures a valid correlation matrix is that all the elements on the main diagonal must be real. This will always be the case so long as
\begin{equation}
-1 \leq a_i \leq 1 \qquad \text{for any } i
\end{equation}
Therefore, to ensure that one constructs a valid correlation matrix, one proceeds as follows. All the diagonal elements are set to
\begin{equation}
\rho_{ii} = 1
\end{equation}
The elements in the first row are then defined as:
\begin{equation}
\rho_{1,j} = \prod_{k=1}^{j-1} a_k = \rho_{j,1}, \qquad j =2,\dots,n
\end{equation}
Then, assuming $i > j$,
\begin{equation}
\rho_{i,j} = \frac{\rho_{1,i}}{\rho_{1,j}}= \frac{\prod_{k=1}^{i-1}a_k}{\prod_{k=1}^{j-1}a_k}=\prod_{k=j}^{i-1}a_k
\end{equation}
The remaining elements follow by symmetry, $\rho_{ij} =\rho_{ji}$.
Rebonato et al.\ (2009) suggest that the $a_i$ be chosen in the following manner:
\begin{equation}
a_k = e^{-\beta_k \Delta T}
\end{equation}
where $\Delta T$ is the spacing between the forward rates. Therefore
\begin{equation}
\rho_{i,j} = \prod_{k=j}^{i-1} a_k = \prod_{k=j}^{i-1} e^{-\beta_k \Delta T} = e^{-\sum_{k=j}^{i-1}\beta_k \Delta T}
\end{equation}
When $\beta_k = \beta_0$ for all $k$, the expression degenerates to the simple exponential function shown earlier:
\begin{equation}
\rho_{i,j}^0 = e^{-(i-j) \beta_0 \Delta T} = e^{-\beta_0|T_i - T_j|}
\end{equation}
If we set $\beta_k = g_0+g_1/k+g_2/k^2 + \cdots$ for positive $g_i$, then the $\beta_k$ are both decreasing in $k$ and always positive, which is in line with the empirical findings outlined earlier.
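The construction above is easy to sketch numerically; the values of $n$, $\Delta T$, $g_0$ and $g_1$ below are illustrative:

```python
import numpy as np

# Doust correlation matrix built from a_k = exp(-beta_k * dT), with
# beta_k = g0 + g1/k decreasing in k.  All parameter values illustrative.
n, dT, g0, g1 = 6, 1.0, 0.05, 0.10
beta = np.array([g0 + g1 / k for k in range(1, n)])  # beta_1 .. beta_{n-1}
a = np.exp(-beta * dT)                               # each a_k lies in (0, 1)

rho = np.ones((n, n))
for i in range(n):
    for j in range(i):
        # rho_{i,j} = product of the a_k lying between the two indices
        rho[i, j] = rho[j, i] = np.prod(a[j:i])

np.linalg.cholesky(rho)      # succeeds, so rho is positive definite
```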
The Doust correlation function can be simply augmented to allow the long-term correlation to level off at some level, rather than decreasing monotonically to zero:
\begin{equation}
\rho_{ij}(t) = \text{LongCorr} + (1-\text{LongCorr})\, \hat{\rho}_{ij}(t)
\end{equation}
where $\hat{\rho}_{ij}(t)$ is a valid correlation matrix, produced in this instance by the Doust method.
In the Doust model the decorrelation depends not just on the distance between two forwards but also on their maturities. This means that the correlation can no longer be pulled out of the covariance integral:
\begin{equation}
C(i,j,k) = \int_{T_k}^{T_{k+1}} \sigma_u^i \sigma_u^j \rho(u,T_i,T_j) du \equiv I_k
\end{equation}
However, the Doust correlation function is time homogeneous, so that the decorrelation between contracts 4 and 5 at time $T_1$ will equal the decorrelation between contracts 3 and 4 at time $T_0$. Thus, in the deterministic-volatility case, if the time steps of the Monte Carlo simulation are chosen to match the maturities of the underlying forwards, then calendar time will not enter the covariance integral.
Things are not quite so straightforward in the stochastic-volatility case. However, at worst one simply has to store a covariance matrix at each time step (the expiry of each forward) and then use linear interpolation between them. The covariance matrices themselves need only be computed once, thanks to the time-homogeneity of the Doust correlation function.
\begin{figure}[h]
\centering
\includegraphics[scale=0.1, angle=-1]{correlation.eps}
\caption{Example of Doust correlation structure}\label{fig:2.2}
\end{figure}
\subsection{Intrinsic Incompleteness of LMM}
In what follows I give a couple of very straightforward examples from Rebonato (2004) that highlight some of the problems surrounding the calibration of the LMM, even in the deterministic-volatility setting.
\begin{figure}[h]
\centering
\includegraphics[scale=0.1]{swaption.eps}
\caption{The reset and expiry times of the two forwards and of the swap rate}\label{fig:2.3}
\end{figure}
Consider two forwards $f_1(t,T_1,T_2)$ and $f_2(t,T_2,T_3)$, and the swap rate $SR_{12} = SR_{12}(t,T_1,T_3)$. The swap rate is a linear combination of the two forwards:
\begin{equation}
SR_{12} = w_1f_1 +w_2f_2
\end{equation}
Assume that all volatilities and correlations are piecewise constant over the two time steps. Further assume that the forwards $f_1$ and $f_2$ both trade in the market with implied Black volatilities $\hat{\sigma}_1$ and $\hat{\sigma}_2$ of $20\%$. The swaption Black volatility $\hat{\sigma}_{SR}$ is taken to be $18\%$.
As shown in Rebonato and Jackel (2002), a reasonably accurate approximation for the swaption volatility is given by
\begin{equation}
\sigma_{SR}^2SR^2= w_1^2f_1^2 \sigma_{1,1}^2+w_2^2f_2^2 \sigma_{2,1}^2 + 2 w_1w_2f_1f_2\sigma_{1,1}\sigma_{2,1} \rho
\end{equation}
Clearly the volatility of the first forward must simply equal its Black volatility of $20\%$. Therefore, so long as the net volatility of the second forward over the two periods still matches $20\%$, the problem of matching the swaption price is fundamentally over-specified: we have two degrees of freedom and only one price to match. This is true even though there is some constraint on the volatility of the second forward:
\begin{equation}
\hat{\sigma}_2^2(T_2-T_0)= \int_{T_0}^{T_2} \sigma_2^2(u)\, du = \sigma_{2,1}^2(T_1-T_0) + \sigma_{2,2}^2(T_2-T_1), \qquad \hat{\sigma}_2 = 20\%
\end{equation}
In the extreme we could take $\rho = 1$ and reduce the volatility of the second forward over the first period; clearly we would then have to raise its volatility over the second period to keep equation (2.35) balanced. Or we could keep the volatilities flat at $20\%$ and reduce the correlation. Or indeed choose any of an infinite number of combinations, varying the correlation and the instantaneous volatilities.
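The over-specification can be made concrete with a small numerical sketch. The function below is the single-period version of the swaption-variance approximation above (maturity weighting omitted for simplicity); the forward levels and weights are illustrative:

```python
import math

# Two different (sigma_{2,1}, rho) pairs that reproduce the same swaption
# volatility through the approximation above.  Weights and forward levels
# are illustrative; the first forward's volatility is pinned at 20%.
w1, w2 = 0.5, 0.5
f1, f2 = 0.04, 0.04
SR = w1 * f1 + w2 * f2
s1 = 0.20

def swaption_vol(s21, rho):
    var = (w1**2 * f1**2 * s1**2 + w2**2 * f2**2 * s21**2
           + 2.0 * w1 * w2 * f1 * f2 * s1 * s21 * rho)
    return math.sqrt(var) / SR

# choice A: full correlation, reduced early volatility of the second forward
target = swaption_vol(0.16, 1.0)          # comes out at 18%, as in the text

# choice B: both volatilities flat at 20% -- solve for the implied rho
rho_B = (((target * SR) ** 2 - w1**2 * f1**2 * s1**2
          - w2**2 * f2**2 * 0.20**2)
         / (2.0 * w1 * w2 * f1 * f2 * s1 * 0.20))   # roughly 0.62

assert abs(swaption_vol(0.20, rho_B) - target) < 1e-12
```

Both parameter sets price the swaption identically, yet they imply different future volatility term structures and hence different exotics prices.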
As can be seen, the caplet and swaption surfaces can be fitted in an infinity of ways. This is a problem because each fit corresponds to a different future evolution of instantaneous volatilities and correlations, and so gives rise to different exotics prices. Thus the caplet and swaption markets are intrinsically incomplete, even in the deterministic-volatility universe.
This is even more of a problem in the smile case. The LMM does allow the trader to express views about the future term structure of volatilities and other unknowns directly, and much more straightforwardly than with traditional spot-rate term structure models. However, since vega hedging is essential to exotic derivatives trading, future re-hedging costs must be considered, and this requires modelling of future implied volatilities. This is why a plausible evolution of the smile is essential: if a model implies an implausible shape for the future smile, as local-volatility models for example do, it also implies implausible future prices for caplets and swaptions, and therefore implausible exotics prices.
\section{Smile Models used with LMM}
\subsection{Local-Volatility Models}
\subsubsection{Shifted-Lognormal Model}
Following Brigo (2006), assume that the forward $f_j$ evolves under its $T_j$ forward measure according to
\begin{align}
\begin{split}
f_j(t) &=X_j(t) + \alpha \\
dX_j(t) &=\beta(t)X_j(t)dW_t
\end{split}
\end{align}
where $\alpha$ is a real constant, $\beta$ is a deterministic function of time, and $W_t$ is a standard Brownian motion.
Therefore a straightforward application of It\^o's lemma and some simple algebra gives:
\begin{align}
df_j(t) = \beta(t)(f_j(t)-\alpha)dW_t
\end{align}
thus for $t < T \leq T_{j-1}$, the forward rate $f_j$ can be written as
\begin{equation}
f_j(T) = \alpha + (f_j(t) -\alpha)\exp\left(-\frac{1}{2}\int_t^T\beta^2(u)\,du + \int_t^T\beta(u)\,dW_u\right)
\end{equation}
The distribution of $f_j(T)$, conditional on $f_j(t)$, $t < T \leq T_{j-1}$, is then a shifted lognormal distribution with density
\begin{equation}
p_{f_j(T)|f_j(t)}(x) = \frac{1}{(x-\alpha)U(t,T)\sqrt{2\pi}} \exp\left\{-\frac{1}{2}\left[\frac{\ln\left(\frac{x-\alpha}{f_j(t)-\alpha}\right)+\frac{1}{2}U^2(t,T)}{U(t,T)}\right]^2\right\}
\end{equation}
for $x > \alpha$, where
\begin{equation}
U(t,T) := \sqrt{\int_t^T \beta^2(u)du}
\end{equation}
so that for $ \alpha < K$ the caplet price \textbf{Cpl}$(t,T_{j-1},T_j,\tau_j,N,K)$ associated with equation (3.2) is given by
\begin{equation}
\textbf{Cpl}(t,T_{j-1},T_j,\tau_j,N,K)=\tau_jNP(t,T_j)\textbf{Black}(K-\alpha,f_j(t)-\alpha,U(t,T_{j-1}))
\end{equation}
Thus, equating at-the-money prices in the two models:
\begin{equation}
(f_j(0)- \alpha)\left[2 \Phi\!\left(\tfrac{1}{2}U(0,T_{j-1})\right)-1\right] = f_j(0)\left[2 \Phi\!\left(\tfrac{1}{2}\sqrt{T_{j-1}}\,\hat{\sigma}(f_j(0), \alpha)\right)-1\right]
\end{equation}
Clearly, increasing $\alpha$ on the left-hand side requires a corresponding reduction in $\hat{\sigma}$ on the right-hand side; indeed, differentiating (3.6) with respect to $\alpha$ produces a value which is always negative.
Therefore for $\alpha < 0$ we always obtain a decreasing volatility curve; moreover, increasing $\alpha$ moves the whole curve downwards, and vice versa.
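A minimal sketch of the shifted-lognormal caplet and the skew it produces, assuming Black's formula as above (notional, accrual fraction and discounting dropped, and all numerical values illustrative):

```python
import math

# Shifted-lognormal caplet: Black's formula evaluated at the shifted
# forward f - alpha and the shifted strike K - alpha (alpha < K assumed).

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black(K, f, U):
    """Undiscounted Black call with total volatility U."""
    d1 = (math.log(f / K) + 0.5 * U * U) / U
    return f * norm_cdf(d1) - K * norm_cdf(d1 - U)

def shifted_caplet(K, f, alpha, U):
    return black(K - alpha, f - alpha, U)

def implied_U(price, K, f, lo=1e-6, hi=5.0):
    """Invert Black by bisection to recover the unshifted implied vol."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if black(K, f, mid) < price else (lo, mid)
    return lo

f, alpha, U = 0.05, -0.01, 0.20
smile = [implied_U(shifted_caplet(K, f, alpha, U), K, f)
         for K in (0.03, 0.05, 0.07)]
# a negative alpha produces a monotonically decreasing implied-vol curve
assert smile[0] > smile[1] > smile[2]
```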
\begin{figure}[h]
\centering
\includegraphics[scale=0.1]{ShiftedLognormal.eps}
\caption{Caplet Volatility Structure $\hat{\sigma}(K,\alpha)$ plotted against strike}\label{fig:3.1}
\end{figure}
\subsubsection{Constant Elasticity of Variance Model}
Following Andersen and Andreasen, the forward-rate dynamics are specified as
\begin{equation}
df_j(t)=\phi(f_j(t))\sigma_j(t)dZ_j^j(t)
\end{equation}
where $\phi$ is a general function. Andersen and Andreasen suggest
\begin{equation}
\phi(f_j(t))=[f_j(t)]^\gamma
\end{equation}
with $0<\gamma<1$. The limiting cases $\gamma=0$ and $\gamma=1$ correspond to fully normal and lognormal dynamics, respectively. For values in between, the dynamics are a mixture of the two processes.
Thus the model is
\begin{equation}
df_j(t)=\sigma_j(t)[f_j(t)]^\gamma dW_t \qquad f_j=0 \text{ absorbing boundary when } 0< \gamma<\frac{1}{2}
\end{equation}
where $W$ is a one-dimensional Brownian motion under the $T_j$ forward measure.
For $0<\gamma<\frac{1}{2}$, (3.10) does not have a unique solution unless we set $f_j=0$ as an absorbing boundary for the SDE. Andersen and Andreasen also consider the case $\gamma>1$, but note that this can lead to explosions when leaving the $T_j$ measure.
Time dependence of $\sigma_j$ can be dealt with through a deterministic time change:
\begin{equation}
\upsilon(\tau,T) = \int_{\tau}^T \sigma_j(s)^2ds
\end{equation}
defining
\begin{equation}
\tilde{W}(\upsilon(0,t)) := \int_0^t \sigma_j(s)dW(s)
\end{equation}
which gives a Brownian motion $\tilde{W}$ with time parameter $\upsilon$. Substituting into equation (3.10) gives:
\begin{equation}
df_j(\upsilon) = f_j(\upsilon)^{\gamma}\,d\tilde{W}(\upsilon) \qquad f_j=0 \text{ absorbing boundary when } 0< \gamma<\frac{1}{2}
\end{equation}
This process can be transformed into a Bessel process via a change of variable, and some manipulation yields the transition density of $f_j$. Remembering the time change, the continuous part of the density of $f_j(T)$, conditional on $f_j(t)$, $t < T \leq T_{j-1}$, is given by
\begin{align}
\begin{split}
p_{f_j(T)|f_j(t)}(x) &= 2(1-\gamma)k^{1/(2-2\gamma)}\left(uw^{1-4\gamma}\right)^{1/(4-4\gamma)}e^{-u-w}\,\textbf{I}_{1/(2-2\gamma)}\!\left(2\sqrt{uw}\right) \\
k &=\frac{1}{2\upsilon(t,T)(1-\gamma)^2} \\
u &= k[f_j(t)]^{2(1-\gamma)} \\
w &=kx^{2(1-\gamma)}
\end{split}
\end{align}
with $I_q$ denoting the modified Bessel function of the first kind of order $q$.
Denoting by $g(y,z) = \frac{e^{-z}z^{y-1}}{\Gamma(y)}$ the gamma density function and by $G(y,x)= \int_x^{+\infty} g(y,z)\,dz$ the complementary gamma distribution, the probability that $f_j(T) =0$ conditional on $f_j(t)$ is $G\big(\frac{1}{2(1-\gamma)},u\big)$.
From these results an explicit solution for caplet prices can be derived:
\begin{equation}
\begin{split}
\textbf{Cpl}(t,T_{j-1},T_j,\tau_j,N,K) = \tau_jNP(t,T_j)\Big[f_j(t)\sum_{n=0}^{+\infty}g(n+1,u)G(c_n,kK^{2(1-\gamma)}) \\ - K \sum_{n=0}^{+\infty}g(c_n,u)G(n+1,kK^{2(1-\gamma)})\Big]
\end{split}
\end{equation}
where
\begin{equation}
c_n := n + 1 + \frac{1}{2(1- \gamma)}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{CEV.eps}
\caption{Caplet Volatility Structure $\hat{\sigma}(K,\gamma)$ plotted against strike}\label{fig:3.2}
\end{figure}
Andersen and Andreasen also propose an extension, called the Limited CEV process, that addresses the problem of absorption at $f_j=0$. They propose a new process where
\begin{equation}
\phi(f) = f\text{ } min(\epsilon^{\gamma-1}.f^{\gamma-1})
\end{equation}
Essentially, the model collapses the process to a lognormal one for very low levels of $f$.
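The Limited CEV mapping is a one-liner; a sketch with illustrative values of $\gamma$ and $\epsilon$:

```python
# The Limited CEV mapping phi(f) = f * min(eps**(gamma-1), f**(gamma-1)).
# Above the cutoff eps it coincides with plain CEV, f**gamma; below the
# cutoff it is linear in f (lognormal-style), so f = 0 is never reached.
def phi_limited(f, gamma=0.5, eps=1e-4):    # gamma, eps illustrative
    return f * min(eps ** (gamma - 1.0), f ** (gamma - 1.0))

# above the cutoff: plain CEV
assert abs(phi_limited(0.05) - 0.05 ** 0.5) < 1e-12
# below the cutoff: linear in f, with slope eps**(gamma-1)
assert phi_limited(5e-5) == 5e-5 * (1e-4 ** -0.5)
```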
\subsubsection{Comment on CEV and Shifted Lognormal Models }
From visual inspection of Figures 3.1 and 3.2 it is quite clear that the smiles produced by the two models are qualitatively similar. Indeed, Rebonato (2002) shows that for a range of futures values there is a direct equivalence between the two models.
The economic intuition behind these models is as follows. In a lognormal world with a rising term structure, say with the 1y rate at 3\% and the 2y rate at 6\%, a shock to rates should produce a move twice as large in the 6\% rate. In a fully normal world both rates would move by the same absolute amount. Empirically it has been observed that, although the higher rate does tend to move more, it does not move twice as much. As mentioned before, the CEV model is fully lognormal when $\gamma=1$ and normal when $\gamma=0$; choosing a value in between allows the market behaviour to be matched.
Simple skews like this were witnessed in the market as far back as 1996. It was only later, after the Russia crisis, that full smiles developed in interest rate markets.
Imply from the market what you can hedge, and estimate econometrically what you cannot (Rebonato et al.\ 2009).
% TODO: Cover cap and swaption pricing, giving examples of both. Explain that the fit is not unique even in the deterministic-volatility case; link to the market price of risk, which can change and cannot be locked in.
% TODO: Add a taxonomy of LMM variants and a very brief history of term structure models?
% TODO: What is the smile? See Rebonato (2002).
% TODO: Smile models; Rebonato (2004).
% TODO: LFM and LSM are incompatible, but are the differences material? Rebonato (2002) or Brigo?
% TODO: Brigo (204) has a nice example of why correlations are material to swaption prices.
% TODO: How many factors are needed for good correlation coverage?
\bibliographystyle{plainnat}
\bibliography{volsmile}
\end{document}