the ones that you initially used (the random values) to price calls and puts with the BSM. Note that the implied volatilities of calls and puts should match for the same input arguments. Plot the difference to verify that the dollar difference is always less than “tol” (given that the algorithm was run for enough iterations to secure convergence).
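As a rough illustration of this check, the following Python sketch (a minimal example, not the handbook's own code) defines a simple BSM pricer and a bisection-based implied-volatility solver, recovers the implied volatilities of calls and puts priced with random "true" volatilities, and plots the absolute difference against the tolerance; the helper names, parameter values, and the choice of bisection are assumptions made here for illustration.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def bsm_price(S, K, T, r, sigma, kind="call"):
    """Black-Scholes-Merton price of a European call or put."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if kind == "call":
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def implied_vol(price, S, K, T, r, kind, tol=1e-6, max_iter=200):
    """Bisection search: BSM prices increase in sigma, so bracket and halve."""
    lo, hi = 1e-4, 3.0
    while hi - lo > tol and max_iter > 0:
        mid = 0.5 * (lo + hi)
        if bsm_price(S, K, T, r, mid, kind) < price:
            lo = mid
        else:
            hi = mid
        max_iter -= 1
    return 0.5 * (lo + hi)

# Price options with random "true" volatilities, then invert calls and puts
rng = np.random.default_rng(0)
S, K, T, r, tol = 100.0, 100.0, 1.0, 0.02, 1e-6
true_vols = rng.uniform(0.1, 0.5, size=50)
iv_calls = [implied_vol(bsm_price(S, K, T, r, v, "call"), S, K, T, r, "call", tol) for v in true_vols]
iv_puts = [implied_vol(bsm_price(S, K, T, r, v, "put"), S, K, T, r, "put", tol) for v in true_vols]

plt.plot(np.abs(np.array(iv_calls) - np.array(iv_puts)), marker="o")
plt.axhline(tol, linestyle="--", label="tol")
plt.ylabel("|call IV - put IV|")
plt.legend()
plt.show()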
8.2 Function Minimization and Plots
There is a variety of optimization algorithms that can be used to minimize (or maximize) a univariate or a multivariate function (note that maximizing a function, f(x), is the same as minimizing -f(x)). Two well-known and widely used algorithms are gradient descent, which minimizes a function by relying solely on the function's first-order partial derivatives (the gradient vector), and Newton's method, which also utilizes the function's second-order partial derivatives (the Hessian matrix).
The gradient descent algorithm can be summarized as follows:
• Step #1: give an initial guess for the coordinates of the minimum point, $x^0$ (e.g. if the function under consideration has two unknowns, $x_1$ and $x_2$, then $x^0$ should be a two-element row vector: $x^0 = [x_1^0, x_2^0]$)
• Step #2: perform an algorithm iteration, $k$, based on the following formula:
$$x^{k+1} = x^k - a\,f'(x^k), \qquad k = 1, 2, 3, \ldots$$
where $f(x^k)$ is the value of the function at the $k$-th iteration, evaluated at the point $x^k$, and $f'(x^k)$ is a row vector that contains the first-order partial derivatives of $f(\cdot)$ with respect to the $x$'s at the $k$-th iteration. The parameter $a$ is a positive scalar that lies between 0 and 1 and acts as a learning rate that sets the step size. Large values of $a$ force the algorithm to oscillate around the minimum point or even diverge, whereas small values of $a$ lead to more iterations before convergence, as the sketch below illustrates.
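To make the mechanics concrete, here is a minimal Python sketch (an illustration under assumed names and a test function chosen here, not the handbook's code) that applies the update rule above to the quadratic f(x) = x1^2 + 10*x2^2 for a small, a moderate, and an overly large learning rate, and then takes a single Newton step with the Hessian for comparison (which, for a quadratic, lands exactly on the minimum).

import numpy as np

def grad_descent(grad, x0, a, iters=50):
    """Gradient descent: x^{k+1} = x^k - a * f'(x^k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - a * grad(x)
    return x

# Test function and its first- and second-order derivatives
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])   # gradient vector
H = np.array([[2.0, 0.0], [0.0, 20.0]])                # Hessian matrix (constant for a quadratic)

x0 = [3.0, 2.0]
for a in (0.01, 0.09, 0.11):   # small, moderate, too large
    x = grad_descent(grad, x0, a)
    print(f"a={a}: x={x}, f(x)={f(x):.3e}")   # a=0.11 oscillates and diverges

# Newton's method instead uses the Hessian: x^{k+1} = x^k - H^{-1} f'(x^k)
x_newton = np.array(x0) - np.linalg.solve(H, grad(np.array(x0)))
print("Newton step:", x_newton)   # [0. 0.], the exact minimum in one step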