Wednesday, August 24, 2022

Non-Linear Optimization Classification

Non-Linear Optimization 
Question:
Consider the following maximization problem:
max f(x) = x^3 + 2x - 2x^2 - 0.25x^4
Apply the bisection, tangent (Newton) and secant methods to solve the problem. Perform, by hand, one iteration of each
method, and then implement the corresponding procedures in Python or R. The choice of the initial solution, tolerance,
maximum number of iterations and stopping criterion is free. Do the same for the following function, this time using the
gradient method:
max f(x1, x2) = 2x1x2 - x2 - x1^2 - 2x2^2
Answer:
Starting with:

max f(x) = x^3 + 2x - 2x^2 - 0.25x^4

Bisection Method:


Finding the root:

f(0) = 0^3 + 2(0) - 2(0)^2 - 0.25(0)^4 = 0

f(1) = 1^3 + 2(1) - 2(1)^2 - 0.25(1)^4 = 0.75

f(2) = 2^3 + 2(2) - 2(2)^2 - 0.25(2)^4 = 0

f(3) = 3^3 + 2(3) - 2(3)^2 - 0.25(3)^4 = -5.25

According to the bisection method, a root is located between the first positive value and the first negative value, so the interval is (1, 3).
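These sign checks are easy to reproduce in R (a one-liner; func here anticipates the definition used in the R code below):

func <- function(x) x^3 + 2*x - 2*x^2 - 0.25*x^4

sapply(0:3, func)  # 0.00 0.75 0.00 -5.25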

The first iteration gives x1 = (a + b)/2 = (1 + 3)/2 = 2.

For the next iteration we keep the half-interval over which f changes sign: if f(x1) has the same sign as f(a) we replace a with x1, otherwise we replace b with x1, and so on. Here f(x1) = f(2) = 0, so the midpoint is already exactly a root.

The Bisection Method in R:

library(NLRoot)

# define the function
func <- function(x) { x^3 + 2*x - 2*x^2 - 0.25*x^4 }

# plot the function together with the zero lines
curve(func, xlim=c(-1,3), col='red', lwd=4, lty=8, ylab='f(x)')

abline(h=0)

abline(v=0)

# bisection search (BFfzero from NLRoot) over the bracketing interval (1, 3)
BFfzero(func, 1, 3)

The output of the function shows the root is located at x = 2.000002.

A Bisection Function in R (with iteration limit and tolerance):

bisection <- function(f, a, b, n = 1000, tol = 1e-5) {
  # the endpoints must bracket a root
  if (sign(f(a)) == sign(f(b))) { stop('f(a) and f(b) must have different signs') }
  for (i in 1:n) {
    c <- (a + b) / 2  # midpoint of the current interval
    if ((f(c) == 0) || ((b - a) / 2 < tol)) {
      return(c)
    }
    # keep the half-interval over which f changes sign
    if (sign(f(c)) == sign(f(a))) a <- c else b <- c
  }
  print('over the maximum iterations')
}

bisection(func, 2, 3)

[1] 2.000008

Newton's Method:

max f(x) = x^3 + 2x - 2x^2 - 0.25x^4

Starting from the bracketing interval (1, 3) obtained with the bisection method above, the Newton-Raphson method proceeds using x0 = 2 as the initial guess.

x(n+1) = xn - f(xn)/f'(xn),  with  f'(x) = 3x^2 + 2 - 4x - x^3

x1 = 2 - f(2)/f'(2) = 2 - (2^3 + 2(2) - 2(2)^2 - 0.25(2)^4)/(3(2)^2 + 2 - 4(2) - 2^3) = 2 - 0/(-2) = 2

Since the numerator f(2) = 0, the Newton step is zero and x1 = x0 = 2: the method has already converged to the root at the first iteration.
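As a quick numerical check of this step (a small sketch, reusing func from the bisection code above; fprime is a helper name introduced here for the analytic derivative):

fprime <- function(x) 3*x^2 + 2 - 4*x - x^3  # analytic derivative f'(x)

fprime(2)  # -2

func(2) / fprime(2)  # 0, so the Newton update leaves x0 = 2 unchanged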

Newton's Method in R:

# load the NLRoot library
library(NLRoot)

# define the function
func2 <- function(x) { x^3 + 2*x - 2*x^2 - 0.25*x^4 }

# draw the curve of the function
curve(func2, xlim=c(-2,3), col='blue', lwd=4, lty=8, ylab='f(x)')

# vertical and horizontal zero lines
abline(h=0)

abline(v=0)

> uniroot(func2, c(0, 3))

$root

[1] 0

$f.root

[1] 0

$iter

[1] 0

$init.it

[1] NA

$estim.prec

[1] 0

uniroot reports the root at x = 0: the function is exactly zero at the left endpoint of the search interval (0, 3), so it returns immediately with zero iterations.
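To recover the other root instead, the interval (1, 3) found by bisection can be passed to uniroot (a quick check; the exact printed digits may differ slightly):

uniroot(func2, c(1, 3))$root  # approximately 2, matching the bisection result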

Newton's Method as an R Function:

# numDeriv provides the numerical derivative genD()
newton.raphson <- function(f, a, n = 1000, tol = 1e-5) {
  require(numDeriv)
  x_i <- a
  k <- numeric(0)  # history of the iterates
  fa <- f(a)
  if (abs(fa) < tol) {
    return(a)  # the starting point is already (numerically) a root
  }
  for (i in 1:n) {
    dx <- genD(func = f, x = x_i)$D[1]  # numerical first derivative at x_i
    x_next <- x_i - (f(x_i) / dx)  # Newton step
    k[i] <- x_next  # store x_next
    if (abs(x_next - x_i) < tol) {
      root.approx <- tail(k, n = 1)
      res <- list('root approximation' = root.approx, 'iterations' = k)
      return(res)
    }
    x_i <- x_next
  }
  print('Too many iterations in method')
}

Starting from x0 = 2, |f(2)| = 0 is already below the tolerance, so the function returns the root immediately, in agreement with the hand computation:

> newton.raphson(func2, 2)

[1] 2

Secant Method:

max f(x) = x^3 + 2x - 2x^2 - 0.25x^4

Take x0 = 0 and x1 = 3 as the starting points. The first iteration x2 can then be computed as below:

x2 = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0))

x2 = 3 - f(3)(3 - 0)/(f(3) - f(0)) = 3 - (-5.25)(3)/(-5.25 - 0) = 3 - 3 = 0

As we can see, the result x2 = 0 of the first iteration lands exactly on a zero of the function, which means the iterations cannot go any further.
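This single secant step is easy to reproduce in R (a small sketch, reusing func from above):

x0 <- 0; x1 <- 3

x2 <- x1 - func(x1) * (x1 - x0) / (func(x1) - func(x0))

x2  # 0, matching the hand computation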

To be more certain about the result, we can plot the function to visualize it.

Secant Method in R:

# define the function
func <- function(x) { x^3 + 2*x - 2*x^2 - 0.25*x^4 }

curve(expr = func, col='green', lwd=2, lty=2, xlim=c(-2,3))  # plotting the function

abline(h=0)  # horizontal zero line

abline(v=0)  # vertical zero line

x0 <- 0  # x0 starting point

x1 <- 3  # x1 starting point

a <- func(x0)  # f(x0)

b <- func(x1)  # f(x1)

points(x0, a, cex=1.25, pch=21, bg='blue', col='blue')  # the point (x0, f(x0))

points(x1, b, cex=1.25, pch=21, bg='blue', col='blue')  # the point (x1, f(x1))

segments(x0, a, x1, b, col='red', lwd=2)  # the secant line through the two points

> SMfzero(func, 0, 3)

[1] 0

[1] 0

[1] "finding root is successful""

SMfzero confirms the root at x = 0, in agreement with the hand iteration above.

Gradient Method:

max f(x1, x2) = 2x1x2 - x2 - x1^2 - 2x2^2

The first step is to choose an initial solution, x1 = 2 and x2 = 1, and a step size of 0.1.

Calculating the gradient with respect to x1 and x2 at this point:

∂f/∂x1 = 2x2 - 2x1 = 2(1) - 2(2) = -2

∂f/∂x2 = 2x1 - 1 - 4x2 = 2(2) - 1 - 4(1) = -1

Now we can compute the first iteration of x1 and x2. Since this is a maximization problem, we step in the direction of the gradient (gradient ascent):

x(i+1) = xi + 0.1 ∇f(xi)

x1 = 2 + 0.1(-2) = 1.8 as the first iteration of x1

x2 = 1 + 0.1(-1) = 0.9 as the first iteration of x2
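The question also asks for an implementation of the gradient method. A minimal gradient-ascent loop in R, following the hand computation above (a sketch; the gradient-norm stopping rule and the iteration cap are free choices, as the question allows):

f <- function(x) 2*x[1]*x[2] - x[2] - x[1]^2 - 2*x[2]^2  # objective to maximize

grad <- function(x) c(2*x[2] - 2*x[1], 2*x[1] - 1 - 4*x[2])  # analytic gradient

x <- c(2, 1)  # initial solution

alpha <- 0.1  # step size

for (i in 1:1000) {
  g <- grad(x)
  if (sqrt(sum(g^2)) < 1e-6) break  # stop when the gradient is (nearly) zero
  x <- x + alpha * g  # ascent step, since we maximize f
}

x  # converges to (-0.5, -0.5), where the gradient vanishes

f(x)  # maximum value 0.25

The iterates approach (-0.5, -0.5), the point where both partial derivatives are zero; since the Hessian of f is negative definite there, this is the global maximum.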

Calculating gradient iterations by hand can become long and time-consuming, especially when the function to optimize is complex and we insist on reaching the exact global optimum through the successive iterates. For that reason, a meta-heuristic can be applied: it reaches a point close to the optimum, without a guarantee of optimality, in less time and space.
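As an illustration, R's built-in optim() uses the derivative-free Nelder-Mead search by default, which is not strictly a meta-heuristic but a heuristic direct search in the same spirit (a sketch; fobj is a helper name introduced here, and fnscale = -1 makes optim maximize instead of minimize):

fobj <- function(v) 2*v[1]*v[2] - v[2] - v[1]^2 - 2*v[2]^2

optim(c(2, 1), fobj, control = list(fnscale = -1))  # $par comes out close to (-0.5, -0.5)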

Assume that x1 = x and x2 = y; then:

max f(x, y) = 2xy - y - x^2 - 2y^2

Visualizing the Function in R:

# define the function
f <- function(x, y) 2*x*y - y - x^2 - 2*y^2

# define the grid of x and y values
x <- seq(0, 3, len = 200)

y <- seq(0, 3, len = 200)

# z holds the value of f for every pair (x, y) on the grid
z <- outer(x, y, f)

# perspective plot of the surface
persp(x, y, z, theta=30, phi=15, ticktype="detailed")

x is a sequence of 200 components (len = 200) going from 0 up to 3. Such a high sequence length allows us to produce a sufficiently precise and detailed graph. The same procedure is applied to define the variable y.

Since we have a two-variable function, we also need to define the variable z = f(x, y), which corresponds to the values of the function for each pair of x and y; outer(x, y, f) computes exactly this matrix of values.
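To locate the maximum visually, a contour plot over a grid that contains the stationary point can help (a sketch; xs, ys and zs are helper names introduced here, reusing f from above):

xs <- seq(-2, 2, len = 100)

ys <- seq(-2, 2, len = 100)

zs <- outer(xs, ys, f)  # values of f on the wider grid

contour(xs, ys, zs, nlevels = 20)  # level curves of f

points(-0.5, -0.5, pch = 19, col = 'red')  # the analytic maximizer (-0.5, -0.5)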

BY INFOBALAK









