QUOTE(CallMeBin @ Mar 25 2014, 03:13 PM)
If define by results, should at least get a B+ ? No idea actually
If that's the case, nothing to be worried about.
A good understanding of Math is required to READ several highly theoretical engineering books. When it comes to design, engineers apply the principles and let the computer solve them. Now, let's do a little test and see to what extent you can understand it. Only a small portion of simple SPM/STPM Math is required. The rest is theoretical logic.
When engineers are to improve the design of a system, or to design a new system, a performance index must be chosen and measured. A system is considered optimal when its parameters are adjusted so that the performance index reaches an extremum, commonly a minimum value. The purpose of design is to realize a system with practical components that provides the desired time-domain operating performance, x(t), with minimal error. In many designs, we are also concerned with the expenditure of the control action/energy, u(t). For example, in electric vehicles and aircraft, the expenditure of battery energy and fuel must be restricted to conserve energy for long periods of travel.
Our goal is to find an optimal control u* that minimizes the following performance index (a.k.a. cost function):

J(x,t) = ∫_t^T L(x,u) dτ

where t is the current time, T is the terminating time, x = x(t) is the current state, and L(x,u) characterizes the cost objective.
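As a concrete illustration, the performance index can be evaluated numerically for any candidate trajectory. A minimal Python sketch, where the quadratic running cost L(x,u) = x² + u² and the trajectory x(t) = e^(-t), u(t) = -e^(-t) are assumptions chosen purely for the example:

```python
import numpy as np

def performance_index(x, u, t):
    """Approximate J = ∫ L(x,u) dτ over the time grid t (trapezoidal rule).
    The running cost L(x,u) = x² + u² is chosen purely for illustration."""
    L = x**2 + u**2                                     # running cost along the trajectory
    return 0.5 * np.sum((L[1:] + L[:-1]) * np.diff(t))  # trapezoidal integration

# Assumed example trajectory: x(t) = e^(-t) driven by u(t) = -e^(-t)
t = np.linspace(0.0, 5.0, 1001)
x = np.exp(-t)
u = -np.exp(-t)
J = performance_index(x, u, t)
# Analytically, J = ∫_0^5 2e^(-2τ) dτ = 1 - e^(-10) ≈ 1.0
```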
The Principle of Optimality states that if a control u* is optimal from some initial state, then it must satisfy the following property: after any initial period, the control u* for the remaining period must also be optimal with regard to the state resulting from the control of the initial period.
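This principle is exactly what makes dynamic programming work. A hypothetical discrete analogue (a toy shortest-path graph, not from the post) shows that computing costs backward from the goal is valid precisely because every tail of an optimal path is itself optimal:

```python
# Discrete illustration of the Principle of Optimality on an assumed toy DAG.
# cost_to_go[n] plays the role of J*(x,t): the optimal cost from node n to the goal.
edges = {                 # node -> {successor: stage cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},              # goal node
}

cost_to_go = {"D": 0}                        # J* at the goal is zero
for node in ["C", "B", "A"]:                 # sweep backward (reverse topological order)
    cost_to_go[node] = min(
        cost + cost_to_go[nxt]               # stage cost + optimal cost of the tail
        for nxt, cost in edges[node].items()
    )

# The optimal path from A is A->B->C->D with cost 4, and its tail B->C->D
# (cost 3) is itself the optimal path from B -- the Principle of Optimality.
```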
Now let us consider the current time t, a future time t+Δt close to t, and the control during the interval [t, t+Δt]. Splitting the integral at t+Δt, we can rewrite J(x,t) as

J(x,t) = ∫_t^{t+Δt} L(x,u) dτ + J(x+Δx, t+Δt)

where Δx = x(t+Δt) − x(t) is the change of state over the interval.
Let J* denote the optimal (minimum) cost under the optimal control action u*. Then, by applying the Principle of Optimality, we have

J*(x,t) = min_u { ∫_t^{t+Δt} L(x,u) dτ + J*(x+Δx, t+Δt) }

where the minimization is over u(τ) on the interval [t, t+Δt].
In the above equation, the first term, ∫_t^{t+Δt} L(x,u) dτ, can be approximated as

L(x,u) Δt
, and the second term, J*(x+Δx, t+Δt), can be approximated by its first-order Taylor expansion:

J*(x+Δx, t+Δt) ≈ J*(x,t) + (∂J*/∂t) Δt + (∂J*/∂x)ᵀ Δx + O(Δt²)

where O(Δt²) denotes the remaining higher-order terms (H.O.T.) of the Taylor expansion, which can be omitted as Δt → 0.
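One can check numerically that the neglected remainder really shrinks like Δt². A minimal sketch, where the smooth stand-in function J*(x,t) = x²·e^(-t) and the rate Δx/Δt = −0.3 are assumptions chosen only for illustration:

```python
import math

# Assumed smooth stand-in for J*(x,t), for illustration only.
J = lambda x, t: x**2 * math.exp(-t)
dJdt = lambda x, t: -(x**2) * math.exp(-t)   # ∂J/∂t
dJdx = lambda x, t: 2.0 * x * math.exp(-t)   # ∂J/∂x

x0, t0, xdot = 1.0, 0.5, -0.3                # state, time, assumed rate Δx/Δt
errors = []
for dt in (0.1, 0.05, 0.025):
    dx = xdot * dt
    exact = J(x0 + dx, t0 + dt)
    first_order = J(x0, t0) + dJdt(x0, t0) * dt + dJdx(x0, t0) * dx
    errors.append(abs(exact - first_order))  # the neglected O(Δt²) remainder

# Halving Δt roughly quarters the error, confirming the remainder is O(Δt²).
```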
Therefore,

J*(x,t) = min_u { L(x,u) Δt + J*(x,t) + (∂J*/∂t) Δt + (∂J*/∂x)ᵀ Δx }
Since J*(x,t) and (∂J*/∂t) Δt are independent of u(τ) on the interval [t, t+Δt], they can be pulled out of the minimization, and the above equation can be written as

J*(x,t) = J*(x,t) + (∂J*/∂t) Δt + min_u { L(x,u) Δt + (∂J*/∂x)ᵀ Δx }
Rearranging the equation (the J*(x,t) terms cancel) gives

−(∂J*/∂t) Δt = min_u { L(x,u) Δt + (∂J*/∂x)ᵀ Δx }
Dividing both sides by Δt and letting Δt → 0, the ratio Δx/Δt tends to the state derivative ẋ = f(x,u), where f describes the system dynamics. Therefore, we obtain the following Hamilton–Jacobi–Bellman equation:

−∂J*/∂t = min_u { L(x,u) + (∂J*/∂x)ᵀ f(x,u) }
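As a sanity check, the HJB equation can be verified numerically on an assumed scalar linear-quadratic problem (my example, not from the post): dynamics ẋ = u, running cost L = x² + u², infinite horizon. With an infinite horizon ∂J*/∂t = 0, so the HJB reduces to 0 = min_u [x² + u² + (dJ*/dx)·u]; trying J*(x) = p·x² gives the condition 1 − p² = 0, i.e. p = 1 and J*(x) = x². A brute-force check over a grid of controls:

```python
import numpy as np

# Assumed scalar LQR example: ẋ = u, L = x² + u², infinite horizon.
# Stationary HJB:  0 = min_u [ x² + u² + (dJ*/dx)·u ],  with guess J*(x) = p·x².

def hjb_residual(x, p):
    """Evaluate min_u [L(x,u) + (dJ*/dx)·f(x,u)] over a control grid, J* = p·x²."""
    u = np.linspace(-5.0, 5.0, 10001)                # candidate controls
    return np.min(x**2 + u**2 + 2.0 * p * x * u)     # dJ*/dx = 2px, f(x,u) = u

for x in (0.5, 1.0, 2.0):
    print(x, hjb_residual(x, p=1.0))  # ≈ 0: J*(x) = x² satisfies the HJB
```

With the correct p = 1 the expression inside the minimum is (u + x)², whose minimum is 0 (attained at u* = −x); a wrong guess such as p = 2 leaves a nonzero residual, so the check is discriminating.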
This post has been edited by Critical_Fallacy: Mar 26 2014, 03:37 PM