المرجع الالكتروني للمعلوماتية


INTRODUCTION-EXAMPLES

date: 2-10-2016
Author : Lawrence C. Evans
Book or Source : An Introduction to Mathematical Optimal Control Theory
Page and Part : 5-10

EXAMPLE 1: CONTROL OF PRODUCTION AND CONSUMPTION.

Suppose we own, say, a factory whose output we can control. Let us begin to construct a mathematical model by setting

                            x(t) = amount of output produced at time t ≥ 0.

We suppose that we consume some fraction of our output at each time, and likewise can reinvest the remaining fraction. Let us denote

                            α(t) = fraction of output reinvested at time t ≥ 0.

This will be our control, and is subject to the obvious constraint that

                                         0 ≤ α(t) ≤ 1 for each time t ≥ 0.

Given such a control, the corresponding dynamics are provided by the ODE

                            ẋ(t) = kα(t)x(t), x(0) = x0,

the constant k > 0 modelling the growth rate of our reinvestment. Let us take as a payoff functional

                            P[α(.)] = ∫₀ᵀ (1 − α(t))x(t) dt.

The meaning is that we want to maximize our total consumption of the output, our consumption at a given time t being (1 − α(t))x(t). This model fits into our general framework for n = m = 1, once we put

           A = [0, 1], f(x, a) = kax, r(x, a) = (1 − a)x, g ≡ 0.

As we will see later, an optimal control α(.) is given by

                            α(t) = 1 if 0 ≤ t ≤ t*,   α(t) = 0 if t* < t ≤ T,

for an appropriate switching time 0 ≤ t* ≤ T. In other words, we should reinvest all the output (and therefore consume nothing) up until time t*, and afterwards, we should consume everything (and therefore reinvest nothing). The switchover time t* will have to be determined. We call α(.) a bang–bang control.
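
The bang–bang structure can be checked numerically. The sketch below (the values of k, T, and x0 are illustrative assumptions, not from the text) evaluates the payoff of the switch-at-t* policy in closed form and scans candidate switching times; the maximum sits at t* = T − 1/k, as elementary calculus predicts.

```python
import math

# Closed-form payoff of the switch-at-ts policy for Example 1.
# The parameter values k, T, x0 are illustrative, not from the text.
k, T, x0 = 1.0, 2.0, 1.0

def payoff(ts):
    """Payoff of: reinvest everything (alpha = 1) on [0, ts],
    consume everything (alpha = 0) on (ts, T].

    While alpha = 1, x(t) = x0*exp(k*t) and nothing is consumed;
    while alpha = 0, x stays at x(ts) and is consumed at rate x(ts)."""
    return x0 * math.exp(k * ts) * (T - ts)

# Scan candidate switching times; the maximum lands at ts = T - 1/k.
best = max(payoff(i * T / 1000) for i in range(1001))
print(best, payoff(T - 1 / k))
```

The closed form works because the dynamics are trivial on each arc: exponential growth while reinvesting, a constant stock while consuming.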

EXAMPLE 2: REPRODUCTIVE STRATEGIES IN SOCIAL INSECTS.

The next example is from the book Caste and Ecology in Social Insects, by G. Oster and E. O. Wilson [O-W]. We attempt to model how social insects, say a population of bees, determine the makeup of their society.

Let us write T for the length of the season, and introduce the variables

w(t) = number of workers at time t

q(t) = number of queens

α(t) = fraction of colony effort devoted to increasing work force

The control α is constrained by our requiring that

                                       0 ≤ α(t) ≤ 1.

We continue to model by introducing dynamics for the numbers of workers and the number of queens. The worker population evolves according to

                            ẇ(t) = −μw(t) + bs(t)α(t)w(t), w(0) = w0.

Here μ is a given constant (a death rate), b is another constant, and s(t) is the known rate at which each worker contributes to the bee economy.

We suppose also that the population of queens changes according to

                            q̇(t) = −νq(t) + c(1 − α(t))s(t)w(t), q(0) = q0,

for constants ν and c.

Our goal, or rather the bees’, is to maximize the number of queens at time T:

                                P[α(.)] = q(T).

So in terms of our general notation, we have x(t) = (w(t), q(t))ᵀ and x0 = (w0, q0)ᵀ.

We are taking the running payoff to be r ≡ 0, and the terminal payoff g(w, q) = q.

The answer will again turn out to be a bang–bang control, as we will explain later.
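
A forward-Euler simulation makes the trade-off concrete. Every numeric value below (the rates μ, ν, b, c, a constant s, the season length, the initial colony) is an illustrative assumption; only the form of the worker and queen dynamics comes from this example.

```python
# Forward-Euler sketch of the worker/queen dynamics
#   w' = -mu*w + b*s*alpha*w,   q' = -nu*q + c*(1 - alpha)*s*w,
# under a bang-bang control. All numeric values are illustrative.
mu, nu, b, c, s = 0.1, 0.1, 0.5, 0.5, 1.0
w0, q0, T, N = 10.0, 1.0, 10.0, 10_000
dt = T / N

def queens_at_T(ts):
    """q(T) when alpha = 1 (grow the work force) on [0, ts), 0 afterwards."""
    w, q = w0, q0
    for i in range(N):
        a = 1.0 if i * dt < ts else 0.0
        w_new = w + dt * (-mu * w + b * s * a * w)
        q_new = q + dt * (-nu * q + c * (1.0 - a) * s * w)
        w, q = w_new, q_new
    return q

# Spending the whole season on workers leaves almost no queens at
# season's end; switching mid-season does far better.
print(queens_at_T(T), queens_at_T(0.5 * T))
```

The simulation already hints at the bang–bang answer: build the work force early, then divert all effort to queens.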

EXAMPLE 3: A PENDULUM.

We look next at a hanging pendulum, for which

                          θ(t) = angle at time t.

If there is no external force, then we have the equation of motion

                            θ̈(t) + λθ̇(t) + ω²θ(t) = 0,

the solution of which is a damped oscillation, provided λ > 0.

Now let α(.) denote an applied torque, subject to the physical constraint that

                         |α| ≤ 1.

Our dynamics now become

                            θ̈(t) + λθ̇(t) + ω²θ(t) = α(t).

Define

                            x1(t) = θ(t), x2(t) = θ̇(t).

Then we can write the evolution as the system

                            ẋ1(t) = x2(t),
                            ẋ2(t) = −λx2(t) − ω²x1(t) + α(t).

We introduce as well

                            P[α(.)] = −τ,

for

                            τ = first time that θ(τ) = θ̇(τ) = 0.

We want to maximize P[.], meaning that we want to minimize the time it takes to bring the pendulum to rest.

Observe that this problem does not quite fall within the general framework described in 1.1, since the terminal time is not fixed, but rather depends upon the control. This is called a fixed endpoint, free time problem.
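
The first-order system for the pendulum is easy to integrate directly. In the sketch below the constants λ and ω, the initial displacement, and the 20-unit time horizon are illustrative assumptions; it checks that with no torque the damped oscillation dies out, and that a constant unit torque settles the angle near its static equilibrium 1/ω².

```python
# Forward-Euler sketch of the pendulum system
#   x1' = x2,  x2' = -lam*x2 - omega**2*x1 + alpha(t).
# lam and omega are illustrative constants, not values from the text.
lam, omega = 0.5, 2.0
dt, steps = 1e-3, 20_000          # integrate 20 time units

def simulate(alpha):
    """Integrate from theta(0) = 1, theta'(0) = 0; return the final angle."""
    x1, x2 = 1.0, 0.0
    for i in range(steps):
        a = alpha(i * dt)
        x1, x2 = x1 + dt * x2, x2 + dt * (-lam * x2 - omega**2 * x1 + a)
    return x1

theta_rest = simulate(lambda t: 0.0)    # no torque: oscillation decays to rest
theta_forced = simulate(lambda t: 1.0)  # constant torque: settles near 1/omega**2
print(theta_rest, theta_forced)
```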

EXAMPLE 4: A MOON LANDER

This model asks us to bring a spacecraft to a soft landing on the lunar surface, using the least amount of fuel.

We introduce the notation

h(t) = height at time t

v(t) = velocity = ḣ(t)

m(t) = mass of spacecraft (changing as fuel is burned)

α(t) = thrust at time t

We assume that

                                              0 ≤ α(t) ≤ 1,

and Newton’s law tells us that

                            m(t)v̇(t) = −gm(t) + α(t),

the right hand side being the difference of the gravitational force and the thrust of the rocket. This system is modeled by the ODEs

                            v̇(t) = −g + α(t)/m(t),
                            ḣ(t) = v(t),
                            ṁ(t) = −kα(t),

for a constant k > 0 (the rate at which fuel is burned per unit thrust). We summarize these equations in the form

                            ẋ(t) = f(x(t), α(t)), x(t) = (v(t), h(t), m(t)).

We want to minimize the amount of fuel used up, that is, to maximize the amount remaining once we have landed. Thus

P[α(.)] = m(τ),

where

τ denotes the first time that h(τ) = v(τ) = 0.

This is a variable endpoint problem, since the final time is not given in advance.

We have also the extra constraints

                                           h(t) ≥ 0, m(t) ≥ 0.
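
The coupling between thrust and mass is the interesting feature here, and a short Euler sketch shows it. The constants g and k, the initial state, and the one-second horizon are illustrative assumptions; the comparison shows full thrust slowing the descent at the cost of burned fuel.

```python
# Forward-Euler sketch of the moon-lander ODEs
#   h' = v,  v' = -g + alpha/m,  m' = -k*alpha,  0 <= alpha <= 1.
# g, k, the initial state, and the horizon are illustrative assumptions.
g, k = 1.0, 0.1
dt = 1e-3

def descend(alpha, t_end=1.0):
    """Integrate with a constant thrust level alpha; stop early on touchdown."""
    h, v, m = 10.0, 0.0, 1.0
    t = 0.0
    while t < t_end and h > 0.0:
        h += dt * v
        v += dt * (-g + alpha / m)
        m += dt * (-k * alpha)
        t += dt
    return h, v, m

_, v_free, _ = descend(alpha=0.0)       # free fall
_, v_burn, m_burn = descend(alpha=1.0)  # full thrust
print(v_free, v_burn, m_burn)
```

Note that as m(t) falls, the same thrust produces more acceleration, which is why the constraint m(t) ≥ 0 matters.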

EXAMPLE 5: ROCKET RAILROAD CAR.

Imagine a railroad car powered by rocket engines on each side. We introduce the variables

q(t) = position at time t

v(t) = q̇(t) = velocity at time t

α(t) = thrust from rockets,

where

                                     −1 ≤ α(t) ≤ 1,

the sign depending upon which engine is firing.

We want to figure out how to fire the rockets, so as to arrive at the origin 0 with zero velocity in a minimum amount of time. Assuming the car has mass m, the law of motion is

                            mq̈(t) = α(t).

We rewrite by setting x(t) = (q(t), v(t))ᵀ. Then

                            ẋ1(t) = x2(t),
                            ẋ2(t) = α(t)/m.

Since our goal is to steer to the origin (0, 0) in minimum time, we take

                            P[α(.)] = −τ,

for

                            τ = first time that q(τ) = v(τ) = 0.
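
For the special case of starting at rest at q0 > 0, the time-optimal policy can be written down and checked by hand: fire full reverse thrust, then full forward thrust. Taking m = 1 (an assumption made for this sketch, with an illustrative q0), elementary kinematics gives switch time √q0 and total time 2√q0, and the simulation below confirms arrival near the origin at rest.

```python
import math

# Bang-bang steering for q'' = alpha(t) with |alpha| <= 1, taking m = 1
# (an assumption for this sketch; q0 is an illustrative starting position).
# From rest at q0 > 0: full reverse thrust until t = sqrt(q0), then full
# forward thrust, arriving at the origin at rest at t = 2*sqrt(q0).
q0 = 4.0
t_switch = math.sqrt(q0)

dt, t = 1e-4, 0.0
q, v = q0, 0.0
while t < 2.0 * t_switch:
    a = -1.0 if t < t_switch else 1.0
    q += dt * v
    v += dt * a
    t += dt
print(q, v)   # both end up near zero
```

Starting with nonzero velocity changes the switch time but not the bang–bang character of the solution.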

References

[B-CD] M. Bardi and I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhauser, 1997.

[B-J] N. Barron and R. Jensen, The Pontryagin maximum principle from dynamic programming and viscosity solutions to first-order partial differential equations, Transactions AMS 298 (1986), 635–641.

[C1] F. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, 1983.

[C2] F. Clarke, Methods of Dynamic and Nonsmooth Optimization, CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, 1989.

[Cr] B. D. Craven, Control and Optimization, Chapman & Hall, 1995.

[E] L. C. Evans, An Introduction to Stochastic Differential Equations, lecture notes available at http://math.berkeley.edu/~evans/SDE.course.pdf.

[F-R] W. Fleming and R. Rishel, Deterministic and Stochastic Optimal Control, Springer, 1975.

[F-S] W. Fleming and M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer, 1993.

[H] L. Hocking, Optimal Control: An Introduction to the Theory with Applications, Oxford University Press, 1991.

[I] R. Isaacs, Differential Games: A mathematical theory with applications to warfare and pursuit, control and optimization, Wiley, 1965 (reprinted by Dover in 1999).

[K] G. Knowles, An Introduction to Applied Optimal Control, Academic Press, 1981.

[Kr] N. V. Krylov, Controlled Diffusion Processes, Springer, 1980.

[L-M] E. B. Lee and L. Markus, Foundations of Optimal Control Theory, Wiley, 1967.

[L] J. Lewin, Differential Games: Theory and methods for solving game problems with singular surfaces, Springer, 1994.

[M-S] J. Macki and A. Strauss, Introduction to Optimal Control Theory, Springer, 1982.

[O] B. K. Oksendal, Stochastic Differential Equations: An Introduction with Applications, 4th ed., Springer, 1995.

[O-W] G. Oster and E. O. Wilson, Caste and Ecology in Social Insects, Princeton University Press.

[P-B-G-M] L. S. Pontryagin, V. G. Boltyanski, R. S. Gamkrelidze and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, Interscience, 1962.

[T] W. J. Terrell, Some fundamental control theory I: Controllability, observability, and duality, American Math Monthly 106 (1999), 705–719.



