Simulation-based Optimization: Parametric Optimization Techniques and Reinforcement Learning

Author:   Abhijit Gosavi
Publisher:   Kluwer Academic Publishers
Edition:   2003 ed.
Volume:   v. 25
ISBN:   9781402074547
Pages:   554
Publication Date:   30 June 2003
Format:   Hardback
Availability:   In Print

Overview

This text introduces the evolving area of simulation-based optimization. Cutting-edge work in computational operations research, including non-linear programming (simultaneous perturbation), dynamic programming (reinforcement learning), and game theory (learning automata), has made it possible to use simulation in conjunction with optimization techniques. As a result, this research has given simulation added dimensions and power that it did not have in the recent past.

The book's objective is two-fold: it examines the mathematical governing principles of simulation-based optimization, giving the reader the ability to model relevant real-life problems with these techniques, and it outlines the computational technology underlying these methods. Taken together, these two aspects demonstrate that the mathematical and computational methods discussed in the book do work. Broadly speaking, the book has two parts: parametric (static) optimization and control (dynamic) optimization. Some of the book's special features are: an accessible introduction to reinforcement learning and parametric-optimization techniques; a step-by-step description of several algorithms of simulation-based optimization; a clear and simple introduction to the methodology of neural networks; a gentle introduction to convergence analysis of some of the methods enumerated above; and computer programs for many algorithms of simulation-based optimization.
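
To make the flavor of the parametric (static) part concrete: simultaneous perturbation estimates a search direction from only two noisy simulation runs per iteration, regardless of the number of decision variables. The Python sketch below is purely illustrative and is not code from the book; the toy objective noisy_quadratic, the gain sequences, and all parameter values are assumptions chosen for the example.

import numpy as np

def spsa_minimize(f, theta0, iterations=200, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimize a noisy objective f with simultaneous perturbation
    stochastic approximation: two simulation runs per iteration,
    independent of the number of decision variables."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iterations):
        ak = a / (k + 1) ** alpha                            # decaying step size
        ck = c / (k + 1) ** gamma                            # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)    # Bernoulli +/-1 perturbation directions
        y_plus = f(theta + ck * delta)                       # noisy run at the perturbed point
        y_minus = f(theta - ck * delta)                      # noisy run at the opposite point
        grad_hat = (y_plus - y_minus) / (2.0 * ck * delta)   # simultaneous-perturbation gradient estimate
        theta = theta - ak * grad_hat                        # stochastic-approximation step
    return theta

# Toy usage: a noisy quadratic standing in for a simulation response (assumed, for illustration only).
rng = np.random.default_rng(1)
noisy_quadratic = lambda x: float(np.sum((x - 2.0) ** 2)) + 0.01 * rng.normal()
print(spsa_minimize(noisy_quadratic, theta0=[0.0, 0.0]))     # converges toward [2, 2]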

Full Product Details

Author:   Abhijit Gosavi
Publisher:   Kluwer Academic Publishers
Imprint:   Kluwer Academic Publishers
Edition:   2003 ed.
Volume:   v. 25
Dimensions:   Width: 15.60cm , Height: 3.10cm , Length: 23.40cm
Weight:   0.995kg
ISBN:   9781402074547
ISBN 10:   1402074549
Pages:   554
Publication Date:   30 June 2003
Audience:   College/higher education, Professional and scholarly, Undergraduate, Postgraduate, Research & Scholarly
Format:   Hardback
Publisher's Status:   Out of Print
Availability:   In Print

Table of Contents

List of Figures. List of Tables. Acknowledgements. Preface.
1. Background. 1.1. Why this book was written. 1.2. Simulation-based optimization and modern times. 1.3. How this book is organized.
2. Notation. 2.1. Chapter Overview. 2.2. Some basic conventions. 2.3. Vector notation. 2.4. Notation for matrices. 2.5. Notation for n-tuples. 2.6. Notation for sets. 2.7. Notation for sequences. 2.8. Notation for transformations. 2.9. Max, min and arg max. 2.10. Acronyms and abbreviations.
3. Probability theory: a refresher. 3.1. Overview of this chapter. 3.2. Laws of probability. 3.3. Probability distributions. 3.4. Expected value of a random variable. 3.5. Standard deviation of a random variable. 3.6. Limit theorems. 3.7. Review questions.
4. Basic concepts underlying simulation. 4.1. Chapter overview. 4.2. Introductions. 4.3. Models. 4.4. Simulation modeling of random systems. 4.5. Concluding remarks. 4.6. Historical remarks. 4.7. Review questions.
5. Simulation optimization: an overview. 5.1. Chapter overview. 5.2. Stochastic parametric optimization. 5.3. Stochastic control optimization. 5.4. Historical remarks. 5.5. Review questions.
6. Response surfaces and neural nets. 6.1. Chapter overview. 6.2. RSM: an overview. 6.3. RSM: details. 6.4. Neuro-response surface methods. 6.5. Concluding remarks. 6.6. Bibliographic remarks. 6.7. Review questions.
7. Parametric optimization. 7.1. Chapter overview. 7.2. Continuous optimization. 7.3. Discrete optimization. 7.4. Hybrid solution spaces. 7.5. Concluding remarks. 7.6. Bibliographic remarks. 7.7. Review questions.
8. Dynamic programming. 8.1. Chapter overview. 8.2. Stochastic processes. 8.3. Markov processes, Markov chains and semi-Markov processes. 8.4. Markov decision problems. 8.5. How to solve an MDP using exhaustive enumeration. 8.6. Dynamic programming for average reward. 8.7. Dynamic programming and discounted reward. 8.8. The Bellman equation: an intuitive perspective. 8.9. Semi-Markov decision problems. 8.10. Modified policy iteration. 8.11. Miscellaneous topics related to MDPs and SMDPs. 8.12. Conclusions. 8.13. Bibliographic remarks. 8.14. Review questions.
9. Reinforcement learning. 9.1. Chapter overview. 9.2. The need for reinforcement learning. 9.3. Generating the TPM through straightforward counting. 9.4. Reinforcement learning: fundamentals. 9.5. Discounted reward reinforcement learning. 9.6. Average reward reinforcement learning. 9.7. Semi-Markov decision problems and RL. 9.8. RL algorithms and their DP counterparts. 9.9. Act
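
Chapters 8 and 9 move from classical dynamic programming to reinforcement learning, where the value of each state-action pair is learned from simulated transitions rather than from an explicit transition probability matrix. The minimal tabular Q-learning sketch below illustrates that idea; the simulator interface simulate_step(state, action) -> (next_state, reward), the toy two-state example, and all learning parameters are illustrative assumptions, not code from the book.

import random
from collections import defaultdict

def q_learning(simulate_step, states, actions, episodes=1000, horizon=100,
               start_state=0, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning for a discounted-reward MDP whose dynamics are
    available only through a simulator, not as a transition matrix."""
    random.seed(seed)
    Q = defaultdict(float)                              # Q[(state, action)] value estimates
    for _ in range(episodes):
        s = start_state
        for _ in range(horizon):
            if random.random() < epsilon:               # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s_next, r = simulate_step(s, a)             # one simulated transition
            best_next = max(Q[(s_next, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # sampled Bellman update
            s = s_next
    return {s: max(actions, key=lambda a_: Q[(s, a_)]) for s in states}   # greedy policy

# Toy usage: a hypothetical two-state simulator where switching states earns a reward of 1.
def toy_simulator(s, a):
    return (1 - s, 1.0) if a == 1 else (s, 0.0)

print(q_learning(toy_simulator, states=[0, 1], actions=[0, 1]))   # expected: {0: 1, 1: 1}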
