Optimal learning
Learn the science of collecting information to make effective decisions. Everyday decisions are made without the benefit of accurate information. Optimal Learning develops the needed principles for gathering information to make decisions, especially when collecting information is time-consuming and expensive.
| Main Authors: | Powell, Warren B.; Ryzhov, Ilya Olegovich |
|---|---|
| Format: | E-book; Book |
| Language: | English |
| Published: | Hoboken, NJ : Wiley, 2012 (John Wiley & Sons, Incorporated; Wiley-Blackwell) |
| Edition: | 1 |
| Series: | Wiley series in probability and statistics |
| Subjects: | Artificial intelligence; Machine learning |
| ISBN: | 0470596694; 9780470596692; 9781118309858; 1118309855 |
| Online Access: | Get full text |
| Abstract | Learn the science of collecting information to make effective decisions. Everyday decisions are made without the benefit of accurate information. Optimal Learning develops the needed principles for gathering information to make decisions, especially when collecting information is time-consuming and expensive. Designed for readers with an elementary background in probability and statistics, the book presents effective and practical policies illustrated in a wide range of applications, from energy, homeland security, and transportation to engineering, health, and business. This book covers the fundamental dimensions of a learning problem and presents a simple method for testing and comparing policies for learning. Special attention is given to the knowledge gradient policy and its use with a wide range of belief models, including lookup table and parametric models, for both online and offline problems. Three sections develop ideas with increasing levels of sophistication: "Fundamentals" explores foundational topics, including adaptive learning, ranking and selection, the knowledge gradient, and bandit problems; "Extensions and Applications" covers linear belief models, subset selection models, scalar function optimization, optimal bidding, and stopping problems; "Advanced Topics" explores complex methods, including simulation optimization, active learning in mathematical programming, and optimal continuous measurements. Each chapter identifies a specific learning problem, presents the related practical algorithms for implementation, and concludes with numerous exercises. A related website features additional applications and downloadable software, including MATLAB code and the Optimal Learning Calculator, a spreadsheet-based package that provides an introduction to learning and a variety of policies for learning. *(An illustrative sketch of the knowledge gradient computation follows this record.)* |
|---|---|
| Author | Powell, Warren B.; Ryzhov, Ilya Olegovich |
| DEWEY | 006.3/1 |
| DOI | 10.1002/9781118309858 |
| Discipline | Computer Science |
| EISBN | 9781118309827 1118309820 9781118309841 1118309847 |
| LCCN | 2011047629 |
| LCCallNum | Q325.5 .P69 2012 |
| Notes | Includes bibliographical references (p. 366-379) and index |
| OCLC | 1546814360 795912947 |
| PQID | EBC822054 |
| PageCount | 384 |
| SubjectTerms | Artificial intelligence; Machine learning |
| TableOfContents | Intro -- Optimal Learning -- CONTENTS -- Preface -- Acknowledgments
1 The Challenges of Learning -- 1.1 Learning the Best Path -- 1.2 Areas of Application -- 1.3 Major Problem Classes -- 1.4 The Different Types of Learning -- 1.5 Learning from Different Communities -- 1.6 Information Collection Using Decision Trees -- 1.6.1 A Basic Decision Tree -- 1.6.2 Decision Tree for Offline Learning -- 1.6.3 Decision Tree for Online Learning -- 1.6.4 Discussion -- 1.7 Website and Downloadable Software -- 1.8 Goals of this Book -- Problems
2 Adaptive Learning -- 2.1 The Frequentist View -- 2.2 The Bayesian View -- 2.2.1 The Updating Equations for Independent Beliefs -- 2.2.2 The Expected Value of Information -- 2.2.3 Updating for Correlated Normal Priors -- 2.2.4 Bayesian Updating with an Uninformative Prior -- 2.3 Updating for Non-Gaussian Priors -- 2.3.1 The Gamma-Exponential Model -- 2.3.2 The Gamma-Poisson Model -- 2.3.3 The Pareto-Uniform Model -- 2.3.4 Models for Learning Probabilities* -- 2.3.5 Learning an Unknown Variance* -- 2.4 Monte Carlo Simulation -- 2.5 Why Does It Work?* -- 2.5.1 Derivation of σ -- 2.5.2 Derivation of Bayesian Updating Equations for Independent Beliefs -- 2.6 Bibliographic Notes -- Problems
3 The Economics of Information -- 3.1 An Elementary Information Problem -- 3.2 The Marginal Value of Information -- 3.3 An Information Acquisition Problem -- 3.4 Bibliographic Notes -- Problems
4 Ranking and Selection -- 4.1 The Model -- 4.2 Measurement Policies -- 4.2.1 Deterministic Versus Sequential Policies -- 4.2.2 Optimal Sequential Policies -- 4.2.3 Heuristic Policies -- 4.3 Evaluating Policies -- 4.4 More Advanced Topics* -- 4.4.1 An Alternative Representation of the Probability Space -- 4.4.2 Equivalence of Using True Means and Sample Estimates -- 4.5 Bibliographic Notes -- Problems
5 The Knowledge Gradient -- 5.1 The Knowledge Gradient for Independent Beliefs -- 5.1.1 Computation -- 5.1.2 Some Properties of the Knowledge Gradient -- 5.1.3 The Four Distributions of Learning -- 5.2 The Value of Information and the S-Curve Effect -- 5.3 Knowledge Gradient for Correlated Beliefs -- 5.4 Anticipatory Versus Experiential Learning -- 5.5 The Knowledge Gradient for Some Non-Gaussian Distributions -- 5.5.1 The Gamma-Exponential Model -- 5.5.2 The Gamma-Poisson Model -- 5.5.3 The Pareto-Uniform Model -- 5.5.4 The Beta-Bernoulli Model -- 5.5.5 Discussion -- 5.6 Relatives of the Knowledge Gradient -- 5.6.1 Expected Improvement -- 5.6.2 Linear Loss* -- 5.7 The Problem of Priors -- 5.8 Discussion -- 5.9 Why Does It Work?* -- 5.9.1 Derivation of the Knowledge Gradient Formula -- 5.10 Bibliographic Notes -- Problems
6 Bandit Problems -- 6.1 The Theory and Practice of Gittins Indices -- 6.1.1 Gittins Indices in the Beta-Bernoulli Model -- 6.1.2 Gittins Indices in the Normal-Normal Model -- 6.1.3 Approximating Gittins Indices -- 6.2 Variations of Bandit Problems -- 6.3 Upper Confidence Bounding -- 6.4 The Knowledge Gradient for Bandit Problems -- 6.4.1 The Basic Idea -- 6.4.2 Some Experimental Comparisons -- 6.4.3 Non-Normal Models -- 6.5 Bibliographic Notes -- Problems
7 Elements of a Learning Problem -- 7.1 The States of our System -- 7.2 Types of Decisions -- 7.3 Exogenous Information -- 7.4 Transition Functions -- 7.5 Objective Functions -- 7.5.1 Designing Versus Controlling -- 7.5.2 Measurement Costs -- 7.5.3 Objectives -- 7.6 Evaluating Policies -- 7.7 Discussion -- 7.8 Bibliographic Notes -- Problems
8 Linear Belief Models -- 8.1 Applications -- 8.1.1 Maximizing Ad Clicks -- 8.1.2 Dynamic Pricing -- 8.1.3 Housing Loans -- 8.1.4 Optimizing Dose Response -- 8.2 A Brief Review of Linear Regression -- 8.2.1 The Normal Equations -- 8.2.2 Recursive Least Squares -- 8.2.3 A Bayesian Interpretation -- 8.2.4 Generating a Prior -- 8.3 The Knowledge Gradient for a Linear Model -- 8.4 Application to Drug Discovery -- 8.5 Application to Dynamic Pricing -- 8.6 Bibliographic Notes -- Problems
9 Subset Selection Problems -- 9.1 Applications -- 9.2 Choosing a Subset Using Ranking and Selection -- 9.2.1 Setting Prior Means and Variances -- 9.2.2 Two Strategies for Setting Prior Covariances -- 9.3 Larger Sets -- 9.3.1 Using Simulation to Reduce the Problem Size -- 9.3.2 Computational Issues -- 9.3.3 Experiments -- 9.4 Very Large Sets -- 9.5 Bibliographic Notes -- Problems
10 Optimizing a Scalar Function -- 10.1 Deterministic Measurements -- 10.2 Stochastic Measurements -- 10.2.1 The Model -- 10.2.2 Finding the Posterior Distribution -- 10.2.3 Choosing the Measurement -- 10.2.4 Discussion -- 10.3 Bibliographic Notes -- Problems
11 Optimal Bidding -- 11.1 Modeling Customer Demand -- 11.1.1 Some Valuation Models -- 11.1.2 The Logit Model -- 11.2 Bayesian Modeling for Dynamic Pricing -- 11.2.1 A Conjugate Prior for Choosing Between Two Demand Curves -- 11.2.2 Moment Matching for Nonconjugate Problems -- 11.2.3 An Approximation for the Logit Model -- 11.3 Bidding Strategies -- 11.3.1 An Idea From Multi-Armed Bandits -- 11.3.2 Bayes-Greedy Bidding -- 11.3.3 Numerical Illustrations -- 11.4 Why Does It Work?* -- 11.4.1 Moment Matching for Pareto Prior -- 11.4.2 Approximating the Logistic Expectation -- 11.5 Bibliographic Notes -- Problems
12 Stopping Problems -- 12.1 Sequential Probability Ratio Test -- 12.2 The Secretary Problem -- 12.2.1 Setup -- 12.2.2 Solution -- 12.3 Bibliographic Notes -- Problems
13 Active Learning in Statistics -- 13.1 Deterministic Policies -- 13.2 Sequential Policies for Classification -- 13.2.1 Uncertainty Sampling -- 13.2.2 Query by Committee -- 13.2.3 Expected Error Reduction -- 13.3 A Variance-Minimizing Policy -- 13.4 Mixtures of Gaussians -- 13.4.1 Estimating Parameters -- 13.4.2 Active Learning -- 13.5 Bibliographic Notes
14 Simulation Optimization -- 14.1 Indifference Zone Selection -- 14.1.1 Batch Procedures -- 14.1.2 Sequential Procedures -- 14.1.3 The 0-1 Procedure: Connection to Linear Loss -- 14.2 Optimal Computing Budget Allocation -- 14.2.1 Indifference-Zone Version -- 14.2.2 Linear Loss Version -- 14.2.3 When Does It Work? -- 14.3 Model-Based Simulated Annealing -- 14.4 Other Areas of Simulation Optimization -- 14.5 Bibliographic Notes
15 Learning in Mathematical Programming -- 15.1 Applications -- 15.1.1 Piloting a Hot Air Balloon -- 15.1.2 Optimizing a Portfolio -- 15.1.3 Network Problems -- 15.2 Learning on Graphs -- 15.3 Alternative Edge Selection Policies -- 15.4 Learning Costs for Linear Programs* -- 15.5 Bibliographic Notes
16 Optimizing Over Continuous Measurements -- 16.1 The Belief Model -- 16.1.1 Updating Equations -- 16.1.2 Parameter Estimation -- 16.2 Sequential Kriging Optimization -- 16.3 The Knowledge Gradient for Continuous Parameters* -- 16.3.1 Maximizing the Knowledge Gradient -- 16.3.2 Approximating the Knowledge Gradient -- 16.3.3 The Gradient of the Knowledge Gradient -- 16.3.4 Maximizing the Knowledge Gradient -- 16.3.5 The KGCP Policy -- 16.4 Efficient Global Optimization -- 16.5 Experiments -- 16.6 Extension to Higher-Dimensional Problems -- 16.7 Bibliographic Notes
17 Learning With a Physical State -- 17.1 Introduction to Dynamic Programming -- 17.1.1 Approximate Dynamic Programming -- 17.1.2 The Exploration vs. Exploitation Problem -- 17.1.3 Discussion -- 17.2 Some Heuristic Learning Policies -- 17.3 The Local Bandit Approximation -- 17.4 The Knowledge Gradient in Dynamic Programming -- 17.4.1 Generalized Learning Using Basis Functions -- 17.4.2 The Knowledge Gradient -- 17.4.3 Experiments -- 17.5 An Expected Improvement Policy -- 17.6 Bibliographic Notes -- Index |
| URI | https://elibro.net/es/ereader/elibrodemo/182445 https://cir.nii.ac.jp/crid/1130282273112677376 https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=822054 https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9781118309827&uid=none https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9781118309841&uid=none |
| Volume | 841 |
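The abstract and table of contents above single out the knowledge gradient policy for ranking and selection with independent normal (lookup table) beliefs. As a rough illustration of what such a policy computes, here is a minimal Python sketch; it is not the book's own MATLAB software or the Optimal Learning Calculator, and the function name `knowledge_gradient` and the toy priors below are assumptions made for this example.

```python
# A minimal sketch of the knowledge gradient policy for independent normal
# beliefs (ranking and selection). Illustrative only; not the book's software.
import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma, sigma_w):
    """Score each alternative by the value of measuring it once.

    mu      : current posterior means of the alternatives
    sigma   : current posterior standard deviations
    sigma_w : standard deviation of the measurement noise
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)

    # Predictive change in the posterior standard deviation after one measurement.
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + sigma_w**2)

    # Distance (scaled by sigma_tilde) to the best of the other alternatives.
    best_other = np.array([np.max(np.delete(mu, x)) for x in range(len(mu))])
    zeta = -np.abs(mu - best_other) / sigma_tilde

    # Knowledge gradient: expected one-step improvement in the best posterior mean.
    return sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))

# Toy example (assumed numbers): three alternatives; measure the highest KG score.
mu0 = [1.0, 1.2, 0.8]
sigma0 = [0.5, 0.4, 0.9]
kg = knowledge_gradient(mu0, sigma0, sigma_w=1.0)
print("KG values:", kg, "-> measure alternative", int(np.argmax(kg)))
```

The policy is greedy with respect to the expected one-step improvement in the best posterior mean: after measuring the chosen alternative, its belief is updated and the scores are recomputed before the next measurement.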

