Reinforcement Learning for Demand Forecasting and Customized Services

Authors
Sini Raj Pulari,T. S. Murugesh,Shriram K. Vasudevan,Akshay Bhuvaneswari Ramakrishnan
Identifier
DOI:10.1002/9781394214068.ch6
Abstract

Chapter 6: Reinforcement Learning for Demand Forecasting and Customized Services

Authors:
- Sini Raj Pulari, Bahrain Polytechnic, ISA Town, Bahrain
- T. S. Murugesh, Department of Electronics and Communication Engineering, Government College of Engineering Srirangam, Tiruchirappalli, Tamil Nadu, India (on deputation from Annamalai University, Department of Electronics and Instrumentation Engineering, Faculty of Engineering & Technology, India)
- Shriram K. Vasudevan, Lead Technical Evangelist (Asia Pacific and Japan), Intel India Pvt. Ltd., Bengaluru, Karnataka, India
- Akshay Bhuvaneswari Ramakrishnan, Department of Computer Science and Engineering, SASTRA Deemed to be University, Thanjavur Campus, Thanjavur, Tamil Nadu, India

Book Editors:
- R. Elakkiya, Department of Computer Science, Birla Institute of Technology & Science Pilani, Dubai Campus, UAE
- V. Subramaniyaswamy, School of Computing, SASTRA Deemed University, Thanjavur, India

First published: 12 April 2024, in Cognitive Analytics and Reinforcement Learning: Theories, Techniques and Applications. https://doi.org/10.1002/9781394214068.ch6

Summary

Reinforcement Learning (RL) is a powerful machine learning paradigm that has shown promise in areas such as demand forecasting and personalized services. This chapter investigates how RL-based strategies can improve the accuracy of demand forecasting and enable businesses to provide individualized services to each of their clients. The principles of RL, its application to demand forecasting, and the implementation of individualized services are covered extensively as key components of the topic. A real-life case study of a large retail chain, carried out by researchers at the University of California, Berkeley, highlights the practical benefits of applying RL to optimize inventory management and provide individualized product recommendations. By continuously learning and adapting to new circumstances, RL enables organizations to respond dynamically to shifting market conditions and customer preferences, giving them an advantage over their rivals. This chapter sheds light on the transformative influence of RL and presents a data-driven strategy to meet the demands of modern business environments.

References

1. Wang, P., Chan, C.Y., de La Fortelle, A., A reinforcement learning based approach for automated lane change maneuvers, in: 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, pp. 1379–1384, 2018. doi:10.1109/IVS.2018.8500556
2. Chien, C.F., Lin, Y.S., Lin, S.K., Deep reinforcement learning for selecting demand forecast models to empower Industry 3.5 and an empirical study for a semiconductor component distributor. Int. J. Prod. Res., 58, 9, 2784–2804, 2020. doi:10.1080/00207543.2020.1733125
3. Ding, Z., Huang, Y., Yuan, H., Dong, H., Introduction to reinforcement learning, in: Deep Reinforcement Learning: Fundamentals, Research and Applications, pp. 47–123, Springer, Singapore, 2020. doi:10.1007/978-981-15-4095-0_2
4. Shin, M., Ryu, K., Jung, M., Reinforcement learning approach to goal-regulation in a self-evolutionary manufacturing system. Expert Syst. Appl., 39, 10, 8736–8743, 2012. doi:10.1016/j.eswa.2012.01.207
5. Oh, J., Hessel, M., Czarnecki, W.M., Xu, Z., van Hasselt, H.P., Singh, S., Silver, D., Discovering reinforcement learning algorithms. Adv. Neural Inf. Process. Syst., 33, 1060–1070, 2020.
6. Gupta, A., Mendonca, R., Liu, Y., Abbeel, P., Levine, S., Meta-reinforcement learning of structured exploration strategies. Adv. Neural Inf. Process. Syst., 31, 1–10, 2018.
7. Ishii, S., Yoshida, W., Yoshimoto, J., Control of exploitation–exploration meta-parameter in reinforcement learning. Neural Networks, 15, 4-6, 665–687, 2002. doi:10.1016/S0893-6080(02)00056-4
8. Huang, W., Li, S., Wang, S., Li, H., An improved adaptive service function chain mapping method based on deep reinforcement learning. Electronics, 12, 6, 1307, 2023. doi:10.3390/electronics12061307
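The abstract's central idea, using RL to optimize inventory decisions under uncertain demand, can be illustrated with a minimal tabular Q-learning sketch. Note this is an illustrative assumption, not the chapter's actual method or the Berkeley case study: the state (stock level), the uniform demand model, the reward (revenue minus holding and stockout costs), and all parameters are invented for demonstration.

```python
import random

# Minimal tabular Q-learning sketch for single-item inventory control.
# All numbers here (capacity, prices, demand range) are illustrative
# assumptions, not values from the chapter.

MAX_STOCK = 10          # shelf capacity
ACTIONS = range(6)      # units to reorder each period (0..5)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Q-table over (stock level, reorder quantity) pairs
Q = {(s, a): 0.0 for s in range(MAX_STOCK + 1) for a in ACTIONS}

def step(stock, order):
    """One sales period: restock, observe random demand, compute reward."""
    stock = min(stock + order, MAX_STOCK)
    demand = random.randint(0, 7)            # assumed demand distribution
    sold = min(stock, demand)
    # revenue per sale minus holding cost and stockout penalty
    reward = 5.0 * sold - 1.0 * (stock - sold) - 2.0 * max(demand - stock, 0)
    return stock - sold, reward

random.seed(0)
stock = 5
for _ in range(20000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.choice(list(ACTIONS))
    else:
        a = max(ACTIONS, key=lambda x: Q[(stock, x)])
    next_stock, r = step(stock, a)
    # standard Q-learning update toward reward plus discounted best next value
    best_next = max(Q[(next_stock, x)] for x in ACTIONS)
    Q[(stock, a)] += ALPHA * (r + GAMMA * best_next - Q[(stock, a)])
    stock = next_stock

# Greedy policy: learned reorder quantity for each stock level
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(MAX_STOCK + 1)}
print(policy)
```

Because the agent learns from simulated sales periods rather than a fixed forecast, the same loop keeps adapting if the demand distribution drifts, which is the "continuously learn and adapt" property the summary attributes to RL.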