Reinforcement Learning for Demand Forecasting and Customized Services

Authors
Sini Raj Pulari,T. S. Murugesh,Shriram K. Vasudevan,Akshay Bhuvaneswari Ramakrishnan
Identifier
DOI: 10.1002/9781394214068.ch6
Affiliations
Sini Raj Pulari: Bahrain Polytechnic, ISA Town, Bahrain
T. S. Murugesh: Department of Electronics and Communication Engineering, Government College of Engineering Srirangam, Tiruchirappalli, Tamil Nadu, India (on deputation from Annamalai University, Department of Electronics and Instrumentation Engineering, Faculty of Engineering & Technology, India)
Shriram K. Vasudevan: Lead Technical Evangelist (Asia Pacific and Japan), Intel India Pvt. Ltd., Bengaluru, Karnataka, India
Akshay Bhuvaneswari Ramakrishnan: Department of Computer Science and Engineering, SASTRA Deemed to be University, Thanjavur Campus, Thanjavur, Tamil Nadu, India

Book Editors
R. Elakkiya: Department of Computer Science, Birla Institute of Technology & Science Pilani, Dubai Campus, UAE
V. Subramaniyaswamy: School of Computing, SASTRA Deemed University, Thanjavur, India

First Published
12 April 2024, as Chapter 6 of Cognitive Analytics and Reinforcement Learning: Theories, Techniques and Applications.

Abstract

Reinforcement learning (RL) is a powerful machine learning paradigm that has shown promise in areas such as demand forecasting and personalized services. This chapter investigates how RL-based strategies can improve the accuracy of demand forecasts and enable businesses to deliver services tailored to individual customers. The principles of RL, its application to demand forecasting, and the implementation of personalized services are covered in depth as the key components of the topic. A real-life case study of a large retail chain, carried out by researchers at the University of California, Berkeley, highlights the practical benefits of applying RL to optimize inventory management and provide personalized product recommendations. By enabling organizations to learn continuously and adapt to new circumstances, RL allows them to respond dynamically to shifting market conditions and changing customer preferences, giving them an advantage over their competitors. The chapter sheds light on the transformative influence of RL and presents a data-driven strategy for meeting the demands of modern business environments.
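To make the inventory-management use case concrete, the sketch below frames a reorder decision as a small RL problem and solves it with tabular Q-learning. This is a minimal illustrative toy, not the chapter's implementation: the state (current stock level), the action set of reorder quantities, the demand distribution, the cost parameters, and the hyperparameters are all assumptions made purely for the example.

```python
# Minimal tabular Q-learning sketch for a reorder decision under uncertain demand.
# NOT the chapter's implementation; environment, rewards, and hyperparameters are
# illustrative assumptions.
import random
from collections import defaultdict

CAPACITY = 20              # maximum stock level (assumed)
ORDER_SIZES = [0, 5, 10]   # possible reorder quantities, i.e. the actions (assumed)
HOLDING_COST = 0.1         # cost per unit held per period (assumed)
STOCKOUT_COST = 2.0        # penalty per unit of unmet demand (assumed)
UNIT_PROFIT = 1.0          # profit per unit sold (assumed)

def step(stock, order):
    """Simulate one period: receive the order, observe random demand, compute reward."""
    stock = min(CAPACITY, stock + order)
    demand = random.randint(0, 10)          # stand-in for a real demand process
    sold = min(stock, demand)
    unmet = demand - sold
    reward = UNIT_PROFIT * sold - HOLDING_COST * (stock - sold) - STOCKOUT_COST * unmet
    return stock - sold, reward

def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1, horizon=50):
    Q = defaultdict(float)                  # Q[(stock, order)] -> value estimate
    for _ in range(episodes):
        stock = CAPACITY // 2
        for _ in range(horizon):
            # epsilon-greedy selection over reorder quantities
            if random.random() < epsilon:
                order = random.choice(ORDER_SIZES)
            else:
                order = max(ORDER_SIZES, key=lambda a: Q[(stock, a)])
            next_stock, reward = step(stock, order)
            best_next = max(Q[(next_stock, a)] for a in ORDER_SIZES)
            # standard Q-learning update
            Q[(stock, order)] += alpha * (reward + gamma * best_next - Q[(stock, order)])
            stock = next_stock
    return Q

if __name__ == "__main__":
    Q = train()
    # Inspect the learned reorder policy at a few stock levels
    for s in (0, 5, 10, 15, 20):
        print(s, max(ORDER_SIZES, key=lambda a: Q[(s, a)]))
```

The learned policy typically reorders more aggressively at low stock levels, which is the behavior one would expect when stockout penalties outweigh holding costs in this toy setup.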
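The personalized-recommendation use case can likewise be viewed as an exploration-exploitation problem. The epsilon-greedy bandit sketch below, again a hypothetical toy rather than anything taken from the chapter, learns which product to recommend to each customer segment from simulated click feedback; the segments, products, and click probabilities are all invented for the example.

```python
# Minimal epsilon-greedy bandit sketch for personalized product recommendations.
# Segments, products, and simulated click probabilities are illustrative assumptions.
import random

SEGMENTS = ["new", "regular", "loyal"]   # assumed customer segments
PRODUCTS = ["A", "B", "C"]               # assumed product catalogue
# Hidden "true" preference of each segment for each product (simulation only)
TRUE_CTR = {("new", "A"): 0.10, ("new", "B"): 0.30, ("new", "C"): 0.05,
            ("regular", "A"): 0.25, ("regular", "B"): 0.10, ("regular", "C"): 0.20,
            ("loyal", "A"): 0.05, ("loyal", "B"): 0.15, ("loyal", "C"): 0.40}

counts = {(s, p): 0 for s in SEGMENTS for p in PRODUCTS}
values = {(s, p): 0.0 for s in SEGMENTS for p in PRODUCTS}

def recommend(segment, epsilon=0.1):
    """Pick a product for the segment: explore occasionally, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(PRODUCTS)
    return max(PRODUCTS, key=lambda p: values[(segment, p)])

def update(segment, product, clicked):
    """Incremental-mean update of the estimated click-through rate."""
    key = (segment, product)
    counts[key] += 1
    values[key] += (clicked - values[key]) / counts[key]

for _ in range(10000):
    seg = random.choice(SEGMENTS)
    prod = recommend(seg)
    clicked = 1.0 if random.random() < TRUE_CTR[(seg, prod)] else 0.0
    update(seg, prod, clicked)

for seg in SEGMENTS:
    print(seg, max(PRODUCTS, key=lambda p: values[(seg, p)]))
```

With enough feedback, each segment converges to the product with the highest simulated click-through rate, illustrating how continual learning lets a recommender adapt as customer preferences shift.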
