Daniel Jiang, Haipeng Luo, Chu Wang, Yingfei Wang

The areas of reinforcement learning (RL) and multi-armed bandits (MAB) have recently seen significant innovation, yet many application domains, such as e-commerce, pose problems and challenges to which vanilla RL or MAB methods cannot directly be applied. This workshop aims to fill the communication gap between methodology and application by creating a platform for researchers and practitioners from both the method/theory side and the application side of the community. Building this platform now rather than later benefits all sides: practitioners and frontline scientists can avoid re-inventing existing techniques, while theory-oriented researchers can draw motivation from industry problems, work within more realistic settings, and make real-world impact. The 1st Multi-armed Bandits and Reinforcement Learning Workshop was a full-day workshop co-located with the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD 2021) in Singapore.