In case anyone's interested (and apologies if this is spam for you; I used a clear subject line so you can just ignore it if so!), details for a talk I'm giving are below my signature. Tomorrow, Friday, Jan 22, at 11 am EST, I'll be giving a Vector Talk on Adapting Real-World Experimentation to Balance Enhancement of User Experiences with Statistically Robust Scientific Discovery.
Or, if you can't attend, email angelina.liu@mail.utoronto.ca and cc williams@cs.toronto.edu for a copy of the recording.
Here is our lab's "HCI talk": http://www.josephjaywilliams.com/prospectivestudents#TOC-HCI-Human-Computer-...
But the talk should be quite accessible with zero background in statistics/machine learning. The key idea: if you run randomized experiments in the real world, how can you make them adaptive experiments, using machine learning to rapidly turn incoming data into better interventions for people, while also enabling reliable statistical analysis of that data?
It would also be great to get feedback on this talk from HCI people, in particular whether we convince you to actually use these methods for randomized experiments you would run.
Anyone from my lab (or DGP) who joins the talk, please let me know if you're willing to use the Zoom 'thumbs up' emoticon and flash it EVERY TWO MINUTES! Even if you think my pace is fine, flash it :D. That will greatly improve the audience's comprehension and user experience!
Joseph
*INFORMATION ON TALK:*
Subject Line: Fri 22 Jan 11 am: Vector Talk on Adapting Real-World Experimentation to Balance Enhancement of User Experiences with Statistically Robust Scientific Discovery
Joseph Jay Williams (http://www.josephjaywilliams.com/) is giving a talk in the Vector Institute for AI seminar this Friday, Jan 22nd, at 11 am EST. To request a recording and slides, email angelina.liu@mail.utoronto.ca and cc williams@cs.toronto.edu.
Link to register: https://vectorinstitute.zoom.us/meeting/register/tJ0qde2uqjwoGdEcu1GeiqDS3asFFFTFL7V0
Meeting ID: 997 2464 7235
Password: 123456
Short Title: Adapting Real-World Experimentation To Balance Enhancement of User Experiences with Statistically Robust Scientific Discovery
Long Title: Perpetually Enhancing User Interfaces in Tandem with Advancing Scientific Research in Education & Mental Health: Enabling Reliable Statistical Analysis of the Data Collected by Algorithms that Trade Off Exploration & Exploitation
How can we transform the everyday technology people use into intelligent, self-improving systems? For example, how can we perpetually enhance text messages for managing stress, or personalize explanations in online courses? Our work explores the use of randomized adaptive experiments that test alternative actions (e.g. text messages, explanations), aiming to gain greater statistical confidence about the value of actions, in tandem with rapidly using this data to give better actions to future users.
To help characterize the problems that arise in statistical analysis of data collected while trading off exploration and exploitation, we present a real-world case study of applying the multi-armed bandit algorithm TS (Thompson Sampling) to adaptive experiments. TS aims to assign people to actions in proportion to the probability those actions are optimal. We present empirical results on how the reliability of statistical analysis is impacted by Thompson Sampling, compared to a traditional experiment using uniform random assignment. This helps characterize a substantial problem to be solved – using a reward maximizing algorithm can cause substantial issues in statistical analysis of the data. More precisely, an adaptive algorithm can increase both false positives (believing actions have different effects when they do not) and false negatives (failing to detect differences between actions). We show how statistical analyses can be modified to take into account properties of the algorithm, but that these do not fully address the problem raised.
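As a rough illustration of the Thompson Sampling assignment described above, here is a minimal Python sketch assuming a two-arm experiment with binary outcomes and a Beta-Bernoulli model with uniform priors; the model, priors, and names are illustrative assumptions, not the exact setup used in the case study.

import numpy as np

def thompson_assign(successes, failures, rng=np.random.default_rng()):
    # Draw each arm's success rate from its Beta posterior (uniform prior),
    # then assign the participant to the arm with the highest draw.
    draws = [rng.beta(s + 1, f + 1) for s, f in zip(successes, failures)]
    return int(np.argmax(draws))

# Example: arm 0 has 12 successes / 8 failures so far, arm 1 has 5 / 15.
arm = thompson_assign(successes=[12, 5], failures=[8, 15])
# Repeating this for each new participant assigns arms roughly in
# proportion to the posterior probability that each arm is best.

Over many participants this concentrates assignment on the apparently better arm, which is exactly the reward-maximizing behavior that complicates the later statistical analysis.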
We therefore introduce an algorithm which assigns a proportion of participants uniformly randomly and the remaining participants via Thompson sampling. The probability that a participant is assigned using Uniform Random (UR) allocation is set to the posterior probability that the difference between two arms is 'small' (below a certain threshold), allowing for more UR exploration when there is little or no reward to be gained by exploiting. The resulting data can enable more accurate statistical inferences from hypothesis testing by detecting small effects when they exist (reducing false negatives) and reducing false positives.
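A similarly hedged sketch of the hybrid allocation idea, again assuming a two-arm Beta-Bernoulli model: estimate the posterior probability that the arms differ by less than some threshold, assign via Uniform Random with that probability, and otherwise assign via Thompson Sampling. The threshold value and the Monte Carlo estimation here are illustrative choices, not the talk's exact algorithm.

import numpy as np

def hybrid_assign(successes, failures, epsilon=0.05, n_draws=10_000,
                  rng=np.random.default_rng()):
    # Posterior probability that the arms' success rates differ by less
    # than epsilon, estimated by Monte Carlo from the two Beta posteriors.
    p0 = rng.beta(successes[0] + 1, failures[0] + 1, size=n_draws)
    p1 = rng.beta(successes[1] + 1, failures[1] + 1, size=n_draws)
    prob_small_diff = float(np.mean(np.abs(p0 - p1) < epsilon))
    if rng.random() < prob_small_diff:
        return int(rng.integers(2))        # Uniform Random assignment
    draws = [rng.beta(s + 1, f + 1) for s, f in zip(successes, failures)]
    return int(np.argmax(draws))           # Thompson Sampling assignment

When the arms look practically indistinguishable, prob_small_diff is high and most participants are assigned uniformly at random, preserving power for hypothesis testing; when one arm is clearly better, most participants receive the exploiting Thompson Sampling assignment.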
The work we present aims to surface the underappreciated complexity of using adaptive experimentation to both enable scientific/statistical discovery and help real-world users. The current work takes a first step towards computationally characterizing some of the problems that arise, and what potential solutions might look like, in order to inform and invite multidisciplinary collaboration between researchers in machine learning, statistics, and the social-behavioral sciences.
Bio: Joseph Jay Williams is an Assistant Professor in Computer Science (and a Vector Institute Faculty Affiliate, with courtesy appointments in Statistics & Psychology) at the University of Toronto, leading the Intelligent Adaptive Interventions research group. He was previously an Assistant Professor at the National University of Singapore's School of Computing in the department of Information Systems & Analytics, a Research Fellow at Harvard's Office of the Vice Provost for Advances in Learning, and a member of the Intelligent Interactive Systems Group in Computer Science. He completed a postdoc at Stanford University in Summer 2014, working with the Office of the Vice Provost for Online Learning and the Open Learning Initiative. He received his PhD from UC Berkeley in Computational Cognitive Science (with Tom Griffiths and Tania Lombrozo), where he applied Bayesian statistics and machine learning to model how people learn and reason. He received his B.Sc. from the University of Toronto in Cognitive Science, Artificial Intelligence and Mathematics, and is originally from Trinidad and Tobago. More information about the Intelligent Adaptive Interventions group's research and papers is at www.josephjaywilliams.com.
Joseph
Joseph Jay Williams
www.josephjaywilliams.com
Assistant Professor, Department of Computer Science, University of Toronto
Intelligent Adaptive Interventions (IAI) research group