Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to evaluate programs regularly in order to improve quality. Are there new opportunities to expand user and stakeholder input, or to involve others in e-learning program evaluation? This chapter asks researchers and practitioners to rethink existing paradigms and methods for program evaluation. Crowdsourced input may help leaders and stakeholders address persistent evaluation challenges and improve e-learning quality, especially in Massive Open Online Courses (MOOCs). After reviewing selected evaluation paradigms, models, and methods, the chapter proposes a possible role for crowdsourced input. It then examines crowd definitions, affordances, and problems as the basis for a taxonomic framework with possible applications to e-learning. The goal is to provide a reference for advancing the discussion and examination of crowdsourced input.