publications
a hopefully growing list of my conference publications!
2026
- An Instructor Dashboard to Support Mastery Learning. Shreya Mantipragada, Eldar Hasanov, Adam Hacker, and 4 more authors. In Proceedings of the 57th ACM Technical Symposium on Computer Science Education V. 2, St. Louis, MO, USA, 2026.
Mastery learning promises more equitable outcomes in large CS courses, yet instructors lack a full array of tools to support its implementation. Popular Learning Analytics Dashboards (LADs) and adaptive platforms excel at grade analytics but offer limited support for custom mastery policies. We present an open-source, instructor-focused dashboard integrated into a custom LMS to support mastery learning in high-enrollment CS courses. The system features a data pipeline that gathers scores from multiple sources and provides a view into the effectiveness of equitable grading policies, such as retakes, resubmissions, and flexible deadlines. The central interface, the Students view, shows a particular student's scores across all assignments and exams. The system was deployed for two semesters in a large introductory CS course, where instructors reported ease of use, increased visibility into class performance, and effective support for mastery-learning policies. Based on this feedback, we are extending the platform with a Statistics view that surfaces assignment- and concept-level statistical summaries and interactive histograms to further support scalable mastery learning. To support broader adoption, we plan to release the system as an open-source platform to assist instructors and institutions seeking to implement mastery learning at scale.
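To give a flavor of the retake/resubmission policies the dashboard tracks, here is a minimal Python sketch of a best-attempt aggregation step; the source names, data shapes, and policy are illustrative assumptions on my part, not the system's actual pipeline.

```python
from collections import defaultdict

# Hypothetical illustration: merge per-attempt scores from several sources
# (e.g., autograder, exam scans, LMS gradebook) and apply a simple
# retake-friendly policy that keeps each student's best attempt.

def best_attempt_scores(records):
    """records: iterable of (student_id, assignment_id, score) tuples,
    possibly with multiple attempts per (student, assignment)."""
    best = defaultdict(float)
    for student, assignment, score in records:
        key = (student, assignment)
        best[key] = max(best[key], score)
    return best

records = [
    ("s1", "hw1", 0.6),  # first attempt
    ("s1", "hw1", 0.9),  # resubmission counts in full under this policy
    ("s2", "hw1", 0.8),
]
print(best_attempt_scores(records))
# {('s1', 'hw1'): 0.9, ('s2', 'hw1'): 0.8}
```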
2025
- A Direct Manipulation User Interface for Constructing Autogradable Graphs. Christopher Rau, Eldar Hasanov, Narges Norouzi, and 2 more authors. In Proceedings of the ACM Conference on Global Computing Education Vol. 2, Gaborone, Botswana, 2025.
Traditional graph-based assessments rely on hand-drawn sketches, leading to inaccuracies and limited scalability due to manual grading. These issues hinder student engagement and effective evaluation. Most existing digital tools emphasize visualization but lack support for grading and feedback elements in assessments. To address this, we introduce the Graph Construction User Interface (GCUI), a web tool for creating and manipulating graphs via drag-and-drop. GCUI exports graphs in the DOT language for integration with Learning Management Systems (LMSs) and automatic grading. It also supports instant feedback and resubmissions, enabling iterative student improvement and scalable grading. The ubiquity of graph structures makes GCUI applicable across domains such as finite state machines, cryptography, and network security. In future work, we will conduct pilot studies to assess the tool's impact on student satisfaction, performance, and completion times.
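For a rough sense of the DOT-based grading path, here is a Python sketch that serializes a graph to DOT and compares submissions by edge set; the serialization details and grading rule are assumptions for illustration, not GCUI's actual export or autograder.

```python
# Hypothetical sketch: serialize a constructed graph to DOT and compare it
# to a reference solution by edge set, ignoring layout and edge order.

def to_dot(name, edges):
    """Render a directed graph as DOT source."""
    lines = [f"digraph {name} {{"]
    lines += [f"  {u} -> {v};" for u, v in sorted(edges)]
    lines.append("}")
    return "\n".join(lines)

def same_graph(edges_a, edges_b):
    """Grade by structural equality of edge sets."""
    return set(edges_a) == set(edges_b)

student = [("a", "b"), ("b", "c")]
reference = [("b", "c"), ("a", "b")]
print(to_dot("submission", student))
print("correct:", same_graph(student, reference))  # True
```

Comparing edge sets rather than raw DOT text is one plausible design choice here, since two visually different drawings of the same graph should grade identically.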
- An Interactive Tool for Randomized Autogradable Graph Assessments. Eldar Hasanov, Dev Ahluwalia, Dan Garcia, and 2 more authors. In Proceedings of the 56th ACM Technical Symposium on Computer Science Education V. 2, Pittsburgh, PA, USA, 2025.
Mastering algorithms and graph theory requires students to understand both the theoretical concepts and the practical mechanics. While most current assessments focus on the practical aspects, a deeper understanding of the theoretical concepts is often more crucial for truly grasping the material. Visualizations help bridge this gap by allowing students to interact with data structures and trace traversals and outputs dynamically. We introduce an interactive tool, delivered through an online assessment platform, that enables students to click on nodes and/or edges to dynamically change a graph model. Use cases range from introductory data structures and traversals such as depth-first and breadth-first search to more complicated algorithms such as tracing hypercube node processing. Although current platforms offer decorative components that display graphs and can be supplemented with separate submission elements, we hypothesize that combining both features into one interactive element will make students' learning significantly more effective. Through such a tool, we plan to assess students' performance in terms of (a) score, (b) completion time, and (c) satisfaction with the interactive assessments. We also plan to analyze the types of errors students make depending on whether they are in the control or experimental group. Further, we aim to assess how such interactive assessment tools can be abstracted and applied across introductory computer science courses to bridge the gap between proficiency and mastery learning.
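As a toy illustration of the kind of check such an interactive assessment could run, this Python sketch grades a student's clicked node sequence against a BFS traversal; the graph, the function names, and the sorted-neighbor tie-breaking rule are all assumptions for the sketch, not the tool's implementation.

```python
# Hypothetical illustration: grade a student's clicked node order against
# the BFS traversal of a small graph, visiting neighbors in sorted order.
from collections import deque

def bfs_order(adj, start):
    """Return the BFS visit order from start, breaking ties alphabetically."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in sorted(adj.get(node, [])):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
clicked = ["a", "b", "c", "d"]          # sequence the student clicked
print(clicked == bfs_order(adj, "a"))   # True
```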