About
This workshop will examine how increasingly proactive AI agents are reshaping human information seeking and interaction. As systems begin to anticipate intent, personalize responses, and act on users' behalf, critical questions arise around agency, trust, transparency, and control. Bringing together perspectives from information retrieval, human–computer interaction, and AI, the workshop focuses on the human experience of proactivity: when it supports exploration, sense-making, and learning, and when it risks overstepping or narrowing user goals.
Through interactive discussions and co-design activities, participants will explore social, cognitive, and ethical dimensions of proactive systems and develop frameworks for designing and evaluating human-aligned initiative. The workshop aims to reframe proactivity not as automation, but as a collaborative partnership between humans and AI in the pursuit of understanding.
This will be an in-person event at ACM CHIIR 2026.
Date: Thursday, March 26, 2026
Venue: Husky Union Building (HUB), University of Washington, Seattle
Important Dates
- Submission deadline: January 15, 2026 (11:59 PM AoE)
- Notification: January 25, 2026
- Workshop at CHIIR 2026: Thursday, March 26, 2026
Call for Papers
Proactive and personalized agents are rapidly moving from research prototypes to everyday interfaces for search, dialogue, and recommendation. As these systems grow more capable, they increasingly shape what users notice, learn, and decide by taking initiative—suggesting next steps, surfacing unrequested information, and adapting to a user’s goals, context, and history. This shift creates new opportunities for information fostering, sensemaking, and learning, but also introduces risks around agency, consent, trust, overreach, and unwanted personalization.
Despite rapid progress, research on proactive and personalized agents remains fragmented across communities, including interactive information retrieval, recommender systems, HCI, dialogue, responsible AI, and cognitive/behavioral science. Translating ideas into real-world systems raises practical challenges—latency, privacy, in-the-wild evaluation, user control, safety policies, and organizational accountability—that demand shared attention. Bridging these gaps is urgent.
To address this need, the workshop aims to:
(1) bring together researchers and practitioners across IR, HCI, RecSys, NLP, cognitive/behavioral science, and responsible AI;
(2) develop a shared agenda with principled design and evaluation paradigms;
(3) surface open research questions shaping the next generation of human-centered agentic systems.
We invite 1-page extended abstracts (text-only; ~500 words), including position papers, empirical studies, surveys, and early-stage ideas. Submissions will be peer reviewed for relevance, clarity, and discussion potential. Accepted work will be presented at the workshop, and at least one author must attend in person.
Scope
Submissions should relate to proactive and/or personalized agents for interactive information access, spanning design, systems, interaction, evaluation, and responsible deployment.
Topics of interest include (but are not limited to):
- Human-centered definitions and boundaries of proactivity (initiative, timing, appropriateness)
- Personalization and user modeling for agentic systems
- Mixed-initiative interaction patterns and proactive interventions
- Proactive agents for search, recommendation, and exploratory discovery
- Evaluation approaches: metrics, user studies, benchmarks, case studies
- Trust, consent, privacy, safety, and governance
- Failure modes (overreach, manipulation, interruption) and mitigation
- Domain applications (education, health, productivity, community settings)
How to Submit
Please submit your ~500-word extended abstract via the submission form:
Attendance & Presentation Requirement: To keep the workshop highly interactive, at least one author of each accepted submission must register, attend, and present at the workshop. Presentation details will be released shortly.
Contact
For questions, please contact the organizers at kaur13@cs.washington.edu.