York University, Canada
Agentic AI systems mark a shift from bounded API integrations to LLM-mediated, tool-using intermediaries that act: they interpret intent, assemble context, select tools, and execute multi-step operations across services over time. This creates a distinctive class of socio-technical risks—agency risk—where practical control is lost, redistributed, or rendered opaque because actionable context is assembled, retained, retrieved, and recomposed to enable delegated action. In this session we introduce contextual agency (the assembled informational basis for action at a given moment) and show why familiar concerns about privacy, security, and assurance now cohere differently in tool-using, stateful, multi-agent systems.
Building on recent work auditing FemTech app infrastructures, supply chains, and intimate data flows, we pivot these methods to agentic environments via infrastructural disclosure: practical ways to make the capability stack behind delegated action empirically legible and contestable. We focus on the key points where agency is engineered and where evidence can fail: orchestration runtimes, memory and retention, retrieval stacks (embeddings, vector databases, RAG/Graph RAG), tool registries, authorisation and delegation, and the observability and evidence layers that determine what can be inspected, challenged, or repaired. The session culminates by asking students to propose trace-based and user-facing indicators of practical control—can people meaningfully interrupt, inspect, contest, reverse, and exit?—and to sketch new audit methods for making agency risk visible in real agentic pipelines.