Workshop on Systems for Data-centric Agents with Human-in-the-loop
DASHSys
Co-located with VLDB 2026, Boston, USA
Sep 4, 2026 (Friday)

Large language models (LLMs) and autonomous agents are reshaping how data systems are designed and used. Modern AI applications increasingly operate as compound systems that integrate reasoning, retrieval, planning, and tool use over heterogeneous and continuously evolving data ecosystems. These systems must interact with structured and unstructured data, knowledge graphs, multimodal content, and external tools while balancing trade-offs in cost, latency, scalability, robustness, governance, and trust. Effectively deploying agentic AI in such environments requires a systems-driven approach grounded in data management principles and human oversight. Human-in-the-loop methodologies remain central to alignment, evaluation, debugging, and lifecycle management of evolving agent workflows. Ensuring that agents remain reliable, interpretable, and aligned with human intent is as much a data systems challenge as it is a modeling challenge. DASHSys is a full-day workshop dedicated to advancing the foundations, architectures, and evaluation of data-centric agentic systems.

Announcements

  • Apr 29, 2026: Excited to release the list of keynote speakers!
  • Apr 1, 2026: Call for submissions for the systems track released.
  • Mar 15, 2026: First call for papers released.
  • We are excited to announce the Workshop on Systems for Data-centric Agents with Human-in-the-loop @VLDB 2026 (DASHSys).

Keynote Speakers

Erkang (Eric) Zhu
Alibaba

Fatma Özcan
Google

Omar Khattab
MIT CSAIL

Eugene Wu
Columbia University

Shreya Shankar
UC Berkeley

Juliana Freire
New York University

Yunyao Li
Adobe

Anbang Xu
NVIDIA

Program

Location: TBD

Coming soon...

Call for Submissions

DASHSys invites original research contributions, system papers, position papers, and real-world system reports at the intersection of data systems, agentic AI, and human-in-the-loop methodologies.

We welcome submissions that advance the theory, systems, and practice of building data-aware, agent-driven, and human-aligned AI systems.

Topics of Interest

Topics include (but are not limited to):

Submission Categories

Research Track

  • Long Papers — up to 8 pages (excluding references). Mature research contributions with substantial technical depth.
  • Short Papers — up to 4 pages (excluding references). Focused contributions, negative results, position papers, and early-stage work.

Submissions should present original results and substantial new work that is not currently under review or published elsewhere. Manuscripts must follow the same formatting rules as VLDB conference papers and be submitted in PDF format via the workshop's submission system. Demo-oriented submissions and the availability of artifacts (code, datasets, system demos) are strongly encouraged.

Evaluation Criteria and Reviewing Process

DASHSys will follow a double-anonymous review process. Each paper will be evaluated based on relevance, originality, technical quality, and clarity. Reviewers will be instructed to provide constructive feedback. Accepted papers will appear in the official VLDB workshop proceedings.

The Microsoft CMT service is used for managing the peer-reviewing process for this conference. This service is provided for free by Microsoft and they bear all expenses, including costs for Azure cloud services as well as for software development and support.

Systems Track

The Systems Track is a competition in which participants build production-grade agentic systems for data-centric tasks spanning complex SQL queries and API calls.

We're looking for systems that go beyond demos: robust prompt and system designs that generalize across models, architectures for reliable agent-driven data systems, effective orchestration of SQL and API workflows, and practical approaches to efficiency and scalability. Deliverables include a metadata JSON with query-specific context, a filled system prompt and system prompt template, an agent output trajectory JSON, and a zipped source code archive.
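To illustrate the JSON deliverables, the Python sketch below assembles a metadata record and an agent output trajectory. Every field name here is a hypothetical placeholder chosen for illustration; the official schemas will be provided with the full competition details.

```python
import json

# Hypothetical deliverable shapes -- all field names below are
# illustrative placeholders, not the official competition schema.
metadata = {
    "query_id": "q001",
    "context": {                          # query-specific context
        "tables": ["orders", "customers"],
        "apis": ["GET /v1/exchange-rates"],
    },
}

trajectory = {
    "query_id": "q001",
    "turns": [
        {"tool": "sql",
         "input": "SELECT COUNT(*) FROM orders",
         "output": "1042"},
        {"tool": "api",
         "input": "GET /v1/exchange-rates?base=USD",
         "output": "{...}"},
    ],
    "final_response": "There are 1042 orders.",
}

# Serialize each deliverable as it would be packaged for submission.
metadata_json = json.dumps(metadata, indent=2)
trajectory_json = json.dumps(trajectory, indent=2)
```

A submission would bundle these files alongside the filled system prompt, the prompt template, and the zipped source archive.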

  • System Papers — up to 4 pages (excluding references), VLDB format.

Evaluation Criteria

Submissions are ranked on two dimensions:

  • Correctness — accuracy of generated SQL, API calls, and final response
  • Efficiency — number of turns, tool calls, total tokens, and wall time

Winners receive monetary prizes.
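To make the efficiency dimension concrete, here is a minimal Python sketch that tallies turns, tool calls, and total tokens from a trajectory, assuming a hypothetical per-turn accounting format (the real trajectory-JSON schema is defined by the competition, and the organizers' actual scoring may differ).

```python
# Hypothetical trajectory with per-turn accounting; field names are
# illustrative assumptions, not the official format.
trajectory = {
    "turns": [
        {"tool_calls": 2, "tokens_in": 350, "tokens_out": 120},
        {"tool_calls": 1, "tokens_in": 500, "tokens_out": 80},
    ],
    "wall_time_seconds": 4.7,
}

# Tally the efficiency dimensions named in the evaluation criteria.
num_turns = len(trajectory["turns"])
num_tool_calls = sum(t["tool_calls"] for t in trajectory["turns"])
total_tokens = sum(t["tokens_in"] + t["tokens_out"]
                   for t in trajectory["turns"])

print(num_turns, num_tool_calls, total_tokens)
# -> 2 3 1050
```

Lower counts on each dimension indicate a more efficient system, all else being equal.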

Full details and dataset →


Important Dates

Research Track

  • Paper submission: May 19, 2026
  • Notification of acceptance: June 12, 2026
  • Camera-ready due: TBD
  • Workshop: Sep 4, 2026
  • Submit via CMT (all deadlines are AoE)

Systems Track

  • Registration deadline: May 4, 2026 (AoE)
  • Test set release: May 18, 2026
  • Submission deadline: May 19, 2026 (AoE)
  • Winners announced: May 31, 2026
  • Submit via CMT

Organization

General Chairs

Submission Chairs

Website and Publicity Chairs

Systems Track Chairs

Keynotes and Session Chairs

Steering Committee

Program Committee


Sponsored By

Adobe