| Submission Deadline | Notification of Acceptance | Submission Email | Download |
|---|---|---|---|
| July 16, 2026 | 7–20 working days | sympo_chicago@confcds.org | Manuscript Template |
Artificial intelligence has evolved from static predictive models to adaptive learning agents and increasingly autonomous systems capable of planning, reasoning, and interacting with external environments. Foundation models, generative AI, retrieval-augmented generation (RAG), and agentic architectures have expanded the functional scope of AI applications across industry and society. However, this evolution has also broadened the attack surface of AI systems, introducing new vulnerabilities linked to data quality, model behavior, tool integration, and autonomous decision-making.
Traditional cybersecurity frameworks are often insufficient to address these AI-specific risks. A data-centric security approach—focusing on training data integrity, model lifecycle protection, memory safety in agents, and governance alignment—has therefore emerged as a critical research direction. Understanding and mitigating these risks is essential for ensuring trustworthy and responsible AI deployment in increasingly autonomous digital ecosystems.
The rapid deployment of large-scale AI models, learning agents, and autonomous systems has introduced a new generation of security risks that extend beyond traditional cybersecurity paradigms. As AI systems increasingly rely on complex data pipelines, continual learning mechanisms, external tools, and autonomous decision-making capabilities, vulnerabilities can emerge at every stage of the data lifecycle. Threats such as dataset poisoning, prompt injection, reward hacking, model extraction, and autonomous goal manipulation challenge the reliability, safety, and trustworthiness of AI-driven infrastructures.
This symposium aims to address these emerging risks from a data-centric perspective, emphasizing that securing AI requires protecting not only models but also data flows, agent memory, learning dynamics, and system interactions. Recent advances in adversarial robustness, secure MLOps, Zero Trust architectures, red teaming for generative AI, and AI governance frameworks provide promising directions. By integrating technical safeguards with risk management and policy alignment, the symposium seeks to foster interdisciplinary solutions that enhance the resilience, accountability, and secure deployment of advanced AI systems.
This symposium explores security challenges in data-centric AI systems, with particular attention to learning agents and autonomous systems. Topics include secure data ingestion and training data integrity, dataset poisoning detection, privacy risks in model training, and secure memory management in agent-based systems. We also welcome research on adversarial threats such as prompt injection, RAG-based manipulation, model extraction, reward hacking, and attacks on autonomous decision-making processes. Contributions addressing secure model lifecycle engineering, Zero Trust architectures, authorization control for AI APIs and agents, and runtime monitoring of autonomous workflows are encouraged. In addition, discussions on governance and risk management—including alignment with NIST AI RMF, dual-use risks, and responsible disclosure for agentic and self-learning systems—are highly relevant.
Accepted papers of this symposium will be published in Applied and Computational Engineering (Print ISSN: 2755-2721) and submitted to the Conference Proceedings Citation Index (CPCI), Crossref, Portico, Inspec, Google Scholar, CNKI, and other databases for indexing. Actual indexing may vary across databases depending on their processing times, workflows, and policies.
Papers will be sent to production and published on a rolling basis; papers registered early are expected to appear online sooner.
This symposium is organized by CONF-CDS 2026, which will manage the submission and publication process independently.