Advancing Neural Network Verification
VNN-COMP is the premier international competition dedicated to evaluating and advancing the state-of-the-art in neural network verification. Join researchers and practitioners worldwide in pushing the boundaries of AI safety and reliability.
About VNN-COMP
The International Verification of Neural Networks Competition (VNN-COMP) brings together researchers, practitioners, and tool developers to evaluate and compare neural network verification methods. Our mission is to advance the field by providing standardized benchmarks, fostering collaboration, and promoting best practices in neural network safety.
Since 2019, VNN-COMP has served as a central hub for the neural network verification community, facilitating knowledge exchange and driving innovation in this critical area of AI safety and trustworthiness.
VNN-COMP @ AAAI 2026 Lab Forum
Organizers: Taylor T. Johnson, Edoardo Manino, ThanhVu Nguyen, Christopher Brix, Konstantin Kaulen, Changliu Liu, Ziwei Wang, Matthew L. Daggitt
With additional contributions from Stanley Bak, Tobias Ladner, and other VNN-COMP organizers and participants.
First Half: Foundations & Background (8:30am - 10:30am)
| Time | Topic | Speaker(s) | Resources |
|---|---|---|---|
| 8:30 - 9:00 | **Introduction to Neural Network Verification and Motivation.** Overview of verification approaches for safe, secure, and trustworthy AI systems, and the importance of verification for AI in robotics (such as vision-language-action / VLA models) and other safety-critical and security-critical applications. | Taylor T. Johnson, Ziwei Wang | |
| 9:00 - 9:30 | **VNN-COMP Overview: History, Rules, Results.** Competition history, rules, VNN-COMP'25 results, and plans for VNN-COMP'26. | Konstantin Kaulen | |
| 9:30 - 10:00 | **VNN-LIB and Benchmarks Overview.** Benchmark formats (VNN-LIB 1.0/2.0, ONNX), the current benchmark landscape, and a solicitation for new benchmarks (a minimal ONNX construction sketch follows the schedule). | Matthew Daggitt, Edoardo Manino | |
| 10:00 - 10:30 | **Q&A Session** | All | |
**Coffee Break (10:30am - 11:00am)**
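To make the benchmark-format session above concrete, here is a minimal sketch that builds a tiny ReLU network with the official `onnx` helper API. The architecture, tensor names, and shapes are illustrative choices for this example, not a VNN-COMP requirement.

```python
# Minimal sketch: construct a 2-input, 2-output affine + ReLU network in ONNX.
import numpy as np
import onnx
from onnx import TensorProto, helper

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2)).astype(np.float32)
b = np.zeros(2, dtype=np.float32)

nodes = [
    helper.make_node("Gemm", ["X", "W", "b"], ["H"]),  # affine layer: H = X @ W + b
    helper.make_node("Relu", ["H"], ["Y"]),            # ReLU activation
]
graph = helper.make_graph(
    nodes,
    "toy_net",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 2])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 2])],
    initializer=[
        helper.make_tensor("W", TensorProto.FLOAT, [2, 2], W.flatten()),
        helper.make_tensor("b", TensorProto.FLOAT, [2], b),
    ],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # validate the graph before saving
onnx.save(model, "toy_net.onnx")
```

Real benchmark networks are typically exported from a training framework rather than built by hand, but the resulting ONNX file plays the same role in a VNN-COMP instance.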
Second Half: Hands-On Participation (11:00am - 12:30pm)
| Time | Topic | Speaker(s) | Resources |
|---|---|---|---|
| 11:00 - 11:15 | **Introduction to Neural Network Verification and VNN-COMP (Interactive Demo).** Introduction to the interactive portion and running neural network verification tools on VNN-COMP benchmarks (a verification-run sketch follows the schedule). | Konstantin Kaulen | Google Colab |
| 11:15 - 11:45 | **How to Participate as a Benchmark Proposer (Interactive Demo).** Creating ONNX files, VNN-LIB properties, repository structure, and the submission process. | Edoardo Manino | Google Colab |
| 11:45 - 12:00 | **How to Participate as a Tool Developer (Interactive Demo).** Rules for verification tools, participation advice, a demo of verification tools, and parser libraries. | Samuel Sasaki | |
| 12:00 - 12:15 | **VNN-COMP Evaluation System (Interactive Demo).** Overview of the AWS-based competition evaluation infrastructure. | Konstantin Kaulen | Evaluation System |
| 12:15 - 12:30 | **Conclusions, References, and Q&A Session.** Pointers to references and learning resources, plus open Q&A. | Taylor T. Johnson | |
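As a taste of the hands-on portion, the sketch below loads a benchmark network with `onnxruntime` and evaluates a candidate counterexample on it. The file name `model.onnx`, the input shape, and `x_adv` are placeholders for whatever a real benchmark instance and verifier actually provide.

```python
# Hypothetical sketch: replay a candidate counterexample against an ONNX network.
import numpy as np
import onnxruntime as ort

# Placeholder benchmark network; a real VNN-COMP instance supplies this file.
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name

# Hypothetical "sat" witness returned by a verifier; the shape (1, 5) is
# only an assumption about this toy model's input.
x_adv = np.zeros((1, 5), dtype=np.float32)

(y,) = sess.run(None, {input_name: x_adv})
print("network output:", y)
# The instance is genuinely "sat" only if y violates the VNN-LIB output constraints.
```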
VNN-COMP 2026
News & Updates
Important Dates
Participation
Tool Developers
Submit your neural network verification tool to compete against other state-of-the-art tools on standardized benchmarks.
- Provide install, prepare, and run scripts (see the sketch below)
- Choose CPU, GPU, or balanced instance
- Optional: Submit a competition contribution paper
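The exact script interface is fixed by the competition rules for each year; purely as an illustration of the contract those scripts satisfy, here is a hypothetical Python run step that reads an instance and writes a single-verdict result file. The argument layout and file names are assumptions, not the official interface.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a run step; the authoritative argument list and
result format are defined in the VNN-COMP rules for the given year."""
import sys


def my_verifier(onnx_path: str, vnnlib_path: str, timeout: float) -> str:
    # Placeholder: a real tool performs its analysis here and returns one of
    # "unsat" (property holds), "sat" (counterexample found), "timeout",
    # or "unknown".
    return "unknown"


def main() -> None:
    # Illustrative argument layout: model, property, result file, timeout.
    onnx_path, vnnlib_path, result_path, timeout = sys.argv[1:5]
    verdict = my_verifier(onnx_path, vnnlib_path, float(timeout))
    with open(result_path, "w") as f:
        f.write(verdict + "\n")  # the evaluation system reads the verdict here


if __name__ == "__main__":
    main()
```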
Benchmark Proposers
Propose benchmarks that challenge and evaluate verification tools on important real-world problems.
- Provide ONNX networks and VNN-LIB specs
- Include seed-based instance generation (see the sketch below)
- Benchmarks voted on by tool participants
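As a sketch of what seed-based instance generation can look like, the hypothetical script below derives a random input box from a seed and emits a VNN-LIB 1.0-style property: inputs `X_i` and outputs `Y_j` are declared, the input box is constrained, and the negation of the desired property is asserted. The dimensions, epsilon, and file names are illustrative.

```python
# Hypothetical seed-based generator for a VNN-LIB robustness-style property.
import argparse
import random


def write_vnnlib(path: str, lo: list, hi: list, target: int) -> None:
    # VNN-LIB is an SMT-LIB-style format: declare variables, bound the input
    # box, and assert the negation of the property to be proved.
    lines = [f"(declare-const X_{i} Real)" for i in range(len(lo))]
    lines += [f"(declare-const Y_{j} Real)" for j in range(2)]
    for i, (l, h) in enumerate(zip(lo, hi)):
        lines.append(f"(assert (>= X_{i} {l}))")
        lines.append(f"(assert (<= X_{i} {h}))")
    other = 1 - target
    # Satisfiable iff some input in the box is misclassified.
    lines.append(f"(assert (>= Y_{other} Y_{target}))")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--seed", type=int, required=True)  # reproducible instances
    args = parser.parse_args()

    rng = random.Random(args.seed)
    center = [rng.uniform(-1, 1) for _ in range(4)]  # toy 4-dimensional input
    eps = 0.05
    lo = [c - eps for c in center]
    hi = [c + eps for c in center]
    write_vnnlib(f"prop_seed{args.seed}.vnnlib", lo, hi, target=0)


if __name__ == "__main__":
    main()
```

Fixing the seed fixes the instance, which is what lets the organizers regenerate the exact benchmark instances used in the competition.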
Sponsors
Support the advancement of neural network verification by sponsoring VNN-COMP.
- Logo visibility on website and materials
- Support cash prizes for winners
- Contact organizers for details
Competition Contribution Papers
VNN-COMP 2026 offers the opportunity to publish short papers (4-6 pages) in the SAIV LNCS proceedings. Tool developers can submit papers describing their verification techniques, and benchmark proposers can submit papers detailing their benchmarks and case studies. Details on submission deadlines will be announced.
Organizing Committee
General Chairs
Taylor T. Johnson (Vanderbilt University), Stanley Bak (Stony Brook University)
Evaluation Chairs
Christopher Brix, Tobias Ladner, Lukas Koller, Konstantin Kaulen
Benchmark Chairs
Edoardo Manino, Thomas Flinkow
Report Chairs
Haoze Wu, Hai Duong
Papers Chair
Competition Years
VNN-COMP 2026
7th iteration co-located with SAIV @ FLoC 2026 in Lisbon, Portugal.
VNN-COMP 2025
6th iteration co-located with SAIV @ CAV 2025 in Zagreb, Croatia.
VNN-COMP 2024
Enhanced verification challenges with improved tooling and evaluation metrics.
VNN-COMP 2023
Expanded benchmark suite with focus on scalability and real-world applications.
VNN-COMP 2022
Continued innovation in verification methodologies and benchmark diversity.
VNN-COMP 2021
Significant expansion of the competition with new tracks and participants.
VNN 2019
The inaugural workshop at the AAAI Spring Symposium Series that established the foundation for neural network verification evaluation.
Major Repositories
Access benchmarks, tools, and resources organized by competition year.
Publications
Overview Paper
C. Brix, S. Bak, C. Liu, and T. T. Johnson, "First three years of the international verification of neural networks competition (VNN-COMP)," International Journal on Software Tools for Technology Transfer, vol. 25, pp. 329-339, 2023.
BibTeX:

```bibtex
@article{brix2023vnncomp_overview,
  title={First three years of the international verification of neural networks competition ({VNN-COMP})},
  author={Brix, Christopher and Bak, Stanley and Liu, Changliu and Johnson, Taylor T.},
  journal={International Journal on Software Tools for Technology Transfer},
  volume={25},
  pages={329--339},
  year={2023},
  publisher={Springer},
  doi={10.1007/s10009-023-00703-4}
}
```

Annual competition reports (BibTeX):

```bibtex
@article{kaulen2025vnncomp2025,
  title={The Sixth International Verification of Neural Networks Competition ({VNN-COMP} 2025): Summary and Results},
  author={Kaulen, Konstantin and Ladner, Tobias and Bak, Stanley and Brix, Christopher and Duong, Hai and Flinkow, Thomas and Johnson, Taylor T. and Koller, Lukas and Manino, Edoardo and Nguyen, ThanhVu H. and Wu, Haoze},
  journal={arXiv preprint arXiv:2512.19007},
  year={2025}
}
@article{brix2024vnncomp2024,
  title={The Fifth International Verification of Neural Networks Competition ({VNN-COMP} 2024): Summary and Results},
  author={Brix, Christopher and Bak, Stanley and Johnson, Taylor T. and Wu, Haoze},
  journal={arXiv preprint arXiv:2412.19985},
  year={2024}
}
@article{brix2023vnncomp2023,
  title={The Fourth International Verification of Neural Networks Competition ({VNN-COMP} 2023): Summary and Results},
  author={Brix, Christopher and Bak, Stanley and Liu, Changliu and Johnson, Taylor T.},
  journal={arXiv preprint arXiv:2312.16760},
  year={2023}
}
@article{mueller2022vnncomp2022,
  title={The Third International Verification of Neural Networks Competition ({VNN-COMP} 2022): Summary and Results},
  author={M{\"u}ller, Mark Niklas and Brix, Christopher and Bak, Stanley and Liu, Changliu and Johnson, Taylor T.},
  journal={arXiv preprint arXiv:2212.10376},
  year={2022}
}
@article{bak2021vnncomp2021,
  title={The Second International Verification of Neural Networks Competition ({VNN-COMP} 2021): Summary and Results},
  author={Bak, Stanley and Liu, Changliu and Johnson, Taylor T.},
  journal={arXiv preprint arXiv:2109.00498},
  year={2021}
}
```
Participating Tools
Overview of the neural network verification tools that have participated in VNN-COMP over the years, sorted by the number of years they have participated.
| Tool | 2020 | 2021 | 2022 | 2023 | 2024 | 2025 | 2026 |
|---|---|---|---|---|---|---|---|
Get Involved
Join the Community
Interested in participating or contributing to VNN-COMP? We welcome researchers, tool developers, sponsors, and organizations committed to advancing neural network verification in the broader safe AI, responsible AI, and trustworthy AI domains.
- Subscribe to our mailing list for updates
- Follow us on social media
- Contribute benchmarks and tools
- Participate in annual competitions