AI Research Slump? Inside the NeurIPS/ICLR Publication Blitz (2026)

AI research is facing a quality crisis, and experts are sounding the alarm. This year, one contributor claimed authorship on 113 AI papers, with 89 of them slated for presentation at a premier AI/ML conference. The tally has raised questions among computer scientists about the health of the field.

The figure in question is Kevin Zhu, who graduated from high school in 2018, recently earned a computer science degree from the University of California, Berkeley, and now runs Algoverse, a research and mentoring organization for high school students, many of whom are listed as his co-authors.

Over the past two years, Zhu’s papers have covered topics ranging from using AI to locate nomadic pastoralists in sub-Saharan Africa, to evaluating skin lesions, to translating Indonesian dialects. On LinkedIn, he claims to have published “100+ top conference papers in the past year,” with works cited by major players such as OpenAI, Microsoft, Google, Stanford, MIT, and Oxford.

Hany Farid, a Berkeley computer science professor, called Zhu’s output a “disaster,” describing the entire endeavor as “vibe coding,” the practice of using AI to generate software with little solid technical grounding. Farid highlighted Zhu’s prolific publishing in a recent LinkedIn post, which spurred conversations among AI researchers about a deluge of low-quality research driven by career pressures and, in some cases, AI-assisted writing.

When asked by the Guardian for comment, Zhu said he supervised 131 papers as part of team projects run by Algoverse. The company charges $3,325 for a 12-week online mentoring program for high schoolers and undergraduates, which includes guidance on submitting work to conferences. Zhu said he helps shape methodology and experimental design and reads full drafts before submission, adding that projects in linguistics, healthcare, or education involve principal investigators or mentors with relevant expertise. The teams reportedly used standard tools such as reference managers, spellcheck, and, occasionally, language models for copy-editing.

The broader issue is that review standards in AI research vary widely. Unlike fields such as chemistry or biology, much AI/ML work does not undergo rigorous traditional peer review. Instead, papers are often presented at major conferences such as NeurIPS, where Zhu is scheduled to present. This has fed concerns among researchers that the field is overwhelmed by submissions: NeurIPS received 21,575 submissions this year, up from under 10,000 in 2020, and ICLR reported a roughly 70% increase for 2026, nearing 20,000 submissions against just over 11,000 in 2025.

Reviewers have raised concerns about paper quality, with some suspecting AI-generated content. One tech blog asked why the conference review process seems to have lost its sharpness as submissions surged. Students and academics feel intense pressure to publish, often chasing double- or triple-digit yearly paper counts, and Farid notes that some students have resorted to “vibe coding” to boost their publication metrics.

Farid advises students to rethink going into AI research altogether, given the prevalence of low-quality work produced to advance career prospects. “There’s a frenzy in AI right now,” he says; amid that volume, it is hard to keep up while still doing thoughtful, rigorous work.

Despite the chaos, valuable research does emerge. Google’s influential transformer paper, Attention Is All You Need, debuted at NeurIPS in 2017 and helped spark major AI advances. NeurIPS acknowledges that rapidly growing submission numbers, combined with a heightened emphasis on peer-reviewed acceptance, are straining its review system. Zhu’s papers largely appeared in NeurIPS workshops, which have a different selection process from the main conference and are often used for early-stage work. Still, that distinction does not fully account for one researcher accumulating over 100 papers.

The issue extends beyond NeurIPS. ICLR employs AI-assisted review tools to manage the influx of submissions, which has led to problems such as hallucinated citations and overly long, repetitive feedback. The prevailing sense of decline has even inspired position papers proposing reforms that address the submission surge, uneven review quality, and weak reviewer accountability.

Meanwhile, major tech companies and AI safety groups now post much of their work on arXiv, a preprint server with no formal peer review. The flood makes it increasingly difficult for journalists, the public, and even domain experts to discern what is genuinely impactful and what is noise. One veteran researcher notes that for the average reader, signal quality is difficult to gauge, and the landscape can feel almost unreadable.

In practical terms, if the goal is to maximize publication counts, it is surprisingly easy to push out lower-quality work. Truly thoughtful, careful research is disadvantaged in such a climate, where quantity can trump quality and reputation can hinge on conference acceptances rather than actual impact.
