AI and the Future of Drug Approval: Inside the FDA's Talks With OpenAI

How cderGPT Could Reshape the Way We Evaluate Medicine, Speeding Up Innovation Without Sacrificing Safety

When you think of cutting-edge uses for artificial intelligence, your mind might go to self-driving cars, generative art, or virtual assistants. But behind the scenes, AI is beginning to make inroads into an industry where the stakes are arguably even higher: the approval of drugs that millions depend on for their health and survival.

In early 2025, news broke that the U.S. Food and Drug Administration (FDA) is in discussions with OpenAI, the company behind ChatGPT, about applying generative AI to the drug review process. The potential outcome? A purpose-built model dubbed “cderGPT”, named after the FDA’s Center for Drug Evaluation and Research (CDER).

If successful, this could mark one of the most significant shifts in how life-saving treatments reach the public. It could also redefine how we think about government transparency, efficiency, and the responsible use of AI in critical sectors. But before we get ahead of ourselves, let’s unpack the full story — the problem, the solution being explored, and what it could mean for the future of medicine.

The Drug Approval Problem: A Timeline That’s Far Too Long

Bringing a new drug to market is notoriously slow, often taking 10 to 15 years and costing upwards of $2.6 billion, according to a widely cited Tufts Center for the Study of Drug Development estimate. The reason for this extended timeline isn't laziness or red tape for its own sake; it's about safety. Before a drug can be sold, it must undergo a rigorous sequence of steps:

  1. Discovery and Preclinical Research: Lab work and animal studies to determine whether a drug might work.

  2. Clinical Trials (Phases I–III): These involve testing on humans, first to establish safety, then to evaluate efficacy.

  3. Review by the FDA: The agency’s scientists must comb through hundreds of thousands of pages of trial data, lab results, adverse event logs, and manufacturing details.

Each of these steps serves a purpose, but the final phase — regulatory review — is where time tends to drag the most. That’s where generative AI enters the picture.

The Answer: Introducing “cderGPT”

In what might be one of the most forward-thinking AI-government partnerships to date, the FDA is reportedly in talks with OpenAI, alongside staff from the Musk-led Department of Government Efficiency (DOGE), to develop cderGPT. The model would be built specifically to help CDER review drug applications faster and with fewer human bottlenecks.

While the full scope of what cderGPT might be able to do isn’t finalized, early discussions have outlined potential capabilities:

  • Drafting internal summaries of large and complex documents.

  • Identifying missing safety data or inconsistencies in drug applications.

  • Organizing information across thousands of pages of clinical data to flag potential risks.

Think of it as an AI-powered research assistant working 24/7. Rather than replacing human reviewers, cderGPT would augment their workflow, ideally cutting review time by weeks or even months without compromising safety or rigor. A rough sketch of what that assistance could look like in practice follows.
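None of this tooling is public, so the following is purely illustrative. The Python sketch below shows one way a reviewer-assist step might check a submission excerpt against a safety checklist using OpenAI's standard chat API; the model name, checklist items, and prompt wording are all assumptions, not anything the FDA or OpenAI has described.

```python
# Hypothetical sketch: screen one section of a drug application for
# missing safety content. The model name and checklist are assumptions;
# nothing here reflects actual FDA or OpenAI tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REQUIRED_SAFETY_ITEMS = [
    "serious adverse event summary",
    "dose-limiting toxicity findings",
    "hepatic and renal safety data",
]

def flag_missing_safety_data(section_text: str) -> str:
    """Ask the model which required safety items the excerpt fails to address."""
    prompt = (
        "You are assisting a drug reviewer. For each item below, state whether "
        "the excerpt addresses it, quoting the sentence that does, or reply "
        "'NOT FOUND'.\n\nItems:\n"
        + "\n".join(f"- {item}" for item in REQUIRED_SAFETY_ITEMS)
        + f"\n\nExcerpt:\n{section_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; a real system would use a vetted, fine-tuned model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep output as deterministic as possible for review tooling
    )
    return response.choices[0].message.content
```

In any real deployment, output like this would feed a reviewer's queue rather than drive a decision: the model drafts, a human disposes.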

Why This Matters: The Cost of Delay

Every delay in drug approval has a very real human cost. For patients with conditions like cancer, Alzheimer’s, or rare genetic disorders, waiting even one more year can mean the difference between life and death.

By the time the FDA finishes reviewing a new treatment — even a promising one that sailed through trials — thousands of patients may have died waiting. Speeding up the review process while maintaining high standards is not just a technical upgrade; it’s a moral imperative.

Moreover, from an economic perspective, delays in approvals cost the healthcare system billions in prolonged treatments, hospitalizations, and productivity losses. In some cases, patients continue to rely on older, less effective drugs simply because the newer ones are stuck in regulatory limbo.

What Makes cderGPT Different From ChatGPT?

cderGPT would not be just another version of ChatGPT with a new logo. Unlike general-purpose models, it would be fine-tuned on domain-specific datasets, such as anonymized FDA drug applications, clinical trial data, toxicology reports, and historical approvals and rejections.

This specialization could give the model far more context than your average chatbot. For example, it might:

  • Cross-reference safety outcomes across dozens of drugs in the same class.

  • Recognize red flags in statistical data that have historically led to rejections.

  • Suggest follow-up questions or tests based on missing inputs.

In other words, cderGPT wouldn’t “replace” a scientist or a reviewer — it would act like an expert assistant, deeply embedded in the regulatory workflow.
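What that fine-tuning would look like is speculative, but the mechanics are well established. Supervised fine-tuning data for a chat model is typically prepared as JSONL records of example conversations, as in the hypothetical Python sketch below; the record's content is invented for illustration.

```python
# Hypothetical sketch of one supervised fine-tuning record in the JSONL
# chat format used by OpenAI's fine-tuning API. The content is invented;
# real training data would come from curated regulatory documents.
import json

record = {
    "messages": [
        {"role": "system",
         "content": "You are a regulatory review assistant for CDER."},
        {"role": "user",
         "content": "Summarize the Phase III cardiac safety findings "
                    "in this excerpt: <excerpt text>"},
        {"role": "assistant",
         "content": "The excerpt reports no QT-interval prolongation ..."},
    ]
}

with open("training_data.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```

Thousands of curated records like this, drawn from past reviews, are what would teach a general-purpose model the conventions and red flags of regulatory science.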

The People Behind the Push: A New Type of Bureaucrat

This initiative is being led by Jeremy Walsh, the FDA’s first Chief Artificial Intelligence Officer — a newly created position that speaks volumes about how seriously the agency is taking AI’s role in the future of public health.

The FDA isn’t acting alone. The project is reportedly being supported by DOGE, the Musk-linked government efficiency initiative pushing for broader use of AI across public-sector workflows. OpenAI, for its part, has not confirmed an official partnership, and the agency is reportedly still testing various models for suitability.

The triangle of influence — OpenAI, DOGE, and the FDA — raises interesting questions about public-private collaborations and the influence of tech companies in federal decision-making.

What Could Possibly Go Wrong?

A healthy dose of skepticism is warranted. Applying AI to such a critical task raises serious challenges:

1. Data Quality

If cderGPT is trained on flawed, biased, or incomplete data, it could miss important safety signals — or worse, greenlight a dangerous drug.

2. Explainability

Regulatory decisions need to be auditable and explainable. If an AI model can’t justify its output in plain English, it’s not much use to regulators, policymakers, or the public.

3. Overreliance

The danger of human reviewers deferring too much to AI suggestions is very real. A tool like cderGPT must be an aid, not a crutch.

4. Security and Privacy

Even with redacted datasets, training on sensitive medical information comes with a host of compliance concerns — especially under HIPAA and other privacy laws.

These challenges are not insurmountable, but they demand thoughtful implementation, transparent governance, and ongoing human oversight.
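What "ongoing human oversight" looks like can be made concrete. One common pattern, sketched below in Python with invented field names, is to treat every model suggestion as an auditable record that takes effect only when a named human reviewer accepts it.

```python
# Hypothetical sketch: wrap every model suggestion in an auditable record
# so that a human reviewer's decision, not the model's output, is what counts.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewSuggestion:
    application_id: str
    model_version: str           # which model produced this, for audit trails
    suggestion: str              # the model's draft finding
    cited_pages: list[int]       # pages the model points to as evidence
    reviewer: str | None = None      # filled in only by a human
    accepted: bool | None = None     # None until a human rules on it
    decided_at: datetime | None = None

    def decide(self, reviewer: str, accepted: bool) -> None:
        """Record the human decision; the suggestion alone never takes effect."""
        self.reviewer = reviewer
        self.accepted = accepted
        self.decided_at = datetime.now(timezone.utc)
```

A store of records like this also answers the explainability concern directly: auditors can see who saw what, which evidence the model cited, and who signed off.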

A Blueprint for AI in Government?

What makes the cderGPT initiative particularly interesting is that it could become a template for how other agencies — from the EPA to the IRS — might use generative AI to streamline complex workflows.

Imagine AI models designed to assist:

  • The SEC in detecting financial fraud.

  • The EPA in analyzing environmental impact statements.

  • The IRS in flagging tax anomalies for audit faster and more fairly.

Done right, these systems could widen public access to services, reduce human error, and make government faster and smarter.

Balancing Speed With Safety: Can We Really Have Both?

One of the enduring fears about “AI in government” is that speed will come at the expense of caution. But that’s a false binary. AI is not being asked to make final decisions — at least not in the case of cderGPT.

Instead, it’s helping regulators prioritize their time, spot patterns, and surface key information so they can make better decisions, faster. This is a crucial distinction.

As Rafael Rosengarten, CEO of precision medicine company Genialis, put it in WIRED’s original report:

“You must define the quality of the data going in and set the minimum standards for the model’s performance. If you don’t do that, you can’t trust the output.”

The Road Ahead: What’s Next for cderGPT?

As of now, cderGPT is still in its exploratory phase. The FDA has not signed a formal agreement with OpenAI, and the tool has not been deployed in any official capacity.

But make no mistake — the wheels are turning. Early use cases could involve pilot projects focused on narrow disease areas like diabetes, oncology, or orphan drugs. These would allow the FDA to test the model’s capabilities in a contained setting before broader rollout.

The agency is also under pressure to modernize. In recent years, it has faced criticism for being too slow to adapt to emerging therapies, such as gene editing and mRNA vaccines. AI could help the FDA keep pace with the innovation happening in biotech, pharma, and digital health.

The Opportunity and the Responsibility

The conversation around cderGPT isn’t just about speeding up bureaucracy. It’s about how we build a smarter, fairer, and more effective public health system in the age of AI.

Will this work perfectly the first time? Probably not.

But the alternative — sticking with decades-old review processes while the rest of the world accelerates — isn’t acceptable either. If we can find a way to make drug evaluation more efficient while keeping it safe and fair, we owe it to every patient, researcher, and taxpayer to give it a serious look.

The future of medicine isn’t just being written in laboratories. Increasingly, it’s being coded in AI models — and how we use them may determine how fast hope becomes reality.
