Understanding Adverse Event Rates: Percentages and Relative Risk in Clinical Trials

Dec 2, 2025


When you hear that a new drug caused headaches in 15% of patients, it sounds simple. But that number can be wildly misleading if you don’t know how long those patients were actually on the drug. A patient taking a medication for two weeks isn’t at the same risk as someone on it for two years. That’s why the way we calculate adverse event rates matters - not just for researchers, but for patients, doctors, and regulators like the FDA.

Why Simple Percentages Don’t Tell the Whole Story

The most common way to report adverse events is the Incidence Rate (IR). It’s just the number of people who had an event divided by the total number of people in the study. So if 15 out of 100 patients got a rash, the IR is 15%. Easy. But here’s the problem: this method ignores how long each person was exposed to the drug.

Imagine two groups in a trial. Group A takes the drug for an average of 3 months. Group B takes it for 2 years. If 10 people in each group get a headache, the IR is 10% for both. But that's not a fair comparison: the people in Group B had eight times as long to develop a headache. The simple percentage hides the real risk. The FDA noticed this. In 2023, they asked a drug company to resubmit safety data using a different method - not because the original data was wrong, but because it was incomplete.

Enter Exposure-Adjusted Incidence Rate (EAIR)

EAIR fixes this by counting affected patients per patient-year of exposure. A patient-year means one person taking the drug for one full year. If 5 patients each had at least one headache over a combined 25 patient-years, the EAIR is 20 per 100 patient-years. (Note that EAIR counts patients, not episodes - the 10 total headaches those 5 patients experienced matter for a different metric, covered below.) That's a rate, not just a percentage. It tells you how often the event happens over time.
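The arithmetic is simple enough to sketch in a few lines of Python. This is a minimal illustration, not code from any regulatory toolkit; the function names and the numbers (which echo the two-group scenario above) are illustrative:

```python
def incidence_rate(n_affected, n_total):
    """Simple IR: proportion of patients with at least one event."""
    return n_affected / n_total

def eair(n_affected, patient_years, per=100):
    """EAIR: patients with at least one event per `per` patient-years."""
    return n_affected / patient_years * per

# Two groups of 100 patients, 10 affected in each, very different exposure:
# Group A: ~3 months each -> 25 patient-years total
# Group B: ~2 years each  -> 200 patient-years total
print(incidence_rate(10, 100))  # 0.1 in BOTH groups: IR hides the difference
print(eair(10, 25))             # 40.0 per 100 patient-years (Group A)
print(eair(10, 200))            # 5.0 per 100 patient-years (Group B)
```

Identical 10% percentages, but an eight-fold difference in the exposure-adjusted rate.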

This isn’t just theory. In 2023, the FDA requested EAIR for a biologics license application. That was a signal: regulators are moving away from old-school percentages. Companies like MSD found that switching to EAIR uncovered safety signals they’d missed before - especially in long-term treatments for chronic conditions. In 12% of their reviewed programs, EAIR revealed risks that IR completely buried.

But EAIR isn’t perfect. It’s harder to calculate. It needs precise start and end dates for each patient’s treatment. If a patient stops and restarts the drug, you have to account for those gaps. If you mess up the dates - and 28% of early analyses did - your numbers are garbage. That’s why pharmaceutical statisticians now spend an average of 14.7 hours to build an EAIR analysis, compared to just 4.5 hours for a simple IR.

What About Recurrent Events?

Another problem with simple percentages: they count people, not events. If one patient gets 5 nausea episodes, IR treats them the same as someone who got one. But in reality, that patient is having a much more frequent problem. That’s where Event Incidence Rate (EIR) comes in. EIR counts the total number of events - not just the number of people affected.

So if 10 patients had 30 total nausea events over 100 patient-years, EIR is 30 per 100 patient-years. That's more useful if you're trying to understand how often patients need anti-nausea meds. But here's the catch: if you only look at event counts, you might think a drug is worse than it is. One patient having 10 headaches doesn't mean 10 different people are suffering - it means one person is having a rough time. EAIR counts affected patients against exposure time, so reading EIR and EAIR side by side gives the fuller picture: how many people are affected, and how often.
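The contrast between the two metrics on the same data is worth making concrete. Again an illustrative sketch, using the nausea numbers above:

```python
def eir(n_events, patient_years, per=100):
    """Event incidence rate: TOTAL events per `per` patient-years."""
    return n_events / patient_years * per

def eair(n_affected, patient_years, per=100):
    """EAIR: patients with at least one event per `per` patient-years."""
    return n_affected / patient_years * per

# 10 patients, 30 total nausea episodes, 100 patient-years of exposure:
print(eir(30, 100))   # 30.0 events per 100 patient-years (how often)
print(eair(10, 100))  # 10.0 affected patients per 100 patient-years (how many)
```

Same trial, same exposure, a three-fold gap between the two numbers - which is exactly the signal that a few patients are having recurrent episodes.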


Relative Risk and What It Really Means

When comparing two drugs, you don’t just want to know how many people got sick. You want to know how much riskier one is than the other. That’s where relative risk comes in. It’s calculated as the ratio of two incidence rates - say, the EAIR of the new drug divided by the EAIR of the placebo.

If the new drug has an EAIR of 45 headaches per 100 patient-years and the placebo has 15, the relative risk is 3.0. That means patients on the new drug are three times more likely to get a headache per year than those on placebo. But here’s what most people miss: confidence intervals matter just as much. A relative risk of 3.0 might sound scary - but if the 95% confidence interval goes from 0.9 to 8.1, that means the real risk could be no different from placebo. The FDA looks at these intervals closely. If the range includes 1.0, they won’t consider the difference statistically significant.
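One common way to put a confidence interval around a rate ratio is the log-normal approximation for Poisson event counts. This is a sketch of that approach, not the specific method any agency mandates; the function name is made up for illustration:

```python
import math

def rate_ratio_ci(events_trt, py_trt, events_ctl, py_ctl, z=1.96):
    """Rate ratio of two incidence rates with an approximate 95% CI
    (log-normal approximation, assuming Poisson event counts)."""
    rr = (events_trt / py_trt) / (events_ctl / py_ctl)
    se = math.sqrt(1 / events_trt + 1 / events_ctl)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 45 vs 15 headaches, each over 100 patient-years:
rr, lo, hi = rate_ratio_ci(45, 100, 15, 100)
print(f"RR = {rr:.1f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these counts the interval stays well above 1.0, so the three-fold difference would read as significant; with much smaller event counts the same RR of 3.0 can come with an interval that crosses 1.0, which is exactly the scenario described above.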

Competing Risks: When Death Gets in the Way

Here’s a tricky one: what if a patient dies before they can have a heart attack? The heart attack never happens - not because the drug didn’t cause it, but because death cut the observation short. This is called a competing risk. Traditional methods like the Kaplan-Meier estimator can give you biased results here. They treat death as ordinary censoring - as if the patient were merely lost to follow-up - rather than as a reason the heart attack could never occur, which inflates the estimated event risk.

A 2025 study showed that using standard methods in these cases can mislead by up to 22% when competing events like death are common. The fix? Cumulative hazard ratio estimation. This method separates the risk of death from the risk of the adverse event. It doesn’t pretend the event didn’t happen - it calculates how likely it was to happen before death. The FDA hasn’t mandated this yet, but researchers say it’s the future. Especially in cancer trials, where death is a real possibility, ignoring competing risks is like ignoring the weather when predicting flooding.
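The direction of the bias can be shown on toy data. Below, the naive estimate (one minus a Kaplan-Meier survival curve that censors deaths) is compared against an Aalen-Johansen-style cumulative incidence, which lets deaths shrink the pool of patients who can still have the event. The six-subject dataset is invented purely for illustration:

```python
# Each subject: (time, status); status 1 = adverse event, 2 = death, 0 = censored.
data = [(1, 1), (2, 2), (3, 1), (3, 2), (4, 0), (5, 1)]
times = sorted({t for t, s in data if s in (1, 2)})

def at_risk(t):
    return sum(1 for time, _ in data if time >= t)

# Naive: Kaplan-Meier treating death as censoring; risk = 1 - survival.
surv = 1.0
for t in times:
    d = sum(1 for time, s in data if time == t and s == 1)
    surv *= 1 - d / at_risk(t)
naive_risk = 1 - surv

# Cumulative incidence: each event contributes its hazard weighted by the
# probability of still being event-free AND alive just before time t.
cif, overall = 0.0, 1.0
for t in times:
    n = at_risk(t)
    d_event = sum(1 for time, s in data if time == t and s == 1)
    d_death = sum(1 for time, s in data if time == t and s == 2)
    cif += overall * d_event / n
    overall *= 1 - (d_event + d_death) / n

print(naive_risk, cif)  # the naive estimate exceeds the cumulative incidence
```

On this toy dataset the naive method claims everyone would eventually have the event, while the competing-risk estimate correctly caps the probability below one because two subjects died first.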


Regulatory Shifts and What They Mean for You

The FDA isn’t alone. The European Medicines Agency (EMA) still lets companies choose between IR and EAIR - but they require a clear explanation for why they picked one over the other. The International Council for Harmonisation (ICH) has been pushing for this since 2020. Their E9(R1) guidelines say safety analyses must account for treatment discontinuation and exposure time. That’s not a suggestion - it’s a requirement.

The numbers show the shift is real. In 2020, only 12% of FDA submissions included exposure-adjusted metrics. By 2023, that jumped to 47%. Companies are investing heavily. The global clinical trial safety software market hit $1.84 billion in 2023, growing 22.7% that year alone. Why? Because if you don’t use EAIR, your application might get rejected.

Even CDISC - the global standard for clinical data - now requires both IR and EAIR for serious adverse events in oncology trials. And MedDRA, the coding system used to classify adverse events, added 47 new terms in 2023 just to handle time-based reporting.

What You Should Do Now

If you’re reviewing a clinical trial report, don’t just look for percentages. Ask: What method was used? Is it IR? EIR? EAIR? If it’s IR, ask whether exposure times were similar across groups. If they weren’t, the data is probably misleading.

If you’re working on a trial, start building EAIR into your safety analyses now. Use standardized tools like the PhUSE GitHub macros - they’ve been downloaded over 1,800 times and cut programming errors by 83%. Validate your exposure times. Check that no patient’s treatment duration exceeds the study period. Make sure you’re handling treatment interruptions correctly.

And if you’re a patient or a doctor reading a drug label, understand this: the 15% headache rate might be based on short-term data. The real risk over a year could be much higher. Regulatory agencies are catching up. You should too.

What’s Next?

The FDA’s 2024 draft guidance on exposure-adjusted analysis is open for public comment. It’s expected to become final in 2025. By 2027, experts predict 92% of Phase 3 drug submissions will include EAIR. Machine learning tools are already being tested by the FDA’s Sentinel Initiative to automatically spot safety signals using EAIR - and early results show a 38% improvement in detection.

This isn’t about making statistics fancier. It’s about making safety data honest. The old way gave us numbers that looked clean but told lies. The new way gives us numbers that are messy - but true.