As the prevalence of polypharmacy increases with our aging population, the propensity for adverse drug-drug interactions arising from the altered metabolism of co-administered medicines remains an important consideration in drug development. Mechanism-based inactivation (MBI) of the cytochrome P450 enzyme system is responsible for many clinically relevant drug-drug interactions (DDIs) because of the irreversible and long-lasting nature of the enzyme inactivation. Unlike competitive inhibition, MBI persists after the inactivator has been cleared from the system, since de novo enzyme synthesis is required to restore metabolic activity. Recognizing the potential severity of DDIs arising from MBI, there is an increasing need for predictive methodologies that enable prospective assessment of the likelihood of a clinical DDI. Steady-state models, which simplify the system to a single inactivator concentration and assume static, equilibrium conditions, are important tools for assessing DDI potential. More sophisticated, physiologically based models offer advantages over the static models by accounting for changing inactivator concentration over time and by incorporating population variability into the prediction. Despite the differences between the static and dynamic approaches, a key consideration for both is the sensitivity of the models to the input parameters. These inputs include the inactivator-specific kinetic parameters describing MBI in terms of potency (K(I)) and inactivation rate (k(inact)), the unbound inactivator concentration (I(u)), and the enzyme degradation rate (k(deg)). This commentary examines how the selection of input parameters, and the uncertainty in their assessment, affect the prediction of DDIs arising from MBI, and discusses the relevance to risk assessment.
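To make the parameter sensitivity concrete, the widely used steady-state (static) model for MBI can be sketched as follows. This is a minimal illustration, not the specific model evaluated in the commentary: it assumes the standard form in which the fold change in victim-drug exposure (AUC ratio) depends on the inactivation rate k(inact)·I(u)/(K(I) + I(u)) relative to the enzyme degradation rate k(deg), optionally weighted by the fraction of victim clearance through the affected enzyme (fm, an illustrative parameter here). Function and variable names are hypothetical.

```python
def static_mbi_auc_ratio(KI, kinact, Iu, kdeg, fm=1.0):
    """Steady-state (static) estimate of the AUC ratio for a victim drug
    when its metabolizing P450 is subject to mechanism-based inactivation.

    KI     : inactivator potency (same concentration units as Iu)
    kinact : maximal inactivation rate constant (1/time)
    Iu     : unbound inactivator concentration
    kdeg   : first-order enzyme degradation rate constant (1/time)
    fm     : fraction of victim clearance via the inactivated enzyme
    """
    # Apparent inactivation rate at the given unbound concentration
    lam = kinact * Iu / (KI + Iu)
    # Fraction of enzyme activity remaining at steady state is
    # kdeg / (kdeg + lam); with fm = 1 this reduces to
    # AUC ratio = 1 + kinact*Iu / (kdeg*(KI + Iu)).
    return 1.0 / (fm / (1.0 + lam / kdeg) + (1.0 - fm))
```

Because k(deg) appears in the denominator of the inactivation term, the predicted AUC ratio is highly sensitive to its assumed value, which illustrates why uncertainty in k(deg) (a parameter that is difficult to measure in vivo) propagates strongly into the DDI risk prediction.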