This paper introduces a novel kernel variable importance measure (KvIM) based on the maximum mean discrepancy (MMD). KvIM quantifies the contribution of each individual dimension to the distributional difference between two samples: it constructs a weighted MMD and evaluates the change in MMD induced by perturbing the weight assigned to each dimension. KvIM has several notable advantages: it is nonparametric and model-free, accounts for dependencies among dimensions, and is suitable for high-dimensional data. Additionally, we establish the consistency of the empirical KvIM under general conditions, along with its theoretical properties in high-dimensional settings. Furthermore, we apply KvIM to classification problems and streaming datasets, proposing a KvIM-enhanced classification approach and an online KvIM. These applications, supported by extensive numerical experiments, demonstrate the practical utility of the proposed KvIM in diverse scenarios.
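To illustrate the general idea of a weight-perturbation importance score built on MMD (a minimal sketch only, not the paper's exact estimator; the kernel form, perturbation size `eps`, and function names are assumptions for illustration):

```python
import numpy as np

def weighted_mmd2(X, Y, w, sigma=1.0):
    # Biased empirical MMD^2 with a Gaussian kernel in which each
    # coordinate's squared distance is rescaled by the weight w[j].
    def gram(A, B):
        diff = A[:, None, :] - B[None, :, :]
        d2 = np.sum(w * diff**2, axis=-1)
        return np.exp(-d2 / (2 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

def importance_scores(X, Y, eps=0.5, sigma=1.0):
    # Illustrative per-dimension scores: shrink each dimension's
    # weight by eps and record how much the weighted MMD^2 drops.
    d = X.shape[1]
    base_w = np.ones(d)
    base = weighted_mmd2(X, Y, base_w, sigma)
    scores = np.empty(d)
    for j in range(d):
        w = base_w.copy()
        w[j] -= eps  # perturb the weight on dimension j
        scores[j] = base - weighted_mmd2(X, Y, w, sigma)
    return scores

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = rng.normal(size=(200, 5))
Y[:, 0] += 2.0  # the two samples differ only in coordinate 0
print(importance_scores(X, Y).round(3))
```

In this toy example the score for the first coordinate dominates, since downweighting the only dimension carrying a distributional difference reduces the MMD the most.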