Facial age is a socially salient cue that shapes impression formation and social cognition, yet its neurocognitive mescanisms remain unclear. This study sought to establish a three-stage model of facial age processing: structural encoding, prototype matching, and affective evaluation. We recorded electroencephalography (EEG) while participants judged the age of faces from four age groups (10, 30, 50, and 70 years), combining component-based and mass-univariate event-related potential (ERP) analyses with time-frequency and functional connectivity analyses. ERPs showed stage-specific age effects: older faces evoked larger N170 amplitudes, reduced P2 responses, and enhanced late positive potentials (LPP). Mass-univariate analyses corroborated these effects, identifying three significant time windows (70-168 ms, 228-286 ms, and 342-800 ms) over occipital and temporo-occipital sensors, with the strongest differentiation between the oldest and the younger faces. Time-frequency analysis revealed increased theta (4-8 Hz) and alpha (8-13 Hz) power during early structural encoding (∼100-200 ms), accompanied by widespread theta/alpha phase-based connectivity, indicating global coordination during initial age encoding. During prototype matching (∼200-300 ms), only local theta activity persisted, suggesting localized processing without large-scale network engagement. The late stage (>300 ms) was indexed by LPP modulations, reflecting age-related affective evaluation. Overall, facial age processing shifts dynamically from early global coordination to later localized processing, providing a mechanistic account of how the brain extracts age information from faces.
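
For orientation, the listing below is a minimal sketch of the three analysis streams named above (component-based ERPs, time-frequency power, and phase-based connectivity) in MNE-Python. The file name, event codes, sensor picks, and window boundaries are illustrative assumptions for this sketch, not the authors' actual pipeline.

    """Minimal sketch of the analysis streams described in the abstract,
    using MNE-Python. File names, event codes, channel picks, and analysis
    windows are illustrative assumptions, not the authors' pipeline."""
    import numpy as np
    import mne
    from mne.time_frequency import tfr_morlet
    from mne_connectivity import spectral_connectivity_epochs

    # Hypothetical preprocessed epochs: -200 to 800 ms around face onset,
    # one assumed event code per face age group (10, 30, 50, 70 years).
    epochs = mne.read_epochs("sub-01_age-task-epo.fif")

    # --- 1) Component-based ERPs: average per age group, then measure a
    # component (here N170) in its canonical window over posterior sensors.
    evokeds = {age: epochs[age].average() for age in ["10", "30", "50", "70"]}
    n170_win = (0.150, 0.190)  # seconds; window choice is an assumption
    posterior = mne.pick_channels(epochs.ch_names,
                                  include=["P7", "P8", "PO7", "PO8"])
    for age, evo in evokeds.items():
        data = evo.copy().crop(*n170_win).data[posterior]
        print(f"{age} yrs: mean N170 amplitude = {data.mean() * 1e6:.2f} µV")

    # --- Mass-univariate stream (not shown in full): cluster-based
    # permutation tests across sensors and time points, e.g. with
    # mne.stats.spatio_temporal_cluster_test.

    # --- 2) Time-frequency analysis: Morlet wavelets spanning theta
    # (4-8 Hz) and alpha (8-13 Hz), baseline-corrected as percent change.
    freqs = np.arange(4.0, 14.0, 1.0)
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                       return_itc=False, average=True)
    power.apply_baseline(baseline=(-0.2, 0.0), mode="percent")

    # --- 3) Phase-based connectivity (here phase-locking value) across
    # sensors in the early encoding window (~100-200 ms).
    con = spectral_connectivity_epochs(
        epochs.copy().crop(0.1, 0.2), method="plv",
        fmin=4.0, fmax=13.0, faverage=True)

Group-level statistics on the ERP amplitudes, spectral power, and connectivity values would then follow the windows and sensor regions reported above.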