Background
The Alberta Stroke Program Early CT Score (ASPECTS) is a qualitative method for grading focal hypoattenuation at brain CT in early acute stroke. However, interobserver agreement is only moderate.

Purpose
To compare ASPECTS calculated by using an automated software tool with neuroradiologist evaluation in the setting of acute stroke.

Materials and Methods
For this retrospective study, consensus ASPECTS were defined by two neuroradiologists on the basis of baseline noncontrast CT scans collected from January 2017 to December 2017 in patients with an occlusion of the middle cerebral artery and in an additional cohort of patients suspected of having stroke but with no large vessel occlusion. Imaging data from both baseline and follow-up CT were evaluated for the consensus reading. After 6 weeks, the same two neuroradiologists again determined ASPECTS by using only the baseline CT. For comparison, ASPECTS was also calculated from baseline CT images by using a commercially available software tool (RAPID ASPECTS). Both methods were compared with the consensus score by using weighted κ statistics.

Results
CT scans from 100 patients with middle cerebral artery occlusion (44 women [mean age ± standard deviation, 75 years ± 14] and 56 men [mean age, 71 years ± 14]) and 52 patients suspected of having stroke but with no large vessel occlusion (19 women [mean age, 69 years ± 18] and 33 men [mean age, 68 years ± 15]) were evaluated. The neuroradiologists showed moderate agreement with the consensus score (κ = 0.57 and κ = 0.56). Software analysis showed high agreement (κ = 0.90) with the consensus score. Software agreement was substantial (κ = 0.78) when the interval between symptom onset and imaging exceeded 1 hour and increased to high agreement (κ = 0.92) when that interval exceeded 4 hours. The neuroradiologist raters did not achieve results comparable to those of the software until more than 4 hours after symptom onset (κ = 0.83 and κ = 0.76).
Conclusion
In acute stroke of the middle cerebral artery, the Alberta Stroke Program Early CT Score calculated with automated software showed better agreement with a predefined consensus score than did human readers. © RSNA, 2019. Online supplemental material is available for this article.
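The comparisons above rest on weighted κ statistics, which credit near-misses on an ordinal scale such as ASPECTS (0-10) instead of treating every disagreement as total. As a point of reference, here is a minimal sketch of Cohen's weighted κ in plain Python; the abstract does not state the weighting scheme, so quadratic disagreement weights (a common choice for ordinal scores) are assumed, with linear weights as an option.

```python
def weighted_kappa(ratings1, ratings2, categories, weights="quadratic"):
    """Cohen's weighted kappa for two raters on an ordinal scale.

    ratings1, ratings2 : paired ratings, one entry per case
    categories         : full ordered scale (e.g. range(11) for ASPECTS),
                         so unobserved scores still shape the weights
    weights            : "quadratic" (assumed here) or "linear"
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings1)

    # Observed joint proportions of (rater1, rater2) score pairs.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings1, ratings2):
        obs[idx[a]][idx[b]] += 1.0 / n

    # Marginal proportions for each rater.
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weights: penalty grows with distance between scores.
    power = 2 if weights == "quadratic" else 1
    w = [[abs(i - j) ** power for j in range(k)] for i in range(k)]

    # Observed vs chance-expected weighted disagreement.
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp
```

Perfect agreement yields κ = 1, agreement no better than chance yields κ ≈ 0, and the quadratic weighting means a reader who scores one point off the consensus is penalized far less than one who is five points off.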