To investigate 3D (spatial and temporal) convolutional neural networks (CNNs) for real-time, on-the-fly magnetic resonance imaging (MRI) reconstruction. In particular, we investigated the applicability of training CNNs on a patient-by-patient basis for the purpose of lung tumor segmentation. Data were acquired with our 3 T Philips Achieva system. A retrospective analysis was performed on six non-small cell lung cancer patients who received fully sampled dynamic acquisitions consisting of 650 free-breathing images using a bSSFP sequence. We retrospectively undersampled the six patients' data at 5× and 10× acceleration. The retrospective data were used to quantitatively compare the CNN reconstructions with the fully sampled ground truth data, using the Dice coefficient (DC) and centroid displacement to compare the tumor segmentations. Reconstruction noise was investigated using the normalized mean square error (NMSE). We further validated the technique using prospectively undersampled data from a volunteer and a motion phantom. The retrospectively undersampled data at 5× and 10× acceleration were reconstructed using patient-specific trained CNNs. The patient-averaged DCs for the tumor segmentation at 5× and 10× acceleration were 0.94 and 0.92, respectively. These DC values are greater than the inter- and intra-observer agreement of segmentations produced by expert radiation oncologists, as reported in a previous study of ours. Furthermore, a patient-specific CNN can be trained in under 6 h, and the reconstruction time was 65 ms per image. The prospectively undersampled data reconstructed with the CNN yielded qualitatively acceptable images. In this proof-of-concept study, we have shown that 3D CNNs exploiting both spatial and temporal data can be used for real-time, on-the-fly dynamic image reconstruction. We evaluated the technique on six retrospectively undersampled lung cancer patient data sets, as well as on prospectively undersampled data acquired from a volunteer and a motion phantom.
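The evaluation metrics named above (Dice coefficient, centroid displacement, and NMSE) have standard definitions. A minimal NumPy sketch of those definitions follows; the function names and the isotropic `voxel_size` parameter are our illustrative choices, not identifiers from the study:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    seg_a = seg_a.astype(bool)
    seg_b = seg_b.astype(bool)
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())

def nmse(recon, reference):
    """Normalized mean square error: ||recon - ref||^2 / ||ref||^2."""
    return np.sum((recon - reference) ** 2) / np.sum(reference ** 2)

def centroid_displacement(seg_a, seg_b, voxel_size=1.0):
    """Euclidean distance between the centroids of two binary masks,
    scaled by an (assumed isotropic) voxel size."""
    centroid_a = np.array(np.nonzero(seg_a)).mean(axis=1)
    centroid_b = np.array(np.nonzero(seg_b)).mean(axis=1)
    return float(np.linalg.norm((centroid_a - centroid_b) * voxel_size))
```

For example, two 2 × 2 square masks offset by one voxel share half their area, giving a Dice coefficient of 0.5 and a centroid displacement of one voxel; a reconstruction identical to its reference gives an NMSE of 0.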