Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, the literature has thus far not considered the important issue of correcting for subject head motion. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as a result the benefits of 4D algorithms have not yet been reaped in clinical settings, where head movement undoubtedly occurs. This work addresses that need by developing a motion compensation method for fully 4D reconstruction which uses an optical tracking system to measure head motion and PET superset data to store the motion-compensated data. List-mode events are histogrammed into PET superset data according to the measured motion, which in turn requires a specially devised normalization scheme for motion-compensated reconstruction from the superset data. We therefore propose the corresponding time-dependent normalization modifications required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion on fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames and for parametric images of tracer uptake and volume of distribution for (18)F-FDG obtained from Patlak analysis.
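
To make the Patlak analysis mentioned above concrete, the sketch below illustrates the standard Patlak graphical method on synthetic time-activity curves. This is a minimal illustration, not the paper's implementation: the function name `patlak_fit`, the synthetic input function, and the kinetic constants are all our own assumptions. For an irreversibly trapped tracer such as FDG, after an equilibration time t*, C_T(t)/C_p(t) is linear in the "Patlak time" (the integral of the plasma input divided by its current value); the slope estimates the net uptake rate Ki and the intercept the initial distribution volume V0.

```python
import numpy as np

def patlak_fit(t, c_plasma, c_tissue, t_star):
    """Least-squares Patlak fit using frames with t >= t_star.

    Hypothetical helper (not from the paper): returns the estimated
    net uptake rate Ki (slope) and distribution volume V0 (intercept).
    """
    # Cumulative trapezoidal integral of the plasma input function
    int_cp = np.concatenate(([0.0], np.cumsum(
        0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
    x = int_cp / c_plasma          # "Patlak time"
    y = c_tissue / c_plasma        # normalized tissue activity
    mask = t >= t_star             # use only late, quasi-equilibrated frames
    ki, v0 = np.polyfit(x[mask], y[mask], 1)
    return ki, v0

# Synthetic test data constructed to obey the Patlak relation exactly
t = np.linspace(0.1, 60.0, 120)                # frame mid-times, minutes
c_plasma = 10.0 * np.exp(-0.1 * t) + 1.0       # assumed decaying input curve
int_cp = np.concatenate(([0.0], np.cumsum(
    0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
ki_true, v0_true = 0.05, 0.6                   # assumed ground-truth values
c_tissue = ki_true * int_cp + v0_true * c_plasma

ki, v0 = patlak_fit(t, c_plasma, c_tissue, t_star=20.0)
print(round(ki, 4), round(v0, 4))              # recovers 0.05 and 0.6
```

In a parametric-imaging setting, a fit like this would be performed per voxel over the reconstructed frame values, producing an uptake-rate image (slope) and a distribution-volume image (intercept).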