U-Net is arguably the most cited and widely used deep learning architecture in the biomedical domain. Its applications range from image, volume, and video segmentation in fields such as digital pathology to Colony-Forming Unit (CFU) segmentation, and new emerging tasks call for a U-Net reformulation that addresses the inherent limitations of a purely segmentation-tailored loss function, such as the Dice Similarity Coefficient, applied at the training step. One such task is segmentation-driven CFU counting, where, given a segmentation output map, one must count all distinct segmented regions belonging to different detected microbial colonies. This can be a challenging endeavor, as a pure segmentation objective tends to produce many irrelevant artifacts or flipped pixels. Our novel multi-loss U-Net reformulation offers an efficient solution to this problem: it introduces an additional loss term at the bottom-most U-Net level, which provides an auxiliary signal indicating where to look for distinct CFUs. Overall, our experiments show that all probed multi-loss U-Net architectures consistently outperform their single-loss counterparts, which rely on the Dice Similarity Coefficient or Cross-Entropy training loss alone.
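To make the multi-loss idea concrete, the sketch below combines a full-resolution soft Dice segmentation loss with an auxiliary loss computed at the bottleneck resolution. This is a minimal NumPy illustration, not the paper's implementation: the function names, the choice of binary cross-entropy as the auxiliary term, and the `aux_weight` balancing coefficient are all assumptions for illustration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss on probability maps: 1 - Dice Similarity Coefficient.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-6):
    # Binary cross-entropy averaged over pixels; clipping avoids log(0).
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def multi_loss(seg_pred, seg_target, bottleneck_pred, bottleneck_target,
               aux_weight=0.5):
    # Main segmentation loss at full resolution plus an auxiliary loss on a
    # coarse map predicted at the bottom-most (bottleneck) U-Net level.
    # aux_weight is a hypothetical coefficient balancing the two terms.
    main = dice_loss(seg_pred, seg_target)
    aux = bce_loss(bottleneck_pred, bottleneck_target)
    return main + aux_weight * aux
```

During training, `bottleneck_target` would be a downsampled version of the ground-truth mask (or a derived colony-location map), so the bottleneck features receive a direct supervisory signal about where distinct CFUs lie.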