Transcatheter aortic valve implantation (TAVI) is a highly effective treatment for patients with severe aortic stenosis. Accurate valve positioning is critical to successful TAVI, and precise real-time visualization with minimal use of contrast is especially important for patients with chronic kidney disease. Under fluoroscopic conditions, which often suffer from low contrast, high noise, and artifacts, automatic segmentation of anatomical structures using convolutional neural networks (CNNs) can significantly improve the accuracy of valve positioning. This paper presents a comparative analysis of CNN architectures for automatic aortic root segmentation on angiographic images, with the aim of optimizing the TAVI procedure. The experimental evaluation included FPN, U-Net++, DeepLabV3+, LinkNet, MA-Net, and PSPNet, all trained and tested with tuned hyperparameters. In terms of training dynamics, DeepLabV3+ and U-Net++ showed stable convergence, with median Dice scores around 0.88. However, in the patient-level evaluation, MA-Net and PSPNet outperformed the other models, achieving Dice coefficients of 0.942 and 0.936, respectively, and an average symmetric surface distance (ASSD) of 4.1 mm. These findings underscore the potential of incorporating automatic segmentation into decision-support systems for cardiac surgery, reducing contrast agent use, minimizing surgical risks, and improving valve positioning accuracy. Future work will focus on expanding the dataset, exploring additional architectures, and adapting the models for real-time use.
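The abstract reports results in terms of the Dice coefficient and the average symmetric surface distance (ASSD). The sketch below shows one common way these two metrics can be computed for binary segmentation masks using NumPy and SciPy; it is an illustrative assumption rather than the authors' evaluation code, and the `spacing` parameter is a hypothetical stand-in for the pixel spacing that would normally come from the angiography metadata.

```python
import numpy as np
from scipy import ndimage


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0


def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)


def assd(pred: np.ndarray, gt: np.ndarray, spacing: float = 1.0) -> float:
    """Average symmetric surface distance between two binary masks.

    `spacing` converts pixel units to millimetres (hypothetical value;
    in practice it is read from the image metadata).
    """
    s_pred, s_gt = surface(pred), surface(gt)
    # Distance from every pixel to the nearest boundary pixel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~s_gt) * spacing
    dist_to_pred = ndimage.distance_transform_edt(~s_pred) * spacing
    # Symmetric average: pred-surface -> gt-surface and gt-surface -> pred-surface.
    all_dists = np.concatenate([dist_to_gt[s_pred], dist_to_pred[s_gt]])
    return float(all_dists.mean())
```

For example, calling `dice_coefficient(model_mask, expert_mask)` and `assd(model_mask, expert_mask, spacing=0.3)` on a predicted and a manually annotated aortic root mask would yield the per-image values that are then aggregated per patient, as in the evaluation described above (the 0.3 mm spacing is purely illustrative).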