Abstract:
Neuroblastoma is a cancer originating from immature nerve cells that occurs mostly in infants and young children. The morphology of neuroblastoma tumors is highly complex, exhibiting variations in location, shape, and size. Furthermore, tumors are often located near critical anatomical structures, making it difficult to differentiate between the tumor and surrounding tissue. This complexity poses significant challenges for preoperative evaluation and surgical planning. To better assist clinicians in preoperative diagnosis and treatment, this paper proposes a diagnostic and treatment support method for neuroblastoma based on semantic segmentation and three-dimensional (3D) transparent visualization. For semantic segmentation, we introduce an ensemble learning framework that leverages multiple pretrained nnU-Net architectures. Unlike the default nnU-Net configuration, which averages the outputs of multiple models equally during inference, our framework uses a Dice-weighted voting mechanism in which each model’s contribution to the final prediction is proportional to its Dice score on the validation set. This nonuniform ensemble strategy allows better-performing models to contribute more strongly to the result, improving segmentation accuracy and boundary consistency while maintaining robustness. The proposed framework is designed for small-sample scenarios and effectively utilizes multimodal medical imaging data (e.g., T1, T2, B0, B100). To validate the method, we conducted comparative experiments on the pediatric neuroblastoma dataset provided by SPPIN 2023. The results demonstrate that our method outperforms conventional baselines in terms of Dice coefficient, Hausdorff distance, and volumetric similarity. Furthermore, to evaluate the effectiveness of the proposed voting-based ensemble strategy, we applied the same weighted scheme to the BraTS 2021 brain tumor dataset, where comparable performance improvements were observed.
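The Dice-weighted voting described above can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the function name, the assumed (class, depth, height, width) array layout, and the normalization of weights are our own assumptions.

```python
import numpy as np

def dice_weighted_ensemble(prob_maps, dice_scores):
    """Fuse per-model softmax probability maps, weighting each model
    in proportion to its validation Dice score (illustrative sketch).

    prob_maps   : list of arrays, each of shape (C, D, H, W)
    dice_scores : list of validation Dice coefficients, one per model
    Returns the voxel-wise argmax label map of shape (D, H, W).
    """
    weights = np.asarray(dice_scores, dtype=np.float64)
    weights = weights / weights.sum()            # normalize to sum to 1
    # Weighted average of probability maps; better models count more.
    fused = sum(w * p for w, p in zip(weights, prob_maps))
    return np.argmax(fused, axis=0)              # final label per voxel
```

With equal Dice scores this reduces to the uniform averaging used by default nnU-Net ensembling; unequal scores shift borderline voxels toward the stronger models’ predictions.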
By incorporating the semantic segmentation results, we developed a transparent visualization approach, based on a method known as stochastic point-based rendering, that enables clear and intuitive observation of the segmented tumor and its surrounding anatomical structures. This rendering technique provides realistic, rapid, and semi-transparent 3D visualization of point sets by using statistical algorithms to represent spatial information. Unlike conventional 3D rendering methods, which often require computationally intensive depth sorting to preserve spatial relationships, our method maintains an accurate sense of depth without such sorting, improving efficiency while ensuring visual fidelity. In our study, we generated the point sets by sequentially reconstructing two-dimensional semantic segmentation outputs across image slices, transforming planar segmentation data into a coherent 3D point cloud. The color of each point in the cloud is derived from the semantic labels assigned to the tumor region, combined with the intrinsic coloration of the surrounding tissues, yielding a composite visual output that preserves both anatomical realism and semantic interpretability. Through stochastic point-based rendering, both the color-coded tumor regions and adjacent anatomical structures are visualized simultaneously within a single perspective image. This unified view allows clinicians to efficiently assess the patient’s condition from a single fixed viewpoint, without needing to manipulate the model or switch perspectives. As a result, the proposed method significantly enhances preoperative spatial understanding and the perception of anatomical relationships, supporting clinicians in fully comprehending complex pathological scenarios.
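The slice-to-point-cloud step can be sketched as below. This is a simplified sketch under our own assumptions (the function name, the label-to-color mapping, and the voxel-spacing handling are illustrative); the paper’s pipeline additionally blends in the intrinsic tissue coloration, which is omitted here.

```python
import numpy as np

def masks_to_point_cloud(slice_masks, spacing=(1.0, 1.0, 1.0),
                         label_colors=None):
    """Stack per-slice 2D label masks into a colored 3D point cloud.

    slice_masks  : list of (H, W) integer label arrays, one per slice
    spacing      : (dz, dy, dx) physical voxel spacing
    label_colors : dict mapping label -> (r, g, b); labels not listed
                   (e.g. background 0) are skipped
    Returns (points, colors): (N, 3) coordinates and (N, 3) RGB values.
    """
    if label_colors is None:
        label_colors = {1: (1.0, 0.0, 0.0)}      # assumed: tumor label 1 in red
    volume = np.stack(slice_masks, axis=0)        # (Z, H, W) label volume
    points, colors = [], []
    for label, rgb in label_colors.items():
        zyx = np.argwhere(volume == label)        # voxel indices of this label
        points.append(zyx * np.asarray(spacing))  # scale to physical space
        colors.append(np.tile(rgb, (len(zyx), 1)))
    return np.concatenate(points), np.concatenate(colors)
```

The resulting point set could then be handed to a renderer such as a stochastic point-based rendering engine; that step is outside the scope of this sketch.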
Overall, this visualization strategy serves as a valuable auxiliary tool in preoperative planning and decision-making, offering considerable potential for clinical application in precision diagnostics and surgical guidance.