Image-guided surgery is now routine practice in many hospitals. Pre-operative data and navigation systems provide different views of the task at hand, but because these views are displayed on monitors, surgeons must integrate several spatial frames of reference to map the displayed data onto the patient, which is a cognitively demanding task. Qualitative spatio-temporal representation and reasoning (QSTR) is a subfield of Artificial Intelligence that explicitly deals with formal abstract models of spatial knowledge. Based on our expertise in QSTR, we argue for integrating QSTR approaches into visual information display in order to reduce the cognitive load on surgeons.