Robotic surgical systems have contributed greatly to the advancement of minimally invasive surgery (MIS). In particular, telesurgical robots provide enhanced dexterity to surgeons performing MIS procedures. However, current teleoperated robotic systems offer only limited situational awareness of the patient anatomy and surgical environment compared with what is typically available to a surgeon in open surgery. Although the endoscopic view enhances visualization of the anatomy, perceptual understanding of the environment and anatomy remains limited due to the absence of sensory feedback. To address these limitations, we present an algorithmic software framework that provides Complementary Situational Awareness (CSA) in a surgical assistant. The framework aims to improve human-robot collaboration by providing rich guidance and sensory feedback to the surgeon during complex MIS procedures. Unlike traditional teleoperation, it enables the user to telemanipulate a situational model in a virtual environment and uses that information to command the slave robot with appropriate admittance gains and environmental constraints. Simultaneously, the situational model is updated based on the interaction of the slave robot with the task-space environment. We describe the high-level and mid-level components that provide CSA and identify the capabilities required for any robotic platform to incorporate it. We also demonstrate the use of our framework for constrained model-mediated teleoperation on the open-source da Vinci Research Kit (dVRK) hardware.