In virtual reality, when a monocular 360° video canvas surrounds virtual objects, the depth mismatch between the canvas and the objects produces visual artifacts: the monocular depth cues supplied by the canvas override the binocular depth cues presented by the virtual objects. In this paper, I propose an algorithm that geometrically transforms each virtual object to compensate for this mismatch, enabling natural fusion of virtual objects with 360° environments in virtual reality.
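The abstract does not specify the transformation itself, but the core idea can be illustrated with a minimal sketch. One simple geometric compensation (an assumption for illustration, not necessarily the paper's method) re-projects a virtual object onto the canvas sphere: the object is moved along the viewer's line of sight to the canvas radius and uniformly scaled by the same factor, so its angular size and monocular appearance are unchanged while its binocular disparity now agrees with the zero-disparity canvas. The function name `compensate_depth` and its parameters are hypothetical.

```python
import math

def compensate_depth(position, size, canvas_radius):
    """Re-project a virtual object onto the 360 canvas sphere.

    Moves the object along the line of sight from the viewer (assumed
    at the origin) out to the canvas radius, scaling it by the same
    factor so its angular size is preserved. The object's binocular
    depth then matches the canvas, removing the cue conflict.
    """
    d = math.sqrt(sum(c * c for c in position))  # current viewing distance
    s = canvas_radius / d                        # uniform scale factor
    new_position = tuple(c * s for c in position)
    new_size = size * s
    return new_position, new_size

# An object 2 m away with size 0.5 m, canvas sphere at 10 m:
pos, sz = compensate_depth((0.0, 0.0, 2.0), 0.5, 10.0)
# The object lands on the canvas sphere (|pos| = 10) at size 2.5,
# keeping the same angular size (0.5/2 = 2.5/10).
```

Because position and size are scaled by the same factor, the retinal projection of the object is unchanged for a viewer at the origin; only its stereoscopic depth moves, which is exactly the cue the canvas constrains.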