Deep learning has revolutionized the field of AI. Despite this, much progress is still needed before deep learning can be deployed in safety-critical applications, such as autonomous aircraft, because current deep learning systems are not robust to real-world nuisances (e.g., viewpoint, illumination, partial occlusion). In this talk, we take a step toward constructing robust deep learning systems by addressing the fact that state-of-the-art Convolutional Neural Network (CNN) classifiers and detectors are vulnerable to small perturbations, including shifts of the image or camera. While various forms of specially engineered “adversarial perturbations” that fool deep learning systems are well documented, it is surprising that modern CNNs can change their predicted class probability by up to 30% under a simple one-pixel shift of the image. This lack of translational stability appears to be a partial cause of the “flickering” observed when state-of-the-art object detectors are applied to video. In this talk, we introduce this phenomenon, propose a solution, prove it analytically, validate it empirically, and explain why existing CNNs exhibit this behavior.
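The shift instability described above can be illustrated with a toy one-dimensional "CNN layer": a fixed smoothing convolution followed by stride-2 max pooling. This is a minimal sketch for intuition only (the kernel, signal, and pooling choices are illustrative assumptions, not the networks studied in the talk); it shows that strided downsampling breaks shift equivariance, so a one-pixel shift of the input yields a feature map that is not a shifted copy of the original.

```python
import numpy as np

def conv1d(x, k):
    # 'valid' cross-correlation of signal x with kernel k
    return np.convolve(x, k[::-1], mode="valid")

def max_pool(x, stride=2):
    # non-overlapping max pooling with the given stride
    n = len(x) // stride
    return x[: n * stride].reshape(n, stride).max(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(32)          # toy input "image" (1-D signal)
k = np.array([0.25, 0.5, 0.25])      # fixed smoothing kernel (assumed)

out = max_pool(conv1d(x, k))
out_shift = max_pool(conv1d(np.roll(x, 1), k))  # one-pixel (circular) shift

# If the pipeline were shift-equivariant, out_shift would equal some
# circular shift of out. With stride-2 pooling, no shift of out matches:
min_diff = min(
    float(np.max(np.abs(out_shift - np.roll(out, s)))) for s in range(len(out))
)
print(min_diff)  # strictly positive: the feature map changed, not just shifted
```

Because the stride-2 pooling grid is fixed to even indices, a one-pixel input shift changes which samples are pooled together, so the output cannot be recovered by shifting; this is the same aliasing mechanism behind the classification-probability jumps in full CNNs.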