Embodied learning systems rely on motion synthesis to enable efficient and flexible learning during continuous online deployment. Motion motivated by learning needs can be found throughout natural systems, yet surprisingly little is known about synthesizing motion to support learning in robotic systems. Learning goals create a distinct set of control-oriented challenges, including how to choose measures as objectives, synthesize real-time control based on those measures, impose physics-oriented constraints on learning, and produce analyses that guarantee performance and safety despite limited knowledge. In this talk, I will discuss learning tasks that robots encounter, measures for the information content of observations, and algorithms for generating action plans. Examples from biology and robotics will be used throughout the talk, and I will conclude with future challenges.