As autonomous robots move beyond structured factory floors into open, human environments, they face an autonomy–alignment problem: how to preserve open‑ended learning (OEL) while ensuring that what is learned serves human practical purposes and values. This talk first presents a unifying purpose framework that treats purpose as a design primitive encoding what humans want a robot to learn, do, or avoid, independently of task domains. The framework formalises alignment, establishes necessary and sufficient conditions under which it holds, and decomposes the autonomy–alignment challenge into four tractable sub‑problems: aligning robot and human purposes, arbitrating among multiple purposes, grounding abstract purposes into domain‑specific goals, and acquiring the competencies to achieve them. The second half of the talk instantiates one pathway enabled by the framework: intrinsic purpose. I introduce Purpose‑Directed Open‑Ended Learning (POEL), which operationalises intrinsic purpose by combining speech‑based purpose specification, large‑language‑model reasoning, and computer vision to estimate the purpose relevance of objects in the scene. These estimates are then used to modulate intrinsic rewards and induce spatial exploration biases that steer the robot's exploration toward user‑relevant interactions while retaining the breadth of OEL. In two simulated manipulation domains, POEL accelerates learning and achieves higher success on previously unseen, purpose‑aligned tasks than state‑of‑the‑art OEL baselines. Together, the framework and model demonstrate how ‘purpose in the loop’ can reconcile autonomy with alignment, offering a principled path to open‑ended robots that learn autonomously yet remain reliably useful and safe for people.
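To make the reward‑modulation idea concrete, here is a minimal sketch, not the actual POEL implementation: all names and the scaling scheme are illustrative assumptions. It shows how a per‑object relevance score (as might be produced by language‑model reasoning over detected objects) could scale an intrinsic reward, biasing exploration toward purpose‑relevant objects while a baseline term preserves broad OEL exploration.

```python
def modulated_intrinsic_reward(base_reward, object_relevance, interacted_object,
                               baseline=0.1):
    """Scale an intrinsic reward by the purpose relevance of an object.

    base_reward       -- intrinsic reward from the OEL learner (e.g. novelty)
    object_relevance  -- dict mapping object id -> relevance in [0, 1],
                         e.g. derived from LLM reasoning over vision detections
    interacted_object -- id of the object the robot is currently engaging
    baseline          -- floor on the multiplier, so non-relevant objects
                         still receive some reward and OEL breadth is kept
    """
    relevance = object_relevance.get(interacted_object, 0.0)
    # Multiplier ranges from `baseline` (irrelevant) to 1.0 (fully relevant).
    return base_reward * (baseline + (1.0 - baseline) * relevance)
```

A fully relevant object passes the intrinsic reward through unchanged, while unknown or irrelevant objects are attenuated rather than zeroed out, which is one simple way to keep exploration open‑ended while steering it toward the user's purpose.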