In this talk, I will discuss my group's recent work on state-of-the-art natural language generation (NLG) and dialogue models that are multimodal, personality-based, and knowledge-rich. First, we will discuss dialogue models that generate responses that are not only history-relevant and fluent but also multimodal, e.g., grounded in dynamic video-based context. Next, we will present personality-based conversational agents, e.g., models that generate stylistic responses with varying levels of politeness and rudeness. Finally, we will describe several directions for making NLG models more knowledgeable: via adversarial robustness to user errors, via filling reasoning gaps in multi-hop generative QA with external commonsense knowledge, and via multi-task and reinforcement learning with novel auxiliary-skill tasks such as entailment and saliency generation.