This chapter explores the critical role of trust and safety in financial AI systems. As artificial intelligence becomes central to credit scoring, fraud detection, trading, and compliance, it brings both efficiency gains and new risks. The discussion shows how trust is built through reliability, transparency, fairness, accountability, and auditability, while safety requires robustness, operational boundaries, fail-safe mechanisms, monitoring, and resilience against attacks. Key risks, including bias, data drift, operational failures, adversarial manipulation, and regulatory noncompliance, are examined alongside real-world failures. A lifecycle approach to building trustworthy systems is outlined, covering data governance, model development, validation, deployment, and ongoing oversight. Regulatory frameworks and ethical practices are reviewed, and a credit-scoring case study demonstrates how fairness and explainability can be achieved in practice. The chapter concludes that responsible financial AI is both a moral obligation and a strategic advantage.
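The fairness goal mentioned for the credit-scoring case study can be made concrete with a standard group-fairness metric such as demographic parity. The sketch below uses toy approval decisions and illustrative group labels; it is an assumption-laden example, not the chapter's actual case-study implementation.

```python
def demographic_parity_difference(approvals, groups):
    """Absolute difference in approval rates between groups.

    approvals: list of 1 (approved) / 0 (denied) decisions.
    groups: parallel list of protected-group labels.
    A value near 0 indicates similar approval rates across groups.
    """
    rates = {}
    for g in set(groups):
        decisions = [a for a, gr in zip(approvals, groups) if gr == g]
        rates[g] = sum(decisions) / len(decisions)
    lo, hi = min(rates.values()), max(rates.values())
    return hi - lo

# Toy data (illustrative only): applicants from two groups, A and B.
approvals = [1, 1, 0, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A approval rate: 0.75; group B: 0.25 -> difference 0.5,
# a large gap that a fairness review would flag for investigation.
print(demographic_parity_difference(approvals, groups))  # 0.5
```

In practice, such a metric would be one of several checks in the validation stage of the lifecycle the chapter describes, computed on held-out data and monitored after deployment.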