Open-source ecosystems have become indispensable to the design and deployment of trustworthy artificial intelligence (AI) systems. Community-driven development offers transparency, rapid innovation, and broad participation, but it also raises new challenges in governance, security, and sustainability. This chapter examines how open-source practices can strengthen the trustworthiness of AI across three dimensions: security, scalability, and responsible use. It highlights governance models, quality assurance methods, and collaborative mechanisms that enable reproducible research, vulnerability management, and ethical adoption. Case studies of TensorFlow, PyTorch, Hugging Face, ONNX, Kubernetes, and the O-RAN Alliance illustrate these community practices in real-world settings. The scope is then extended to telecommunication-specific topics, including security vulnerabilities in Open RAN (O-RAN) architectures, threats to the RAN Intelligent Controller (RIC), and IPv6-related vulnerabilities. Together, these sections show that although openness enlarges the attack surface, it also enables stronger mitigations through community validation and shared testbed approaches. The chapter concludes by recommending future directions, including open-source testbeds for O-RAN and IPv6-enabled AI, sustainable funding models, and closer alignment with interoperability standards. By embedding governance, security, and ethical safeguards into community-driven ecosystems, open source emerges not only as a technical enabler but also as a strategic pathway for ensuring that AI systems are responsible, resilient, and aligned with societal and national infrastructure needs.