The adoption of Large Language Models (LLMs) in enterprises raises the risk of inadvertent leaks of intellectual property (IP) and confidential data to public artificial intelligence services. Organizations deploy security solutions such as data loss prevention (DLP), access restrictions, and monitoring systems to mitigate these risks. Source code, however, poses a unique challenge: benign, non-sensitive code and sensitive, business-critical code can be structurally similar while differing sharply in business logic, and current security systems cannot reliably distinguish between them. In this paper, we propose a novel solution, Source Code Guardrail (SCG), powered by an AI classification model that automatically categorizes source code as either sensitive (custom, production-grade, or confidential) or non-sensitive (dummy or generic). Our solution leverages the UniXcoder source code embedding model to convert code into a language-agnostic numerical vector representation that captures its semantic meaning, structure, and functionality. These embedding vectors serve as the input features of a dense-layer classification model. In this work, the classifier and UniXcoder are trained jointly as a merged network, so the code embeddings themselves are learned from the classification loss. The proposed model was trained on more than 8,000 manually annotated source code samples in multiple programming languages, drawn from multiple sources to ensure a diverse and representative dataset. It achieves 91.19% accuracy, 86.71% precision, and 90.41% recall in classifying code as sensitive or non-sensitive with respect to exposure to public AI services.
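The pipeline above (code embedding fed into a dense classification head, optimized on a binary classification loss) can be sketched in simplified form. This is not the paper's implementation: random vectors stand in for UniXcoder embeddings, only the dense head is trained, and the 768-dimensional embedding size, 128-unit hidden layer, and learning rate are assumptions for illustration.

```python
import numpy as np

# Simplified sketch of the dense classification head. The real system
# feeds fine-tuned UniXcoder embeddings; random vectors stand in here.
rng = np.random.default_rng(0)

EMB_DIM = 768   # assumed hidden size of the embedding model
HIDDEN = 128    # assumed width of the dense head

# Two-layer dense head: embedding -> hidden (ReLU) -> P(sensitive)
W1 = rng.normal(0, 0.02, (EMB_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.02, (HIDDEN, 1));       b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(emb):
    """emb: (batch, EMB_DIM) code embeddings -> (P(sensitive), hidden)."""
    h = np.maximum(0.0, emb @ W1 + b1)          # ReLU hidden layer
    return sigmoid(h @ W2 + b2).ravel(), h

def bce_loss(p, y):
    """Binary cross-entropy, the classification loss driving training."""
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# One SGD step on a toy batch (1 = sensitive, 0 = non-sensitive).
emb = rng.normal(size=(4, EMB_DIM))
y = np.array([1.0, 0.0, 1.0, 0.0])

p, h = forward(emb)
loss_before = bce_loss(p, y)

# Backprop through the head (gradients of mean BCE w.r.t. parameters).
grad_logits = (p - y)[:, None] / len(y)         # d loss / d pre-sigmoid
gW2 = h.T @ grad_logits
gb2 = grad_logits.sum(0)
grad_h = (grad_logits @ W2.T) * (h > 0)         # through the ReLU
gW1 = emb.T @ grad_h
gb1 = grad_h.sum(0)

lr = 0.1
W1 -= lr * gW1; b1 -= lr * gb1
W2 -= lr * gW2; b2 -= lr * gb2

loss_after = bce_loss(forward(emb)[0], y)
```

In the merged network described in the paper, the same classification loss would also be backpropagated into the embedding model's weights rather than stopping at the dense head.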
Combining this classification model with SIEM systems, API proxies, and DLP solutions enables organizations to enforce real-time source code filtering, ensuring that only non-sensitive code interacts with LLMs and preventing potential proprietary data leaks. By proposing a scalable, intelligent method for reducing source code exposure in AI-driven environments, our results contribute to the broader discussion of LLM security, enterprise AI governance, and automated data protection. We recommend that organizations developing proprietary software adopt SCG to prevent source code IP from leaking to public LLM systems.
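The real-time filtering enforcement can be illustrated with a minimal gating check an API proxy might run before forwarding a prompt. Here `classify_sensitivity` is a hypothetical keyword heuristic standing in for the trained SCG model, and the 0.5 threshold is an assumed policy parameter, not a value from the paper.

```python
from typing import Callable

def classify_sensitivity(code: str) -> float:
    """Toy stand-in for the SCG classifier: returns P(sensitive).
    The real system would score the code with the trained model."""
    markers = ("api_key", "internal", "proprietary", "secret")
    hits = sum(m in code.lower() for m in markers)
    return min(1.0, 0.4 * hits)

def guard_prompt(code: str, threshold: float = 0.5,
                 classifier: Callable[[str], float] = classify_sensitivity) -> bool:
    """Return True only if the code may be forwarded to a public LLM."""
    return classifier(code) < threshold

# A proxy would forward the first request and block the second.
generic_ok = guard_prompt("def add(a, b):\n    return a + b")
secret_ok = guard_prompt("SECRET = 'internal api_key rotation logic'")
```

A production deployment would place this check in the DLP or proxy layer, logging blocked requests to the SIEM rather than silently dropping them.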