Serverless computing offers a wide range of event-driven integrations with cloud services, straightforward development and deployment frameworks, and fine-grained cost control. Given these benefits, the growing adoption of serverless systems means that their performance must be measured and new techniques devised to maximize the potential of the software. Serverless runtimes have shown major performance and cost advantages for event-driven cloud applications. Although serverless runtimes are limited to applications with lightweight data and storage requirements, such as machine learning prediction and inference, those applications have outperformed their counterparts on other cloud runtimes. In this paper, we propose a machine learning model that parallelizes the jobs allocated to the event queue and the dispatcher of the serverless framework. We use the Gray Wolf Optimization (GWO) model to improve task allocation. Further, to optimize GWO, we use a Reinforcement Learning (RIL) approach that simultaneously tunes the parameters of GWO and improves the task allocation. Simulation studies show that the proposed GWO-RIL reduces runtimes and adapts to varying load conditions.
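As a rough illustration of the underlying idea (not the paper's implementation), the sketch below applies a minimal Gray Wolf Optimization loop to a task-allocation objective: each wolf's position encodes a candidate assignment of tasks to servers, and the objective is the makespan (load on the busiest server). The task durations, server count, and all function names here are hypothetical; the RIL-based parameter tuning described in the paper is not modeled.

```python
import random

def gwo_minimize(objective, dim, bounds, n_wolves=12, n_iters=60, seed=0):
    """Minimal Gray Wolf Optimization: wolves move toward the three best
    solutions (alpha, beta, delta) while the coefficient a decays 2 -> 0,
    shifting the pack from exploration to exploitation."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for it in range(n_iters):
        wolves.sort(key=objective)                  # best wolf first
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - it / n_iters)                # decaying coefficient
        for i in range(1, n_wolves):                # keep alpha (elitism)
            new_pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a              # attraction/repulsion
                    C = 2 * r2                      # leader emphasis
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                # average the three pulls, clamp to the search bounds
                new_pos.append(min(hi, max(lo, x / 3.0)))
            wolves[i] = new_pos
    wolves.sort(key=objective)
    return wolves[0]

# Hypothetical workload: 8 task durations to spread across 3 servers.
durations = [3, 1, 4, 1, 5, 9, 2, 6]
n_servers = 3

def makespan(pos):
    """Map each continuous coordinate to a server index; return the
    load of the most heavily loaded server (lower is better)."""
    loads = [0.0] * n_servers
    for dur, x in zip(durations, pos):
        loads[int(x) % n_servers] += dur
    return max(loads)

best = gwo_minimize(makespan, dim=len(durations),
                    bounds=(0.0, n_servers - 1e-9))
```

In this toy setting, `makespan(best)` approaches the balanced-load optimum as the pack converges; the paper's contribution is letting a reinforcement learning agent adjust GWO's parameters (such as the decay of `a`) online as load conditions change.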