Speech is a uniquely human characteristic that conveys one's emotional state to others. Speech emotion recognition (SER) identifies the speaker's emotion from the speech signal. SER now plays a vital role in real-time applications such as human–machine interfaces, lie detection, virtual reality, security, and audio mining. However, filtering noise content and extracting emotional features from speech is complex, and incorporating digital filters increases the cost and complexity of the system. Thus, a novel hybrid firefly-based recurrent neural speech recognition (FbRNSR) model was developed, with a preprocessing and feature analysis module, to classify human emotions from speech input. The features extracted by the feature extraction module are used to train the classifier, which labels the emotions as happy, sad, or average. Moreover, incorporating the firefly fitness function improves the classification rate. The presented model is implemented in Python, and its performance is analyzed using the confusion matrix. The designed model achieved a true positive rate of 99.34%, a true negative rate of 99.12%, a false positive rate of 99.21%, and a false negative rate of 99.07%. It also achieved 99.2% accuracy and 98.9% recall and precision on the speech signal dataset. Finally, the effectiveness and robustness of the proposed approach are demonstrated by comparison with existing techniques. Hence, this method is applicable in various sectors, such as medicine and security, to identify people's emotional states.
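The abstract reports confusion-matrix metrics (true/false positive and negative rates, accuracy, precision, recall). As a minimal sketch of how these rates are derived from classification counts (the counts below are illustrative placeholders, not the paper's data):

```python
# Hedged sketch: deriving the standard confusion-matrix rates mentioned
# in the abstract from raw classification counts. The example counts are
# made up for illustration; they are not taken from the FbRNSR results.

def confusion_metrics(tp, tn, fp, fn):
    """Return the standard rates computed from confusion-matrix counts."""
    return {
        "tpr":       tp / (tp + fn),               # true positive rate (= recall)
        "tnr":       tn / (tn + fp),               # true negative rate
        "fpr":       fp / (fp + tn),               # false positive rate
        "fnr":       fn / (fn + tp),               # false negative rate
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
    }

# Example with hypothetical counts:
m = confusion_metrics(tp=90, tn=85, fp=5, fn=10)
print(m["tpr"])       # 0.9
print(m["precision"]) # ~0.947
```

Note that by these definitions the false positive rate is the complement of the true negative rate (FPR = 1 − TNR), and likewise FNR = 1 − TPR.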