Introduction

The oil and gas industry is no longer finding oil in the friendly terrains it found it in a century ago, and, by implication, it is no longer drilling and completing wells the way it did in the past century (Sidle 2015). It is also safe to add that the outlook of the industry will not be the same in the next decade. The most probable reasons for this technological and methodical shift are: (1) declining reserves in conventional fields and (2) a global increase in demand for oil and gas products. The new environments where oil and gas are found are more often than not unfriendly, and the technologies used to probe the formations in these terrains are still being fine-tuned in order to get the best out of them. As a result, the oil and gas community is in constant search for ways to tackle the new challenges presented by these terrains, migrating away from old approaches and gravitating towards revolutionary technology that propels the drilling operation in the direction of greater improvements in performance, productivity and efficiency.

In the drilling industry, myriad challenges dot the drilling landscape, and the driller almost always grapples with them while drilling ahead. Maintaining optimal performance of the mud in downhole environments is one of these challenges. It is especially acute in unforgiving environments such as those plagued by high temperatures and high pressures (HTHP) as well as in deepwater. In general, HTHP wells are wells whose static bottom hole temperature ranges between 300 °F and 500 °F and whose expected shut-in pressure ranges between 10,000 psi and 25,000 psi (Conn and Roy 2004). A good understanding of the downhole environment and how it affects mud properties therefore gives the driller a better grip on wellbore pressure control (Ahmadi et al. 2018; Erge et al. 2016; Peng et al. 2016; Kutasov and Eppelbaum 2015).

A basic property of drilling muds which the drilling fluid engineer is required to keep within allowable limits is the mud density. The drilling mud provides the hydrostatic pressure, as a function of vertical depth, needed to counter the pore pressures that exist in each section of the formation to be drilled. It provides the much needed assurance that no kicks, continuous mud losses into fractures or wellbore instability events occur throughout the whole spectrum of the drilling operation (Aird 2019). Problems with maintaining this indispensable mud property are usually heightened by high downhole temperatures and pressures in the formation being drilled. While high downhole pressure increases the drilling fluid density, increased temperature results in density reduction (An et al. 2015; Hussein and Amin 2010; Babu 1996; McMordie Jr et al. 1982). If mud density is not kept at its optimum, especially in highly porous and permeable formations, the likely consequences are wellbore pressure management issues which ultimately give rise to non-productive time (Aird 2019). Hence, if drillers are to take charge of what happens downhole and make informed decisions regarding the safety of the rig and rig crew, they need accurate, measured and timely formation insights along every foot drilled.
Therefore, proper planning and execution of drilling operations, particularly for HTHP wells, require complete and accurate knowledge of the behaviour of the drilling fluid density as pressure and temperature change during the drilling operation (Ahad et al. 2019). Such information can be obtained accurately only through actual measurements of the drilling fluid density at the desired pressures and temperatures. This, however, requires special equipment, and the procedure takes long man hours. The practical alternative is therefore to build a high-fidelity model that accounts for the effects of downhole temperature and pressure on mud density. Numerous models abound in the literature; some of them estimate mud density as it varies with downhole pressure and temperature without considering the initial density of the mud. The contributions of previous researchers to predictive models in this area deserve a historical perspective, and a snapshot of these models is presented in Table 1. As tried and true as some of the models in Table 1 have proven to be, there is an on-going effort to retire them because of the large errors associated with their predictions. This challenge, however, also presents opportunities, one of which is the genuine stride made in the development of new technologies to tackle it. The most current, and by far the most pervasive, technology with crossover appeal across various industries is artificial intelligence. The reasons are not far-fetched: AI-based models offer numerous advantages. According to Bahiraei et al. (2019), AI-based models are able to learn from patterns and, once trained, can generalize and estimate at great speed; they are fault tolerant in the sense that they can handle noisy data; and they can find relationships among nonlinear parameters. A summary of the research efforts in using artificial intelligence techniques to model downhole mud density is presented in Table 2.

Table 1 Compositional and empirical models for predicting downhole mud density
Table 2 Summary of research on mud density prediction using artificial intelligence

Materials and methods

Database sources and range of input and output variables

The dataset used for developing the model in this study was obtained from the work of McMordie et al. (1982). It consists of 117 data points with three input parameters, namely downhole pressure, downhole temperature and initial mud density. The output parameter considered is the final (downhole) mud density. The minimum and maximum values of each parameter, as well as the units of measurement, are shown in Table 3.

Table 3 Process input and output parameters of mud density and their values

Table 4 shows the nature of collected data. It gives a statistical description of the input and output variables using statistical measures such as mean, standard deviation and range.

Table 4 Descriptive statistics of the input variables used in modelling downhole mud density

Overview of artificial neural network

An artificial neural network (ANN) is an artificial intelligence technique inspired by the neural networks found in the human nervous system. Simply put, an ANN is a set of interconnected simulated neurons, each receiving several input signals through weighted synaptic connections. An ANN model sums the products of the inputs and their corresponding connection weights (w) and then passes the result through a transfer or activation function to obtain the output of that layer, which is fed as input to the next layer. A bias term is added to the summation in order to raise or lower the input received by the activation function. The activation function performs the nonlinear transformation of the input, enabling the network to learn and perform more complex tasks. The general relationship between input and output in an ANN model can be expressed as shown in Eq. 1 (Fazeli et al. 2013).

$$y_{k} = f_{\text{o}}\left[\sum_{j} w_{kj}\, f_{\text{h}}\left(\sum_{i} w_{ji} x_{i} + b_{j}\right) + b_{k}\right]$$
(1)

where x is the input vector; \(w_{ji}\) denotes the connection weight from the ith neuron in the input layer to the jth neuron in the hidden layer; \(b_{j}\) represents the threshold value or bias of the jth hidden neuron; \(w_{kj}\) stands for the connection weight from the jth neuron in the hidden layer to the kth neuron in the output layer; \(b_{k}\) refers to the bias of the kth output neuron; and \(f_{\text{h}}\) and \(f_{\text{o}}\) are the activation functions of the hidden and output neurons, respectively. For brevity, this work refrains from presenting comprehensive details of the ANN methodology and instead refers the interested reader to the work by Ghaffari et al. (2006) and the articles by Jorjani et al. (2008) and Mekanik et al. (2013) for more elaborate treatments.
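As an illustration of Eq. 1, the following minimal MATLAB sketch computes the output of a single-hidden-layer network; the weights, biases and example inputs are hypothetical placeholders, not the values of the model developed in this work.

```matlab
% Minimal sketch of the forward pass in Eq. 1 (hypothetical values).
x  = [8000; 250; 12.5];            % example inputs: pressure, temperature, initial density
Wh = rand(5, 3);  bh = rand(5, 1); % placeholder input-to-hidden weights w_ji and biases b_j
Wo = rand(1, 5);  bo = 0.1;        % placeholder hidden-to-output weights w_kj and bias b_k

h = tansig(Wh*x + bh);             % hidden-layer output, f_h taken as tansig
y = purelin(Wo*h + bo);            % network output, f_o taken as purelin (linear)
```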

Implementation of the artificial neural network

In this paper, the Neural Network Toolbox of MATLAB R2015a was used to predict the downhole mud density of oil-based muds in HTHP wells. The settings chosen for the ANN model are presented in Table 5. In MATLAB, the dataset was partitioned into three sets: the training set (60%), the test set (20%) and the validation set (20%). The training data are used to adjust the weights of the neurons, the validation data are used to ensure the generalization of the network during the training stage, and the testing data are used to examine the network after it has been finalized. Training stops when a preset error index (such as the mean square error, MSE) is reached or when the number of epochs reaches its limit; for this study, the maximum number of epochs was set at 1000 (the default setting).

Table 5 Parameter settings for ANN model
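A minimal MATLAB sketch of this set-up is given below. The variable names and data layout are assumptions (X is a 3-by-N matrix of inputs and T a 1-by-N vector of measured downhole densities), and the hidden-layer size of five reflects the topology eventually selected later in the paper.

```matlab
% Sketch of the training set-up described above (Neural Network Toolbox).
net = feedforwardnet(5, 'trainlm');     % 3-5-1 network trained with Levenberg-Marquardt
net.divideParam.trainRatio = 0.60;      % 60% of the data for training
net.divideParam.valRatio   = 0.20;      % 20% for validation
net.divideParam.testRatio  = 0.20;      % 20% for testing
net.trainParam.epochs      = 1000;      % maximum number of epochs
[net, tr] = train(net, X, T);           % tr records the data division and error history
```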

Performance of the ANN model

The performance of the network architectures in terms of training, testing and validation efficacy is discussed in this section. Since prediction capability is the primary objective of a trained ANN, the performance of a particular ANN on the test data was used as the yardstick for selecting the best architecture. The number of neurons in the hidden layer influences the generalization ability of the ANN model. Hence, in order to determine the optimal architecture, a trial-and-error approach was used to select the optimum number of neurons in the hidden layer. To this end, a series of topologies were examined in which the number of neurons was varied from 1 to 20. The mean square error (MSE) was used as the error function, and the decision on the optimum topology was based on the minimum testing error. Each topology was run 25 times to avoid random correlation due to the random initialization of the weights. After repeated trials, it was found that a network with five neurons in the hidden layer produced the best performance, with a validation MSE of 8.4 × 10−4. The optimal architecture of the network is shown in Fig. 1.

Fig. 1 Optimal architecture of the back-propagation network
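The trial-and-error topology search described above can be sketched in MATLAB as follows, reusing the X and T matrices from the previous sketch; the loop bounds and selection criterion follow the text, but the implementation details are an assumption rather than the authors' original code.

```matlab
% Sketch of the topology search: 1-20 hidden neurons, 25 random restarts each,
% selection by minimum test-set MSE.
bestMSE = Inf;
for n = 1:20                               % candidate hidden-layer sizes
    for rep = 1:25                         % repeated random initializations
        net = feedforwardnet(n, 'trainlm');
        net.divideParam.trainRatio = 0.60;
        net.divideParam.valRatio   = 0.20;
        net.divideParam.testRatio  = 0.20;
        [net, tr] = train(net, X, T);
        mseTest = perform(net, T(tr.testInd), net(X(:, tr.testInd)));
        if mseTest < bestMSE               % keep the topology with the lowest test MSE
            bestMSE = mseTest;  bestNet = net;  bestN = n;
        end
    end
end
```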

For this ANN model, the training process terminated after 130 epochs for a 3-5-1 network architecture with a validation MSE of 8.4 × 10−4. The 3-5-1 architecture is therefore considered the best neural network for the present problem owing to its superior prediction capability. Figure 2 shows the scatter plots of the ANN-predicted downhole mud density of oil-based muds versus the actual experimental mud density for the training, testing and validation sets, respectively. The model fits the actual values closely for the training, testing and validation sets, as can be seen in their correlation coefficients (R) of 0.99998, 0.99995 and 0.99995, respectively.

Fig. 2 Scatter plots of the developed ANN model

The model generated by applying the Levenberg–Marquardt (LM) algorithm is given in Eq. 2.

$$\rho_{f} = \text{purelin}\left[\sum_{j = 1}^{5} \text{LW}_{1,j}\, \text{tansig}\left(\sum_{i = 1}^{3} \text{IW}_{j,i}\, x_{i} + b_{1,j}\right) + b_{2}\right]$$
(2)

Equation 2 represents the trained ANN model correlating the three input parameters with the final downhole mud density in MATLAB. Here, ‘purelin’ and ‘tansig’ are MATLAB activation functions which calculate a layer’s output from its net input. Purelin gives a linear relationship between the input and the output, with purelin(n) = n, whereas tansig is a hyperbolic tangent sigmoid transfer function and is mathematically equivalent to ‘tanh’. Tansig is faster than tanh in MATLAB simulations, which is why it is used in neural networks. The tansig relation is defined by Eq. 3.

$${\text{tansig}}(n) = \frac{2}{1 + \exp\left(-2n\right)} - 1$$
(3)

IW and LW are the weights of the connections from the input layer to the hidden layer and from the hidden layer to the output layer, respectively. In order to predict the downhole mud density using Eq. 2, the values in Table 6 are used. The value of \(x_{i}\) in Eq. 2 is the individual data point for each of the input variables, where x represents the input variables, namely downhole pressure, downhole temperature and initial mud density; j indexes the hidden neurons (five in this case); i indexes the input variables (three in this case); \(b_{1}\) is the bias vector of the hidden layer; and \(b_{2}\) is the bias of the output layer. Table 6 lists the weights and biases of the developed empirical correlation (Eq. 2) that can be used to predict the downhole mud density.

Table 6 Weights and biases for ANN model in Eq. 2

For example, when the downhole mud density is predicted from downhole pressure, downhole temperature and initial mud density, the weight \(\text{IW}_{j,i}\) is taken at i = 1 for downhole pressure, i = 2 for downhole temperature and i = 3 for initial mud density; correspondingly, \(x_{1}\) is the downhole pressure, \(x_{2}\) the downhole temperature and \(x_{3}\) the initial mud density. The inner term of Eq. 2 for the first hidden neuron is then calculated from Table 6 as \(\sum_{i = 1}^{3} \text{IW}_{1,i} x_{i} = \text{IW}_{1,1} x_{1} + \text{IW}_{1,2} x_{2} + \text{IW}_{1,3} x_{3}\), where the values of IW1,1, IW1,2 and IW1,3 are − 2.33494, 1.6557 and 3.604944, respectively. This is repeated for the five rows of the weight matrix (one per hidden neuron), using the corresponding values from the table. Figure 3 shows a comparison between the actual values and the model-predicted values obtained with the developed neural network model.
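The evaluation of Eq. 2 with the quantities in Table 6 can be sketched in MATLAB as follows. The matrices below are placeholders to be filled from the table (only the first row of IW, quoted above, is shown), and the example inputs are hypothetical; note that if input/output scaling (e.g. mapminmax) was applied during training, the same scaling must also be applied here.

```matlab
% Sketch of evaluating Eq. 2 from the weights and biases in Table 6.
IW = [-2.33494 1.6557 3.604944;    % first row of IW, as quoted above
      zeros(4, 3)];                % remaining four rows to be taken from Table 6
b1 = zeros(5, 1);                  % hidden-layer biases b1 from Table 6
LW = zeros(1, 5);                  % hidden-to-output weights LW from Table 6
b2 = 0;                            % output bias b2 from Table 6

x = [10000; 300; 14.0];            % hypothetical inputs in the units of Table 3
rho_f = purelin(LW * tansig(IW*x + b1) + b2);   % predicted downhole mud density
```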

Fig. 3 Comparison between model prediction and experimental data

From Fig. 3, the model output from the ANN shows a good match with the experimental data. However, in order to quantify how well the model’s predictions match the actual values, the following performance metrics are used to assess the model: R2, MSE, RMSE and SSE. These are summarized in Table 7.

Table 7 Summary of ANN model performance

The assessment in Table 7 is based on the testing values only. On this basis, the combination of low MSE, SSE and RMSE values with an R2 value close to 1 indicates that the model performs well.
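For reference, a minimal MATLAB sketch of these metrics is given below, assuming y holds the model predictions on the test set and t the corresponding measured densities.

```matlab
% Sketch of the performance metrics reported in Table 7.
e    = t - y;                          % prediction errors on the test set
SSE  = sum(e.^2);                      % sum of squared errors
MSE  = mean(e.^2);                     % mean square error
RMSE = sqrt(MSE);                      % root mean square error
R2   = 1 - SSE/sum((t - mean(t)).^2);  % coefficient of determination
```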

Relative importance of independent variables in the ANN model

Since every model is only an approximate representation of the system under study, and since the debate about the opacity of AI-based models lingers, it is always vital to learn about the hidden information in the data as extracted by the modelling technique used. The aim of sensitivity analysis is to vary the input variables of the model and assess the associated changes in the model output. This method is particularly useful for identifying weak points of a model (Lawson and Marion 2008). The sensitivity analysis therefore provides a way of explaining the degree of contribution of each input variable to the network. The contribution of each input variable to the prediction of the dependent variable is referred to as the relative importance of that variable. Many methods for calculating relative importance abound in the literature, including Garson’s algorithm, the connection weights algorithm, the use of partial derivatives and Lek’s profile method. For this study, the connection weights algorithm was chosen. The choice is predicated on the comparison of techniques for assessing input variable contributions in ANNs by Olden et al. (2004), whose work showed that the connection weights method was the least biased; this position was corroborated by Watts and Worner (2008). The connection weights algorithm proposed by Olden and Jackson (2002) calculates, for each input neuron, the sum of the products of the final weights of the connections from that input neuron to the hidden neurons with the weights of the connections from the hidden neurons to the output neuron. The connection weights from the input neurons to the hidden neurons are presented in columns 2–4 of Table 8, while the connection weights from the hidden neurons to the output neuron are presented in column 5 of Table 8. The relative importance of a given input variable is defined as shown in Eq. 4.

$${\text{RI}}_{x} = \sum_{y = 1}^{m} w_{xy} w_{yz}$$
(4)

where RIx is the relative importance of input variable x; \(\sum_{y = 1}^{m} w_{xy} w_{yz}\) is the sum of the products of the final weights of the connections from input neuron x to the hidden neurons with the weights of the connections from the hidden neurons to the output neuron; y indexes the hidden neurons; m is the total number of hidden neurons; and z denotes the output neuron. The sums of the products of the connection weights and the ranks of the input variables are presented in Table 9.

Table 8 Final connection weights
Table 9 Connection weights products, relative importance and rank of inputs
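Assuming the weights are stored as in the earlier sketches (IW as the 5-by-3 input-to-hidden matrix and LW as the 1-by-5 hidden-to-output vector, as in Table 8), the connection-weights calculation of Eq. 4 reduces to a single matrix product, sketched below.

```matlab
% Sketch of the Olden and Jackson (2002) connection-weights calculation (Eq. 4).
RI = (LW * IW)';                        % RI(x) = sum over hidden neurons of w_xy * w_yz
[~, order] = sort(abs(RI), 'descend');  % rank the inputs by absolute relative importance
```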

Figure 4 shows the relative importance of the various input parameters. It is to be noted that a large sensitivity to a parameter suggests that the system’s performance can change drastically with a small variation in that parameter, and vice versa. Following this reasoning, it is clear that the initial mud density has the highest impact on the downhole mud density, followed by the downhole temperature and then the downhole pressure. In Fig. 4, however, the relative importance associated with the downhole temperature has a negative value. It must be noted that in the connection weights algorithm the absolute values are used in determining the relative importance of the input variables, while the sign (positive or negative) indicates the direction in which each input affects the output. A positive sign indicates that increasing the input variable is likely to increase the output parameter, while a negative sign indicates that increasing the input variable is likely to decrease the output variable. In this case, Fig. 4 reveals that increases in initial mud density and downhole pressure lead to increased downhole mud density, whereas the negative sign for temperature indicates that increasing the downhole temperature leads to decreased downhole mud density. These findings are consistent with the literature.

Fig. 4 Relative importance of input variables in the ANN model

Comparison of ANN model’s performance with existing AI models

There are existing studies published in the recent past that focus on the prediction of the density of oil-based muds using artificial intelligence. Table 10 lists some of these prior studies and compares their results with those of the present study.

Table 10 Comparison of developed model with existing AI models

According to Table 10, the prediction accuracy differs significantly among the various studies, and the model developed in this work outperforms the other models. It is also less complex, judging by the number of neurons in the hidden layer compared with the other models. Considering the results in terms of the accuracy indicators, the ANN model developed in this study is found to be more appropriate for the prediction of downhole mud density owing to its low MAE and MAPE and high R2 compared with the other models.

Comparison of the generalization capacity of the developed model with existing models

The usefulness of any model, irrespective of the modelling technique used, rests on how well it can generalize. By generalization, a model should be able to predict in a consistent manner when new data are supplied to it (Kronberger 2010). Using a new, independent set of data is considered the “gold standard” for evaluating the generalization ability of models (Alexander et al. 2015). Hence, the most convincing way of testing a model is to use it to predict data which have no connection with the data used to estimate the model parameters. There is thus every good reason not to use the same data as were used in model development; otherwise one would erroneously think that the model gives better predictions than it is really capable of (Lawson and Marion 2008). In this way, we reduce to the barest minimum the possibility of obtaining a deceptively good match between the model predictions and the measured data. The oil-based mud density experimental results obtained from the work of Peters et al. (1990) were used as unseen data to test the oil-based mud density model. This dataset, consisting of 34 data points, was presented to the ANN model to predict the downhole mud density. Table 11 lists some of the existing models used for the prediction of the downhole density of drilling muds and their performance metrics when subjected to this dataset. Five statistical performance metrics (R2, MSE, MAE, MAPE and RMSE) were employed to assess the generalization capacity of the developed model as well as the existing models.

Table 11 Generalization capacity assessment of various models used for predicting downhole density of oil-based muds
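A minimal sketch of this generalization test follows, assuming the trained network object net from the earlier sketches and that the Peters et al. (1990) data have been loaded into a 3-by-34 input matrix Xnew and a target vector Tnew.

```matlab
% Sketch of applying the trained network to the unseen dataset and
% computing the metrics reported in Table 11.
Ynew = net(Xnew);                               % predictions for the 34 unseen points
e    = Tnew - Ynew;
MSE  = mean(e.^2);    RMSE = sqrt(MSE);
MAE  = mean(abs(e));  MAPE = 100*mean(abs(e./Tnew));
R2   = 1 - sum(e.^2)/sum((Tnew - mean(Tnew)).^2);
```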

Considering the results in Table 11 in the light of the accuracy indicators mentioned above, it is clear that the compositional model by Hoberock et al. (1982) predicted the unfamiliar data best, since it presents the lowest values of MSE, MAE, MAPE and RMSE together with a high value of R2; however, the ANN model comes closest to the Hoberock et al. model, as seen in the values of the accuracy indicators. It is also worth noting that the model by Kutasov (1988) trails the ANN model in terms of performance. In addition, and as made clearer in Fig. 5, the models by Politte (1985) and Sorelle et al. (1982) appear to have similar predictive capabilities, since their predictions overlap. The results presented in Fig. 5 indicate that the ANN model has learned the nonlinear relationship between the input variables and the downhole mud density impressively well.

Fig. 5 Performance of various models to unfamiliar data

Comparison of developed ANN model with an existing equation of state for liquid density prediction

An equation of state exists for estimating liquid density as a function of temperature and pressure. Furbish (1997) puts forward this equation of state as shown in Eq. 5:

$$\rho = \rho_{\text{o}}\left[1 - \alpha\left(T - T_{\text{o}}\right) + \beta\left(p - p_{\text{o}}\right)\right]$$
(5)

where \(\rho_{\text{o}}\) is the initial density of the liquid, and \(\alpha\) and \(\beta\) are the local isobaric coefficient of thermal expansion and the local isothermal compressibility, respectively. T, To, p and po are the final and reference temperatures and pressures of the liquid, respectively. It must be stated that the use of this equation of state may not require a high computational overhead, but it may not yield accurate predictions either, since the local isobaric coefficient of thermal expansion and the local isothermal compressibility are not constants but functions of temperature and pressure. In this work, the values of the isobaric coefficient (α) and the isothermal compressibility (β) were taken from the work of Zamora et al. (2000), who used 0.0002546 °F−1 for α and 2.823 × 10−6 psi−1 for β for oil-based mud. In order to assess the predictive capability of the equation of state for liquid density by Furbish (1997), it was subjected to the dataset from the work of Peters et al. (1990). The performance metrics for this model are shown in the sixth row of Table 11. In comparison with the ANN model, the EOS’s performance is broadly comparable, although the ANN model is slightly better. The performance metrics for the ANN model and the EOS, respectively, are: R2 (0.9997, 0.9993); MSE (0.0159, 0.027); MAE (0.1, 0.139) and MAPE (0.7, 0.96).
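A short MATLAB sketch of Eq. 5 with these coefficient values is given below; the reference state and downhole conditions are hypothetical, and the α and β units are assumed to be °F−1 and psi−1, respectively.

```matlab
% Sketch of the linearized equation of state (Eq. 5) for liquid density.
alpha = 2.546e-4;   % isobaric thermal expansion coefficient (1/degF), after Zamora et al. (2000)
beta  = 2.823e-6;   % isothermal compressibility (1/psi), after Zamora et al. (2000)
rho_o = 14.0;  T_o = 70;  p_o = 14.7;   % hypothetical reference density, temperature, pressure
T = 300;  p = 10000;                    % hypothetical downhole temperature and pressure
rho = rho_o*(1 - alpha*(T - T_o) + beta*(p - p_o));   % downhole density estimate
```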

Disparities between the developed ANN model and published ANN models

The following are the disparities between the ANN model developed in this study and the existing models.

  1.

    The possibility to replicate and reproduce the results from published research is one of the major challenges in model development using artificial intelligence. It is rarely possible to re-implement AI models based on the information in the published research, let alone rerun them, because the details of the model and the simulation code are either not presented in an understandable format or not made available. Beyond this, AI techniques such as the ANN used in this work are more often than not tagged black box models. This work’s novelty lies in the fact that it has been able to illuminate the box such that the weights and biases that can be used for replicating the model have been presented. A close look at the ANN models by Osman and Aggour (2003), Adesina et al. (2015) and Rahmati and Tartar (2019) indicates that the vital details which would make them replicable are not presented, hence limiting their application.

  2.

    The use of sensitivity analysis in this work has made the AI model developed here explainable, unlike the other ANN models in the literature.

  3.

    There are huge concerns regarding the ability of an AI model to generalize to situations that were not represented in the dataset used to train it. To the best of my knowledge, the models in the literature were not tested for their generalization ability on a new dataset; hence, it is difficult to ascertain how generalizable these models are in practice.

Design of a drilling process incorporating the developed ANN model for estimating downhole mud density

A synthetic drilling process, containing no confidential information, has been designed to mimic the complexity of an actual drilling process in the field. The approach followed in this design is based on replicating the life of a wellbore drilling process by including the necessary steps occurring during mud circulation. The downhole data acquisition method, its analysis and its transmission to the ANN model for computation of the downhole mud density are major parts of the design. The design is illustrated in Fig. 6 and explained below.

Fig. 6 Drilling process design with the ANN model incorporated

The conventional parts of the design, such as the power generation activities, the hoisting activities, the rotary activities and the well control system, are well known and are not discussed extensively here. Only the mud circulating system, where the ANN model is required, is discussed in detail. In this area, the major components of the design are the sensors for capturing the downhole pressure and temperature and the transmission of the readings to the ANN model.

  (1)

    The downhole sensors: The sensors for downhole temperature and pressure measurement would essentially be attached to the logging while drilling (LWD) tools. The sensors should be of the differential pressure type and should be placed in the LWD tool. The basic sensing element should be designed to detect differences in wellbore pressure and temperature as the depth of the well increases. The differential pressure transducer interrogates the readings and transmits them to the ANN model software installed on a computer. This enables the model to instantaneously process the readings, calculate the downhole density of the mud and transmit it to the surface panel at the driller’s console. This process would save rig time and reduce the time spent manually testing the surface density of the mud in the mud tank, which may not always represent the density of the mud downhole. It must be said, however, that the data from the sensors require some cleansing, filtering or analysis; hence, bio-inspired algorithms would be developed for this purpose.

  (2)

    The functionality of the developed ANN model in the design: Software would be developed based on the ANN model and installed on a computer to which the filtered data from the sensors would be passed. The constant in the model, which is the initial density of the mud, would be manually measured prior to the start of drilling and fed into the model. The downhole temperature and pressure at any given time and depth would be transmitted to the model and the downhole density calculated. Since the sensor data would be streamed and transmitted per unit time, the increase or decrease of the downhole mud density would be referenced to the initial mud density. The trend (either increase or decrease) would be shown graphically with respect to time and depth. If the density falls too low or rises too high, an alarm would be triggered indicating a low or high mud density and shown on the surface panel at the driller’s console. In this way, the downhole drilling fluid density, having taken into account the downhole temperature and pressure regimes in the wellbore, would be monitored. When the need arises, measures to adjust the mud density would be taken on the basis of this knowledge in order to assure safety and wellbore stability. A minimal sketch of this monitoring loop is given after this list.
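The monitoring loop described in item (2) can be sketched as follows; the sensor-reading, logging and alarm functions, the alarm thresholds and the control flag are hypothetical placeholders for whatever data-acquisition interface a particular rig provides.

```matlab
% Sketch of the real-time downhole density monitoring loop (hypothetical interfaces).
rho_init  = 14.0;                     % initial mud density, measured before drilling starts
lowLimit  = 13.0;  highLimit = 16.0;  % hypothetical low/high density alarm thresholds
while drillingInProgress              % assumed flag supplied by the rig control system
    [p, T, depth, t] = readDownholeSensors();   % filtered LWD pressure and temperature
    rho = net([p; T; rho_init]);                % downhole density from the ANN model
    logDensityTrend(t, depth, rho);             % trend displayed against time and depth
    if rho < lowLimit || rho > highLimit
        raiseAlarm(rho);                        % indication on the driller's console
    end
end
```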

Practical implication of findings

Despite the fact that the Hoberock et al. model performs creditably well, one of its major drawbacks is that it requires long man hours to determine the volume fractions of oil, water, solids, chemicals, etc., needed to perform the density computation. A complete compositional analysis of the mud is carried out using the mud retort test, a procedure users describe as rigorous, lengthy and time-consuming, with the test often taking more than an hour to complete (Salunda, online). With the ANN model, the man hours spent carrying out the compositional analysis can be reduced and reallocated to other high value-added tasks. Hence, the ANN model developed in this work is a valuable substitute for the Hoberock et al. model, especially when downhole mud density values are required in real time for critical decisions.

Conclusion

In this work, an artificial neural network model has been developed for the prediction of the downhole density of oil-based muds in wellbores. The objective of this work was to use a nature-inspired algorithm (ANN) to develop a robust and accurate model for the downhole density of oil-based muds that would be replicable and generalizable across new input datasets. The developed model in this work is robust and reliable due to its simplicity and accuracy for the application of interest. Beyond this, the model can be replicated unlike other AI models in the literature because the threshold weights and biases required for developing the model are provided. The prediction capability of the ANN model has been compared with the existing AI models as well as with other models for predicting OBM density. Based on the obtained results, the outputs of the developed ANN model are in good agreement with corresponding experimental data. In comparison with existing AI models, the developed ANN model gives more accurate estimations. Furthermore, the intelligent ANN model paves the way for rapid predictions of the downhole density of oil-based muds in HTHP wells unlike the time-consuming procedure associated with the Hoberock et al.’s model.

Recommendation

While the industry continuously generates enormous volumes of drilling data, and while we wait for oil prices to rise and costs to fall, it is imperative that we use this time to leverage the cost-saving and value-adding technology of AI to develop high-fidelity models for predicting other mud-related challenges that affect mud density, such as barite sagging in oil-based muds.