Introduction
Materials and methods
Study site
Available data
| Precipitation | Evapotranspiration | Spring discharge | Use |
|---|---|---|---|
| 1992-01-01–1995-12-31 | 1992-01-01–1995-12-31 | 1992-09-24–1995-03-28 | Calibration |
| 2014-01-01–2015-12-31 | 2014-01-01–2015-12-31 | 2014-01-01–2015-12-31 | Calibration |
| 2016-01-01–2016-12-31 | 2016-01-01–2016-12-31 | – | Testing |
Transfer function noise model
Modeling framework
Recharge representation
Response functions
Model structure
Solving the inverse problem
General methodology
The Pastas package and the examples in its documentation (Collenteur et al. 2020) suggest that simultaneously optimizing all model parameters from their initial values, or optimizing the model parameters together with the noise-model parameter, may have aggravating effects. Therefore, the least-squares calibration is divided into three consecutive steps. First, a run without a noise model is performed, with the parameters \(\omega _f = 0.1\), \(S_{r, \max } = 250.0 \; \text {mm}\), \(l_p = 0.25\), \(S_{i, \max } = 2.0 \; \text {mm}\), and \(k_v = 1.0\) fixed in both parallel recharge models. This is motivated by the substantial correlation between the parameters and the corresponding aggravating effects of non-uniqueness and equifinality (Beven 2006; Gupta et al. 2009). In the second step, the calibration is continued from the optimized parameter set of the first step, now varying all model parameters but still without a noise model. In the third and final step, the results from step 2 are used as starting values in a run with a noise model, in which the noise-model parameter is also optimized.

Least-squares optimization and noise modeling
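The three-step strategy can be illustrated with a toy least-squares problem. The exponential model, the choice of which parameter is held fixed, and the AR(1) transformation standing in for the noise model are all illustrative, not the actual Pastas calls or parameter names:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "observations" from a toy model q(t) = gain * exp(-decay * t) + offset.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
true = np.array([2.0, 0.5, 1.0])  # gain, decay, offset
obs = true[0] * np.exp(-true[1] * t) + true[2] + 0.05 * rng.standard_normal(t.size)

def residuals(p):
    gain, decay, offset = p
    return gain * np.exp(-decay * t) + offset - obs

# Step 1: no noise model; hold one correlated parameter (here: decay) fixed
# at its initial value and optimize the remaining parameters only.
fixed_decay = 0.3
step1 = least_squares(lambda q: residuals([q[0], fixed_decay, q[1]]),
                      x0=[1.0, 0.0])

# Step 2: still no noise model; free all parameters, starting from the
# step-1 optimum rather than from the initial values.
step2 = least_squares(residuals, x0=[step1.x[0], fixed_decay, step1.x[1]])

# Step 3: reuse the step-2 optimum and additionally optimize a noise-model
# parameter alpha; here an AR(1) filter whitening the residual series.
def noise_series(p):
    alpha = p[3]
    r = residuals(p[:3])
    return r[1:] - alpha * r[:-1]

step3 = least_squares(noise_series, x0=[*step2.x, 0.5])
```

Each step warm-starts the next, which is the point of the staged scheme: the strongly correlated parameters are only freed once the remaining parameters are already near an optimum.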
The least-squares implementation of SciPy (version 1.9.0; Virtanen et al. 2020) is used to minimize the sum of squared values of the respective residual or noise series.

Model fit metrics
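The NSE and KGE values reported below follow the standard definitions (Nash–Sutcliffe efficiency; Kling–Gupta efficiency after Gupta et al. 2009). A minimal numpy sketch with illustrative series (the \(\overline{VCC}\) metric from the comparison study is not reproduced here):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values below 0 mean
    the simulation performs worse than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (Gupta et al. 2009): Euclidean distance from
    the ideal point of correlation, variability ratio and bias ratio."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]     # linear correlation
    alpha = sim.std() / obs.std()       # variability ratio
    beta = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Illustrative discharge series, not the study's data.
q_obs = np.array([1.0, 2.0, 4.0, 3.0, 2.0, 1.5])
q_sim = np.array([1.1, 1.8, 3.9, 3.2, 2.1, 1.4])
```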
Bayesian inversion and uncertainty quantification
Pastas uses the emcee package (Foreman-Mackey et al. 2013) through the lmfit wrapper package (Newville et al. 2020). A parallelized affine-invariant ensemble sampler (AIES) algorithm (Goodman and Weare 2010) is used, which runs multiple chains simultaneously. A uniform prior \(p(\textbf{x})\) is assumed, with the parameter prior ranges given in Table 3. It is further assumed that the residuals are independent and follow a normal distribution, leading to the log-likelihood function \(\ln \mathcal {L}(\textbf{x}, \textbf{D}) = \ln p(\textbf{D} \mid \textbf{x})\).

Results
Least-squares calibration
| DMC | \(\overline{VCC} \; [-]\) | \(NSE \; [-]\) | \(NSE_{lin} \; [-]\) | \(KGE \; [-]\) |
|---|---|---|---|---|
| DMC1 (1993–1994) | 3.050 | 0.206 | −0.043 | 0.631 |
| DMC2 (2014–2015) | 1.388 | 0.543 | 0.445 | 0.736 |
| DMC3 (1994–1995 & 2014–2015) | 1.694 | 0.449 | 0.323 | 0.680 |
| MCMC (DMC2, 2014–2015) | 1.188 | 0.598 | 0.512 | 0.779 |
Bayesian model calibration and uncertainty quantification
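The posterior sampled by the AIES combines a flat prior over parameter ranges like those listed below with the i.i.d. Gaussian log-likelihood stated in the methods. A minimal numpy sketch; the two-parameter box, the toy exponential model, the data and \(\sigma\) are illustrative placeholders, not the study's values:

```python
import numpy as np

# Illustrative prior box for two parameters, e.g. a scale A and a shape a.
bounds = np.array([[0.0, 100.0], [1e-5, 5e3]])

def log_prior(x):
    # Uniform prior: constant inside the box, -inf outside. The
    # normalization constant cancels in the MCMC acceptance ratio.
    inside = np.all((x >= bounds[:, 0]) & (x <= bounds[:, 1]))
    return 0.0 if inside else -np.inf

def log_likelihood(x, t, d, sigma=0.1):
    # Independent, normally distributed residuals between the data d and a
    # toy response model A * exp(-t / a).
    model = x[0] * np.exp(-t / x[1])
    return -0.5 * np.sum(((d - model) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

def log_posterior(x, t, d):
    lp = log_prior(x)
    if not np.isfinite(lp):
        return -np.inf  # reject out-of-bounds proposals before evaluating the model
    return lp + log_likelihood(x, t, d)

# emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(t, d)) would
# then draw from this posterior with affine-invariant stretch moves.
```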
| Parameter | Description | Units | Prior range | MLE value | Standard error |
|---|---|---|---|---|---|
| \(A_{2, q, 1}\) | Scale (quick Dag.) | \([10^3 \, \text{m}^3\,\text{s}^{-1}]\) | [0.00, 100.00] | 33.2 | \(19.0 \; (57.2\%)\) |
| \(a_{2, q, 1}\) | Shape (quick Dag.) | [s] | \([10^{-5}, 5 \cdot 10^3]\) | 0.20 | \(0.11 \; (55.1\%)\) |
| \(b_{2, q, 1}\) | Shape (quick Dag.) | \([-]\) | \([10^{-5}, 5 \cdot 10^3]\) | 1.78 | \(0.59 \; (33.2\%)\) |
| \(c_{2, q, 1}\) | Shape (quick Dag.) | \([-]\) | \([10^{-5}, 5 \cdot 10^3]\) | 5.25 | \(3.67 \; (70.0\%)\) |
| \(A_{2, d, 1}\) | Scale (diff. Dag.) | \([10^3 \, \text{m}^3\,\text{s}^{-1}]\) | [0.00, 100.00] | 1.54 | \(0.27 \; (17.7\%)\) |
| \(a_{2, d, 1}\) | Shape (diff. Dag.) | [s] | \([10^{-5}, 5 \cdot 10^3]\) | 0.38 | \(0.25 \; (65.4\%)\) |
| \(b_{2, d, 1}\) | Shape (diff. Dag.) | \([-]\) | \([10^{-5}, 5 \cdot 10^3]\) | 0.96 | \(0.33 \; (34.5\%)\) |
| \(c_{2, d, 1}\) | Shape (diff. Dag.) | \([-]\) | \([10^{-5}, 5 \cdot 10^3]\) | 2.23 | \(1.74 \; (77.9\%)\) |
| \(S_{r, \max , 1}\) | RS height | [mm] | \([10^{-5}, 10^4]\) | 145.2 | \(36.0 \; (24.8\%)\) |
| \(l_{p, 1}\) | RS evap. lim. | \([-]\) | \([10^{-5}, 1.00]\) | 0.02 | \(0.01 \; (68.8\%)\) |
| \(K_{s, 1}\) | SHC | [mm h\(^{-1}\)] | \([1.00, 10^4]\) | 2686.6 | \(1462.5 \; (54.4\%)\) |
| \(\gamma _1\) | Flow exponent | \([-]\) | \([10^{-5}, 50.0]\) | 36.4 | \(12.4 \; (34.1\%)\) |
| \(S_{i, \max , 1}\) | IS height | [mm] | \([10^{-5}, 10.00]\) | 9.04 | \(1.53 \; (17.0\%)\) |
| \(k_{v, 1}\) | Evap. factor | \([-]\) | \([10^{-6}, 10^2]\) | 0.88 | \(0.07 \; (7.53\%)\) |
| \(\omega _{f, 1}\) | Recharge fraction | \([-]\) | [0.00, 1.00] | 0.016 | \(0.01 \; (62.5\%)\) |
| \(A_{1, q, 2}\) | Scale (quick exp.) | \([10^3 \, \text{m}^3\,\text{s}^{-1}]\) | \([10^{-5}, 2074.76]\) | 1.42 | \(6.55 \; (461.2\%)\) |
| \(a_{1, q, 2}\) | Shape (quick exp.) | [s] | \([0.01, 10^3]\) | 5.62 | \(7.56 \; (134.5\%)\) |
| \(A_{1, d, 2}\) | Scale (diff. exp.) | \([10^3 \, \text{m}^3\,\text{s}^{-1}]\) | \([10^{-5}, 1005.76]\) | 4.91 | \(0.91 \; (18.4\%)\) |
| \(a_{1, d, 2}\) | Shape (diff. exp.) | [s] | \([0.01, 10^3]\) | 69.8 | \(16.6 \; (23.8\%)\) |
| \(S_{r, \max , 2}\) | RS height | [mm] | \([10^{-5}, 5 \cdot 10^4]\) | 847.5 | \(280.5 \; (33.1\%)\) |
| \(l_{p, 2}\) | RS evap. lim. | \([-]\) | \([10^{-5}, 1.00]\) | 0.03 | \(0.06 \; (217.9\%)\) |
| \(K_{s, 2}\) | SHC | [mm h\(^{-1}\)] | \([1.00, 10^4]\) | 1032.1 | \(979.5 \; (94.9\%)\) |
| \(\gamma _2\) | Flow exponent | \([-]\) | \([10^{-5}, 50]\) | 3.67 | \(1.31 \; (35.6\%)\) |
| \(S_{i, \max , 2}\) | IS height | [mm] | \([10^{-5}, 10.00]\) | 2.43 | \(3.65 \; (150.5\%)\) |
| \(k_{v, 2}\) | Evap. factor | \([-]\) | \([10^{-6}, 10^2]\) | 3.75 | \(1.11 \; (29.6\%)\) |
| \(\omega _{f, 2}\) | Recharge fraction | \([-]\) | [0.00, 1.00] | 0.003 | \(0.01 \; (233.3\%)\) |
| \(Q_b\) | Base flow | \([10^3 \, \text{m}^3\,\text{s}^{-1}]\) | [0.00, 0.10] | 0.002 | \(0.01 \; (700.0\%)\) |
| \(\alpha\) | Noise decay | [h] | \([10^{-5}, 5 \cdot 10^3]\) | 30.6 | \(20.2 \; (66.1\%)\) |
Model performance for the testing period
| Study | Model | \(KGE \; [-]\) | \(\overline{VCC} \; [-]\) | \(NSE \; [-]\) | Effort | Score |
|---|---|---|---|---|---|---|
| This study | Pastas | 0.86 | 1.03 | 0.73 | 1 day | 0.90 |
| BRGM France | Gardenia | 0.83 | 0.85 | 0.83 | 1 day | 0.84 |
| Uni-Freiburg | Varkarst | 0.80 | 0.85 | 0.79 | 1 day | 0.82 |
| Study | \(\overline{VCC}\) (RL) | \(\overline{VCC}\) (FR) | \(\overline{VCC}\) (BF) | \(\overline{VCC}\) (UF) | \(NSE\) (RL) | \(NSE\) (FR) | \(NSE\) (BF) | \(NSE\) (UF) | \(KGE\) (RL) | \(KGE\) (FR) | \(KGE\) (BF) | \(KGE\) (UF) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| This study | 0.92 | 0.95 | 1.45 | 1.22 | 0.68 | 0.58 | 0.52 | 0.73 | 0.90 | 0.93 | 0.31 | 0.77 |
| Gardenia | 0.89 | 0.83 | 0.89 | 0.85 | 0.80 | 0.71 | 0.80 | 0.83 | 0.82 | 0.82 | 0.83 | 0.80 |
| Varkarst | 0.95 | 0.87 | 0.72 | 0.73 | 0.71 | 0.73 | −0.72 | 0.51 | 0.70 | 0.76 | 0.48 | 0.66 |
Discussion
Calibration and uncertainty quantification
Least-squares calibration
Comparison to other modelling approaches from the KMC
Bayesian inversion and uncertainty quantification
Limitations and transferability of TFN models to other systems
Conclusions
Supplementary information
A Jupyter Notebook containing the Python code necessary to run the models is available, together with the adapted version of Pastas that supports the two-component non-linear recharge model, at https://doi.org/10.5281/zenodo.7715000.