
Approximating Unknown Mappings: An Experimental Evaluation

Journal of Heuristics

Abstract

Several methodologies have been introduced in recent years with the aim of approximating unknown functions. In essence, these methodologies are general frameworks for representing non-linear mappings from several input variables to several output variables. Research into this problem occurs in applied mathematics (multivariate function approximation), statistics (nonparametric multiple regression), and computer science (neural networks). However, since these methodologies have been proposed in different fields, most previous papers treat them in isolation, ignoring contributions in the other areas. In this paper we consider five well-known approaches to function approximation: polynomial approximation, generalized additive models (Gam), local regression (Loess), multivariate adaptive regression splines (Mars), and artificial neural networks (Ann).
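All five families follow the same template: choose a flexible functional form and tune its parameters to samples of the unknown mapping. As a minimal, purely illustrative Python sketch of that template (the test function, polynomial degree, and sample sizes below are assumptions, not taken from the paper), here is the simplest of the five, polynomial approximation by least squares:

```python
import numpy as np

# Stand-in for an unknown mapping to approximate. This particular function
# is an illustrative assumption, not one of the paper's 15 test functions.
def f(x):
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=200)            # sampled input points
y_train = f(x_train) + rng.normal(0, 0.05, 200)   # noisy observations

# Polynomial approximation: least-squares fit of a degree-7 polynomial.
coeffs = np.polyfit(x_train, y_train, deg=7)
poly = np.poly1d(coeffs)

# Measure approximation error on a grid of unseen points.
x_test = np.linspace(-2, 2, 100)
mse = np.mean((poly(x_test) - f(x_test)) ** 2)
print(f"test MSE of polynomial approximation: {mse:.4f}")
```

The other four families (Gam, Loess, Mars, Ann) replace the polynomial with progressively more flexible forms, but the fit-then-evaluate workflow is the same.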

Neural networks can be viewed as models of real systems, built by tuning parameters known as weights. In training the net, the problem is to find the weights that optimize its performance, i.e., that minimize the error over the training set. Although the most popular method for Ann training is backpropagation, other optimization methods based on metaheuristics have recently been adapted to this problem, outperforming classical approaches. In this paper we propose a short-term memory tabu search method, coupled with path relinking and BFGS (a gradient-based local NLP solver), to provide high-quality solutions to this problem. Experiments with 15 functions from the literature show that a feed-forward neural network with one hidden layer, trained with our procedure, can compete with the best-known approximation methods. The experimental results also show the effectiveness of a new mechanism to avoid overfitting in neural network training.
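The abstract only summarizes the method, so the following is a rough Python sketch of the flavor of such a hybrid, not the authors' actual procedure: BFGS (via scipy.optimize.minimize) locally refines the weights of a one-hidden-layer tanh network, a short-term memory list plays a crude stand-in role for the tabu restrictions, and a held-out validation set guards against overfitting. The network shape, kick size, memory length, and iteration counts are all illustrative assumptions, and path relinking is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# One-hidden-layer feed-forward net with all weights packed into a vector w.
def unpack(w, n_in, n_hid):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def predict(w, X, n_hid):
    W1, b1, W2, b2 = unpack(w, X.shape[1], n_hid)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y, n_hid):
    return np.mean((predict(w, X, n_hid) - y) ** 2)

def tabu_search_bfgs(X_tr, y_tr, X_val, y_val, n_hid=5, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    dim = X_tr.shape[1] * n_hid + n_hid + n_hid + 1
    w = rng.normal(0, 0.5, dim)
    best_w, best_val = w.copy(), np.inf
    recent = []  # short-term memory of recently visited weight vectors
    for _ in range(iters):
        # Local phase: BFGS refinement of the current weights on training MSE.
        res = minimize(mse, w, args=(X_tr, y_tr, n_hid), method="BFGS")
        w = res.x
        # Overfitting guard: keep the weights with the best *validation* error,
        # in the spirit of the paper's mechanism to avoid overfitting.
        val = mse(w, X_val, y_val, n_hid)
        if val < best_val:
            best_w, best_val = w.copy(), val
        recent.append(w.copy())
        if len(recent) > 5:
            recent.pop(0)
        # Diversification: random kick, rejected (treated as tabu) if it lands
        # too close to a recently visited solution.
        for _ in range(20):
            cand = w + rng.normal(0, 0.3, dim)
            if all(np.linalg.norm(cand - r) > 0.5 for r in recent):
                w = cand
                break
    return best_w, best_val

if __name__ == "__main__":
    # Assumed data: noisy samples of an arbitrary two-input mapping.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, (300, 2))
    y = np.sin(3 * X[:, 0]) * np.cos(X[:, 1])
    w, val_err = tabu_search_bfgs(X[:200], y[:200], X[200:], y[200:])
    print(f"best validation MSE: {val_err:.4f}")
```

The split into a gradient-based local phase and a memory-guided diversification phase is the key design idea: BFGS exploits the smoothness of the error surface near a solution, while the tabu memory forces the search away from regions it has already explored.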



Author information

Correspondence to Rafael Martí.


Cite this article

Martí, R., Montes, F. & El-Fallahi, A. Approximating Unknown Mappings: An Experimental Evaluation. J Heuristics 11, 219–232 (2005). https://doi.org/10.1007/s10732-005-1502-y
