
A framework for optimization under limited information

Journal of Global Optimization

Abstract

In many real-world problems, optimisation decisions have to be made with limited information. The decision maker may have no a priori or a posteriori data about the often nonconvex objective function except at a limited number of data points. The scarcity of data may be due to the high cost of observations or the fast-changing nature of the underlying system. This paper presents a “black-box” optimisation framework that takes the information-collection, estimation, and optimisation aspects into account in a holistic and structured manner. The observations available at each optimisation step are explicitly quantified using the entropy measure from information theory, while the often nonconvex objective function to be optimised is modelled and estimated by adopting a Bayesian approach and using Gaussian processes as a state-of-the-art regression method. The resulting iterative scheme allows the decision maker to address the problem by expressing preferences for each aspect quantitatively and concurrently.
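To make the iterative scheme concrete, the sketch below shows one plausible instantiation in Python, not the paper's exact formulation. A Gaussian-process posterior is fit to the observed points; the differential entropy of the Gaussian prediction at a candidate point x, H(x) = ½ log(2πe σ²(x)), quantifies how informative sampling there would be; and the next observation trades off the estimated objective value against this entropy term. The RBF kernel, the weight `lambda_`, the candidate grid, and the test objective are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5, signal_var=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, Xs, noise_var=1e-4):
    """GP posterior mean and variance at candidate points Xs."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.maximum(var, 1e-12)

def objective(x):
    # Hypothetical expensive black-box function, unknown to the optimiser.
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=3)   # a few costly initial observations
y = objective(X)
grid = np.linspace(-1.0, 2.0, 200)   # candidate sampling locations

lambda_ = 0.3                        # assumed weight on information gain
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    # Differential entropy of the Gaussian prediction at each candidate:
    # H(x) = 0.5 * log(2*pi*e*var(x)); high entropy = informative sample.
    H = 0.5 * np.log(2 * np.pi * np.e * var)
    score = mu + lambda_ * H         # estimated value vs. information trade-off
    x_next = grid[np.argmax(score)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print("best observed:", X[np.argmax(y)], y.max())
```

In the framework described above, the decision maker's preferences over estimation quality, information collection, and optimisation progress are expressed quantitatively; here the single weight `lambda_` stands in for that trade-off.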



Author information

Correspondence to Tansu Alpcan.


About this article

Cite this article

Alpcan, T. A framework for optimization under limited information. J Glob Optim 55, 681–706 (2013). https://doi.org/10.1007/s10898-012-9942-z

