Computing occluding and transparent motions

Abstract

Computing the motions of several moving objects in image sequences involves simultaneous motion analysis and segmentation. This task can become complicated when image motion changes significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed. The problem becomes even more difficult in the case of transparent motions.

A method is presented for detecting and tracking occluding and transparent moving objects, which uses temporal integration without assuming motion constancy. Each new frame in the sequence is compared to a dynamic internal representation image of the tracked object. The internal representation image is constructed by temporally integrating frames after registration based on the motion computation. The temporal integration maintains sharpness of the tracked object, while blurring objects that have other motions. Comparing new frames to the internal representation image causes the motion analysis algorithm to continue tracking the same object in subsequent frames, and to improve the segmentation.
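To make the temporal-integration step concrete, the following is a minimal sketch, not the authors' implementation, of how the internal representation image might be updated once a registration warp for the tracked object is available. The function name, the use of OpenCV's warpAffine, and the averaging weight alpha are illustrative assumptions; the warp itself would come from the multiple-motion computation described above and is simply assumed given here.

    import cv2
    import numpy as np

    def update_internal_representation(internal, new_frame, warp, alpha=0.3):
        """Temporally integrate a newly registered frame into the internal
        representation image of the tracked object (hypothetical sketch)."""
        # `warp` is a 2x3 affine matrix registering the new frame to the
        # coordinate frame of the tracked object; it is assumed to come
        # from the motion computation, which is not shown here.
        h, w = internal.shape[:2]
        registered = cv2.warpAffine(new_frame.astype(np.float32), warp, (w, h))
        # Exponentially weighted temporal average: the tracked object stays
        # sharp because it is aligned across frames, while objects moving
        # with other motions are blurred out.
        return (1.0 - alpha) * internal.astype(np.float32) + alpha * registered

Each new frame would then be compared against this integrated image rather than against the previous frame, which is what keeps the motion analysis locked onto the same object without assuming motion constancy. The exponential average is only a stand-in for the temporal integration; a uniform average over all registered frames would serve the sketch equally well.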




Cite this article

Irani, M., Rousso, B. & Peleg, S. Computing occluding and transparent motions. Int J Comput Vision 12, 5–16 (1994). https://doi.org/10.1007/BF01420982
