Desktop environments provide a powerful user interface and have served as the
standard human-computer interaction paradigm for over 20 years. However, the rising demand for 3D applications dealing with complex datasets exceeds the capabilities of traditional interaction devices and two-dimensional displays. Such applications need more immersive and intuitive interfaces. To be accepted by users, technology-driven solutions that require inconvenient instrumentation, e.g., stereo glasses or tracked gloves, should be avoided. Autostereoscopic display environments equipped with tracking systems enable humans to experience virtual 3D environments more naturally, for instance via gestures, without having to wear cumbersome devices. However, such approaches are currently used only for specially designed or adapted applications. In this paper we introduce new 3D user interface concepts for such setups that require minimal instrumentation of the user and can be integrated easily into everyday working environments. We propose an interaction framework that supports simultaneous display of, and simultaneous interaction with, both monoscopic and stereoscopic content. We identify the challenges of combining mouse-, keyboard-, and gesture-based input paradigms in such an environment and introduce novel interaction strategies.