
2011 | Book

Focal-Plane Sensor-Processor Chips


About this Book

Focal-plane sensor-processor imager devices are sensor arrays and processor arrays embedded in each other on the same silicon chip. This close coupling enables ultra-fast processing even on tiny, low-power devices, because the slow and energetically expensive transfer of large amounts of sensory data is eliminated. The technology also makes it possible to build locally adaptive sensor arrays which, similarly to the human retina, can adapt to the wide dynamic range of illumination within a single scene. This book focuses on the implementation and application of state-of-the-art vision chips. It provides an overview of focal-plane chip technology, smart imagers and cellular wave computers, along with numerous examples of current vision chips, 3D sensor-processor arrays and their applications. Coverage includes not only the technology behind the devices, but also their near- and mid-term research trends.

Table of Contents

Frontmatter
Anatomy of the Focal-Plane Sensor-Processor Arrays
Abstract
This introductory chapter surveys the basic focal-plane sensor-processor array architectures. The typical sensor-processor arrangements are shown, the typical operators are listed in separate groups, and the processor structures are analyzed. The chapter serves as a compass for navigating among the different chip implementations, designs and applications presented in the rest of the book.
Ákos Zarándy
SCAMP-3: A Vision Chip with SIMD Current-Mode Analogue Processor Array
Abstract
In this chapter, the architecture, design and implementation of a vision chip with a general-purpose, programmable, pixel-parallel cellular processor array operating in single instruction multiple data (SIMD) mode are presented. The SIMD concurrent processor architecture is ideally suited to implementing low-level image processing algorithms. The datapath components (registers, I/O, arithmetic unit) of the array's processing elements are built using switched-current circuits. The combination of a straightforward SIMD programming model with digital microprocessor-like control and an analogue datapath produces an easy-to-use, flexible system with a high degree of programmability and an efficient, low-power, small-footprint circuit implementation. The SCAMP-3 chip integrates 128×128 pixel-processors and flexible read-out circuitry, while the control system is fully digital and currently implemented off-chip. The device implements low-level image processing algorithms on the focal plane, with a peak performance of more than 20 GOPS and power consumption below 240 mW.
Piotr Dudek
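The pixel-parallel SIMD processing described in the abstract can be pictured as every processing element executing the same neighbourhood instruction at once. The following NumPy sketch emulates that behaviour for a generic 3×3 low-level operation; it is purely illustrative and does not reflect the actual SCAMP-3 instruction set or analogue circuitry.

```python
import numpy as np

def simd_neighborhood_op(image, kernel):
    """Emulate a SIMD pixel-parallel step: every processing element
    applies the same 3x3 instruction to its own neighbourhood.
    (Illustrative only -- not the SCAMP-3 instruction set.)"""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")   # replicate borders
    out = np.zeros_like(image, dtype=float)
    for dy in range(3):                      # one "instruction" per tap,
        for dx in range(3):                  # executed by all PEs at once
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

# 3x3 box filter, a typical low-level image-processing operation
box = np.full((3, 3), 1.0 / 9.0)
frame = np.random.rand(128, 128)             # one frame on a 128x128 array
smoothed = simd_neighborhood_op(frame, box)
```

On the chip, each tap of such a kernel corresponds to one broadcast instruction executed simultaneously by all 128×128 processing elements, which is what makes the architecture efficient for this class of algorithm.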
MIPA4k: Mixed-Mode Cellular Processor Array
Abstract
This chapter describes MIPA4k, a 64×64-cell mixed-mode image processor array chip. Each cell includes an image sensor, A/D/A conversion, embedded digital and analog memories, and hardware-optimized grey-scale and binary processing cores. We describe the architecture of the processor cell, go through its functional blocks and explore its processing capabilities, which include programmable space-dependent neighbourhood connections, ranked-order filtering, rank identification and anisotropic resistive filtering. For example, an asynchronous analog morphological reconstruction operation can be performed with MIPA4k. The image sensor offers locally adaptive exposure time. The peripheral circuitry can also highlight windows of activation, and pattern matching can be performed on these regions of interest (ROI) with the aid of a parallel write operation to the active window. As the processing capabilities are complemented with global OR and global sum operations, MIPA4k is an effective tool for high-speed image analysis.
Mika Laiho, Jonne Poikonen, Ari Paasio
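The ranked-order filtering mentioned in the abstract selects, for each pixel, the neighbourhood element of a given rank. A minimal NumPy sketch of the operation (the chip performs it in mixed-mode hardware; this software analogue is for illustration only):

```python
import numpy as np

def rank_filter(image, rank):
    """Ranked-order filtering over 3x3 neighbourhoods: each output pixel
    is the element of its neighbourhood with the given rank
    (0 = minimum, 4 = median, 8 = maximum). Software sketch only."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    # gather the 9 neighbours of every pixel into one axis, then sort
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)], axis=-1)
    return np.sort(stack, axis=-1)[..., rank]

img = np.arange(16, dtype=float).reshape(4, 4)
med = rank_filter(img, 4)    # 3x3 median filter
```

Choosing rank 0 or 8 gives grey-scale erosion or dilation, which is why a single ranked-order primitive covers a useful family of morphological operations.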
ASPA: Asynchronous–Synchronous Focal-Plane Sensor-Processor Chip
Abstract
This chapter describes the architecture and implementation of a digital vision chip with asynchronous processing capabilities (asynchronous/synchronous processor array, or ASPA). The discussion focuses on design aspects of a cellular processor array with a compact digital processing cell suitable for parallel image processing. The presented vision chip is based on an array of processing cells, each incorporating a photo-sensor with a one-bit ADC and a simple digital processor, which consists of a 64-bit memory, an arithmetic and logic unit (ALU), a flag register and a communication unit. The chip has two modes of operation: a synchronous mode for local and nearest-neighbour operations and a continuous-time mode for global operations. The speed of global image processing operations is significantly increased by using asynchronous processing techniques. In addition, the peripheral circuitry enables asynchronous address extraction, fixed-pattern addressing and flexible random-access data I/O.
Alexey Lopich, Piotr Dudek
Focal-Plane Dynamic Texture Segmentation by Programmable Binning and Scale Extraction
Abstract
Dynamic textures are spatially repetitive, time-varying visual patterns that nevertheless exhibit some temporal stationarity within their constituent elements. In addition, their spatial and temporal extents are a priori unknown. This kind of pattern is very common in nature; therefore, dynamic texture segmentation is an important task for surveillance and monitoring. Conventional methods employ optic flow computation, which represents a heavy computational load. Here, we describe texture segmentation based on focal-plane scale-space generation. The programmable size of the subimages to be analysed and of the scales to be extracted encodes sufficient information from the texture signature to signal its presence. A prototype smart imager has been designed and fabricated in 0.35 μm CMOS, featuring very low-power scale-space representation of user-defined subimages.
Jorge Fernández-Berni, Ricardo Carmona-Galán
A Biomimetic Frame-Free Event-Driven Image Sensor
Abstract
Conventional image sensors acquire visual information time-quantized at a predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame was acquired. If future artificial vision systems are to succeed in demanding applications such as autonomous robot navigation, high-speed motor control and visual feedback loops, they must exploit the power of the biological, asynchronous, frame-free approach to vision and leave behind the unnatural limitation of frames: these vision systems must be driven and controlled by events happening within the scene in view, not by artificially created timing and control signals that have no relation whatsoever to the source of the visual information: the world. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer imposed externally on an array of pixels; instead, the decision making is transferred to each individual pixel, which handles its own information. The notion of a frame then disappears completely, replaced by a spatio-temporal volume of luminance-driven, asynchronous events. ATIS is the first optical sensor to combine several functionalities of the biological ‘where’ and ‘what’ systems of the human visual system. Following its biological role model, this sensor processes visual information in a massively parallel fashion using energy-efficient, asynchronous event-driven methods.
Christoph Posch
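The frame-free, event-driven idea can be emulated in software by emitting an event only when a pixel's log-intensity changes beyond a threshold. The sketch below is a toy, frame-sampled approximation of temporal-contrast event generation; the real ATIS pixel operates asynchronously in continuous time, and the function and threshold value here are illustrative assumptions.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Toy emulation of temporal-contrast event generation: a pixel emits
    an ON (+1) or OFF (-1) event whenever its log-intensity changes by
    more than `threshold` since that pixel's last event. The output is a
    list of (t, x, y, polarity) tuples -- no frames, only events."""
    ref = np.log(np.asarray(frames[0], dtype=float) + 1e-6)
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        logi = np.log(np.asarray(frame, dtype=float) + 1e-6)
        diff = logi - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = logi[y, x]   # reset this pixel's reference level
    return events

frames = [np.ones((2, 2)),
          np.array([[2.0, 1.0],
                    [1.0, 0.5]])]
events = frames_to_events(frames)    # one ON event, one OFF event
```

Note how unchanged pixels produce no output at all, which is the source of the bandwidth and power savings the abstract describes.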
A Focal Plane Processor for Continuous-Time 1-D Optical Correlation Applications
Abstract
This chapter describes a 1-D focal-plane processor designed to run continuous-time optical correlation applications. The chip contains 200 sensory processing elements, which acquire light patterns through a 2 mm × 10.9 μm photodiode. The photogenerated current is scaled at the pixel level by five independent 3-bit programmable-gain current scaling blocks. The correlation patterns are defined as five sets of two hundred 3-bit numbers (from 0 to 7), which are provided to the chip through a standard I2C interface. Correlation outputs are provided in current form through 8-bit programmable-gain amplifiers (PGA), whose configurations are also defined via I2C. The chip contains a mounting alignment aid, consisting of three rows of 100 conventional active pixel sensors (APS) inserted at the top, middle and bottom of the main photodiode array. The chip has been fabricated in a standard 0.35 μm CMOS technology, and its maximum power consumption is below 30 mW. Experimental results demonstrate that the chip can process interference patterns moving at an equivalent frequency of 500 kHz.
Gustavo Liñán-Cembrano, Luis Carranza, Betsaida Alexandre, Ángel Rodríguez-Vázquez, Pablo de la Fuente, Tomás Morlanes
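Numerically, the correlation described above amounts to five weighted sums of the 200 photogenerated currents, with 3-bit weights. A minimal NumPy sketch of that computation (the chip does this continuously in the current domain; the function name and random data are illustrative):

```python
import numpy as np

def correlate_1d(photocurrents, weight_sets):
    """Each weight set holds 200 3-bit coefficients (0..7); each output
    is the weighted sum of the 200 photocurrents, mirroring the
    per-pixel current scaling described above. Illustrative sketch."""
    weights = np.asarray(weight_sets)
    currents = np.asarray(photocurrents, dtype=float)
    assert weights.shape[1] == currents.shape[0]
    assert np.all((weights >= 0) & (weights <= 7)), "3-bit range"
    return weights @ currents                 # one output per pattern

currents = np.random.rand(200)                     # photogenerated currents
patterns = np.random.randint(0, 8, size=(5, 200))  # five correlation patterns
outputs = correlate_1d(currents, patterns)         # five correlation outputs
```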
VISCUBE: A Multi-Layer Vision Chip
Abstract
A vertically integrated focal-plane sensor-processor chip design, combining an image sensor with mixed-signal and digital processor arrays in a four-layer structure, is introduced. The mixed-signal processor array performs early image processing, while the digital processor array accomplishes foveal processing. The architecture supports multiscale, multi-fovea processing. The chip has been designed in the 0.15 μm feature-size 3DM2 SOI technology provided by MIT Lincoln Laboratory.
Ákos Zarándy, Csaba Rekeczky, Péter Földesy, Ricardo Carmona-Galán, Gustavo Liñán Cembrano, Gergely Soós, Ángel Rodríguez-Vázquez, Tamás Roska
The Nonlinear Memristive Grid
Abstract
Nonlinear resistive grids have been proposed previously for image smoothing that preserves discontinuities. The recent development of nonlinear memristors using nanotechnology has opened the possibility for expanding the capabilities of such circuit networks by replacing the nonlinear resistors with memristors. We demonstrate here that replacing the connections between nodes in a nonlinear resistive grid with memristors yields a network that performs a similar discontinuity-preserving image smoothing, but with significant functional advantages. In particular, edges extracted from the outputs of the nonlinear memristive grid more closely match the results of human segmentations.
Feijun Jiang, Bertram E. Shi
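The idea of discontinuity-preserving smoothing on a memristive grid can be sketched in one dimension: each link between neighbouring nodes acts like a memristor whose conductance collapses once the voltage across it exceeds a threshold, so large discontinuities stop diffusing while small fluctuations are smoothed away. The model below is a strongly simplified software analogue; the parameter names, switching rule and values are assumptions for illustration, not the chapter's circuit.

```python
import numpy as np

def memristive_smooth(signal, steps=50, dt=0.1, k=0.5):
    """Simplified 1-D sketch of smoothing on a memristive grid: links
    that experience a voltage above the threshold k lose conductance
    (memristive switching), preserving edges; elsewhere the grid acts
    as an ordinary diffusive smoother. Illustrative only."""
    x = np.asarray(signal, dtype=float).copy()
    g = np.ones(len(x) - 1)                       # per-link conductance state
    for _ in range(steps):
        grad = np.diff(x)                         # voltage across each link
        g = np.where(np.abs(grad) > k, 0.02 * g, g)   # conductance collapse
        flow = g * grad                           # current through each link
        x[1:-1] += dt * (flow[1:] - flow[:-1])    # Kirchhoff node update
    return x

step = np.concatenate([np.zeros(20), np.ones(20)])
noisy = step + 0.05 * np.array([(-1.0) ** i for i in range(40)])
out = memristive_smooth(noisy)   # noise smoothed, step edge preserved
```

The key property (shared with the nonlinear resistive grids the abstract refers to) is that the conductance depends on the link's own history of activity, so the network segments the signal without any global edge detector.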
Bionic Eyeglass: Personal Navigation System for Visually Impaired People
Abstract
The first self-contained experimental prototype of the Bionic Eyeglass is presented here: a device that helps blind and visually impaired people in basic tasks of everyday life by converting visual information into audio signals. The indoor and outdoor situations and tasks were selected by a technical committee of blind and visually impaired persons, considering their most important needs and the practical benefits that an audio guide can provide. The prototype system uses a cell phone as a front-end and an embedded cellular visual computer as the computing device. Typical events have been collected in the Blind Mobile Navigation Database to validate the algorithms developed.
Kristóf Karacs, Róbert Wagner, Tamás Roska
Implementation and Validation of a Looming Object Detector Model Derived from Mammalian Retinal Circuit
Abstract
The model of a recently identified mammalian retina circuit, responsible for detecting looming or approaching objects, is implemented on a mixed-signal focal-plane sensor-processor array. The free parameters of the implementation are characterized, and their effects on the model are analyzed. The implemented model is calibrated with real stimuli of known kinetic and geometrical properties. As the calibration shows, the identified retina channel is responsible for last-minute detection of approaching objects.
Ákos Zarándy, Tamás Fülöp
Real-Time Control of Laser Beam Welding Processes: Reality
Abstract
Cellular neural networks (CNN) are increasingly attractive for closed-loop control systems based on image processing because they combine high computational power with short feedback times. This combination enables new applications that are not feasible with conventional image processing systems. Laser beam welding (LBW), which is widely adopted in industry, is an example of such a process. Monitoring systems using conventional cameras are quite common in LBW, but they perform statistical post-process evaluation of certain image features for quality control purposes. Earlier attempts to build closed-loop control systems failed due to a lack of computational power. In order to increase control rates and reduce false detections through more robust evaluation of the image features, strategies based on CNN operations have been implemented on a cellular architecture called Q-Eye. These enable the first robust closed-loop control system, which adapts the laser power by observing the full penetration hole (FPH) in the melt. In this chapter, the algorithms adopted for FPH detection in process images are described and compared. Furthermore, experimental results obtained in real-time applications are also discussed.
Leonardo Nicolosi, Andreas Blug, Felix Abt, Ronald Tetzlaff, Heinrich Höfler, Daniel Carl
Real-Time Multi-Finger Tracking in 3D for a Mouseless Desktop
Abstract
In this chapter, we present a real-time 3D finger tracking system implemented on smart camera computers enabled by focal-plane chip technology. The system takes visual input from two cameras, executes model-based tracking and finally computes 3D coordinates from the 2D projections. A reduced algorithm is also analysed, with its pros and cons emphasised. Measurement, robustness and auto-calibration issues are also discussed.
Norbert Bérci, Péter Szolgay
Backmatter
Metadata
Title
Focal-Plane Sensor-Processor Chips
edited by
Ákos Zarándy
Copyright year
2011
Publisher
Springer New York
Electronic ISBN
978-1-4419-6475-5
Print ISBN
978-1-4419-6474-8
DOI
https://doi.org/10.1007/978-1-4419-6475-5
