
Sunday, May 12, 2013

Digital image processing


ABSTRACT
          Digital image processing is not a new phenomenon: techniques for the manipulation, correction and enhancement of digital images have been in practical use for over 30 years, and the underlying theoretical ideas have been around far longer. The term image processing refers to the manipulation and analysis of two-dimensional pictures; digital image processing is the processing of such pictures by a digital computer. A digital image is an array of numbers, each represented by a finite number of bits. An image is first digitized and stored as a two-dimensional matrix of binary digits in computer memory; the digitized image can then be processed and/or displayed on a high-resolution monitor. Image processing is used to control image sharpness, noise and colour reproduction, to maximize the information content of images, and to compress image data for economical storage and rapid transmission. Algorithms based on statistical methods and artificial intelligence can perform operations such as automatic colour balancing, object and text recognition, and image enhancement and manipulation. Interest in digital image processing stems from two principal application areas: the improvement of pictorial information for human interpretation, and the processing of image data for storage, transmission and representation for autonomous machine perception.
INTRODUCTION
Digital image processing is not a new phenomenon: techniques for the manipulation, correction and enhancement of digital images have been in practical use for over 30 years, and the underlying theoretical ideas have been around far longer. The term image processing refers to the manipulation and analysis of two-dimensional pictures. Digital image processing is the processing of two-dimensional pictures by a digital computer. A digital image is an array of numbers, each represented by a finite number of bits. An image is first digitized and stored as a two-dimensional matrix of binary digits in computer memory; this digitized image can then be processed and/or displayed on a high-resolution monitor. Image processing performs image-to-image transformations rather than building explicit descriptions of scene content. It is used to control image sharpness, noise and colour reproduction, to maximize the information content of images, and to compress data for economical storage and rapid transmission. Image processing algorithms based on statistical methods and artificial intelligence may be used to perform operations such as automatic colour balancing, object and text recognition, and image enhancement and manipulation.
Interest in digital image processing stems from two principal application areas:
1) Improvement of pictorial information for human interpretation
2) Processing of image data for storage, transmission and representation for autonomous machine perception.
One of the first applications of image processing techniques in the first category was improving digitized newspaper pictures sent by submarine cable between London and New York. The introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end. Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of brightness levels. During this period, the introduction of a system that developed a printing plate via light beams modulated by the coded picture tape improved the reproduction process considerably. From that time until now, methods for processing transmitted digital pictures have continued to improve.
 IMAGE UNDERSTANDING
Image understanding involves the most basic knowledge of images. It is the science of automatically understanding, predicting and creating images from the perspective of image sources. Image-source characteristics include illuminant spectral properties, object geometric properties, object reflectances and surface characteristics, as well as numerous other factors such as ambient lighting conditions. The essential technologies of the science include image component modeling, image creation and data visualization. Scientists at Kodak work to develop automated ways to detect various characteristics of objects in a picture (indoor/outdoor, people, faces, trees, buildings, etc.) that may be useful in future applications such as database management or the automatic creation of photo albums. Say you are writing an e-mail note to family or friends and mention your recent vacation trip: several pictures from this trip are inserted in the note automatically. Or you are creating the annual Christmas letter, and pictures related to the events described during the previous year become automatically available for insertion in the document. Making this happen requires a number of technologies to identify what is in each picture, the location of the scene, the day it was taken, who is in the picture, and so on. Kodak laboratories are working to deliver such technologies as:
1) Image segmentation: the ability to automatically identify meaningful regions in an image
2) Face detection and feature finding
3) Image similarity: identifying scenes that are similar in location
4) Main-subject detection: identifying a picture's main subject
5) Scene categorization: determining what type of scene it is

Image representation basics
1) Continuous versus discrete data: an image is a discrete function of two variables, represented by a 2D array of brightness values f(x, y), one for each pixel (x, y), also called a picture element, pel or image element.
2) Raster scan: the rectangular grid scanning pattern is known as a raster.
3) Digitizers: convert an image into a numerical representation.
4) Image sampling: digitization of the spatial coordinates, e.g., a 512 x 512 image.
5) Gray-level quantization: amplitude digitization, e.g., 256 gray levels (sampling and quantization are illustrated in the sketch below).
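To make sampling and quantization concrete, here is a minimal Python sketch (our own illustration; the helper name and the use of NumPy are assumptions, and only the 512 x 512 size and 256 gray levels come from the list above). It digitizes a continuous brightness function f(x, y) into an 8-bit digital image, i.e. a 2D array of numbers:

# Minimal sketch (not from the original text) of image sampling and
# gray-level quantization: a continuous brightness function f(x, y) is
# sampled on a rectangular grid and quantized to a finite number of levels.
import numpy as np

def sample_and_quantize(f, width=512, height=512, levels=256):
    """Sample a continuous function f(x, y) on a width x height grid and
    quantize its values (assumed to lie in [0, 1]) to `levels` gray levels."""
    ys, xs = np.mgrid[0:height, 0:width]        # spatial sampling grid
    continuous = f(xs / width, ys / height)     # brightness values in [0, 1]
    quantized = np.clip(np.round(continuous * (levels - 1)), 0, levels - 1)
    return quantized.astype(np.uint8)           # 8-bit digital image

# Example: a smooth horizontal brightness ramp becomes a 512 x 512,
# 8-bit digital image -- one number per pixel.
image = sample_and_quantize(lambda x, y: x)
print(image.shape, image.dtype)                 # (512, 512) uint8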
ELEMENTS OF IMAGE PROCESSING SYSTEMS
A. Image Acquisition
Two elements are required to acquire digital images
1. SENSORS
A physical device that is sensitive to a band of the electromagnetic energy spectrum, such as the x-ray, ultraviolet, visible or infrared bands, and that produces an electrical output signal proportional to the level of energy sensed.
2. Digitizer
A physical device for converting the electrical output of the sensing device into digital form.
          i.            Scanner
A scanner is, briefly, a very skinny CCD chip that is physically dragged across the document to be scanned. As the document is scanned, light is bounced off the document and reflected back to the CCD chip, which records the intensity; the recorded values are then sent to the computer.
        ii.            Video cameras:
Conventional analogue video cameras are connected to a PC with a frame grabber, which converts the analogue video signal to digital images. The size of these images is typically 768 × 572 pixels, which corresponds to 0.44 MB per band. These cameras are relatively cheap and well suited for real-time applications, which is why they are used for industrial and medical purposes. On the other hand, both their sensor size and their resolution are limited. Truly digital video cameras are currently gaining importance.
      iii.            Amateur cameras with CCD sensors:
CCD sensors can be mounted in the image planes of conventional photographic cameras. In addition, such cameras need a device for data storage, e.g. a PCMCIA drive, flash card, etc. They can then be used just like analogue cameras, the advantage being that the images can be checked on a laptop PC immediately after they have been taken, and bad photographs can be replaced by better ones. The sensor size varies considerably between different models: a typical one-chip sensor may have about 2000 × 3000 pixels, which corresponds to 6 MB per grey-scale image or 18 MB for a true-colour image. The format of these sensors is about 2.4 × 1.6 cm²; thus it is still about 33% smaller than a common small-format analogue photograph. These cameras can be used for architectural applications and basically for everything that can be photographed, because their handling is very flexible. However, in order to achieve an economic operating cycle, camera objectives with small focal lengths have to be used, which enlarge the aperture angle but bring about geometrical problems due to distortions. The latest achievement is a digital aerial camera consisting of four CCD chips delivering four perspective images, which can be resampled into one quasi-perspective digital image.

      iv.            Analogue metric cameras:
Photographs taken by metric cameras correspond with high accuracy to central perspective images. These cameras deliver analogue images which have to be scanned off-line. They are used for high-precision applications or if the format of the CCD sensors is too small for an economic operating cycle, which is especially true for, e.g., mapping purposes. Even the digital aerial camera cited above is not yet operational, and for high-precision applications the resampling process required for combining its four images is not appropriate. Scanning off-line turns out to be a very time-consuming process, especially for aerial images: their format is usually 23 × 23 cm², and due to the high demands on accuracy they have to be scanned at high resolution, yielding an enormous amount of data.
        v.            CMOS Image Sensors
The new Complementary Metal Oxide Semiconductor (CMOS) based replacement for 25-year-old CCD technology is called CMOS image sensor technology. It is an integrated-circuit technology for realizing electronic “film”. Unlike CCD technology, which relies on specialized processes, CMOS image sensors use mainstream microelectronics fabrication processes to produce the sensor chips. It is likely that CMOS image sensors will replace CCDs within the next few years, offering significant advantages over CCDs in cost, performance, power consumption and system size.
      vi.            Cameras
A CCD camera is used to replace conventional cameras that use photographic film. The main difference is that, instead of a piece of film in the focal plane, there is a CCD chip. The intensities are recorded as charges (electrons) in an array of photosites, converted to numbers, and stored in the computer for later processing.
    vii.            X-ray imaging system:
The output of an x-ray source is directed at the object, and a medium sensitive to x-rays is placed on the other side of the object. The medium thus acquires an image of materials (such as bones and tissues) having varying degrees of x-ray absorption.
  viii.            Microdensitometers:
In this sensor, the image to be digitized is in the form of a transparency (such as a film negative) or a photograph. The transparency or photograph is mounted on a flat bed or wrapped around a drum. Scanning is accomplished by focusing a beam of light on the image and translating the bed or rotating the drum relative to the beam. In the case of transparencies the beam passes through the transparency; with photographs it is reflected from the surface of the image. In both cases the beam is focused on a photodetector, and the gray level at each point is recorded from the intensity of the beam.
      ix.            Vidicon camera:
The operation of a vidicon camera is based on the principle of photoconductivity. An image focused on the tube surface produces a pattern of varying conductivity that matches the distribution of brightness in the optical image. An independent electron beam scans the rear surface of the target and produces a signal proportional to the input brightness pattern. A digital image is obtained by quantizing this signal.
        x.            Solid state arrays:
Solid-state arrays are composed of discrete silicon imaging elements called photosites, which produce a voltage output proportional to the intensity of the incident light. They are of two types:
a.      Line scan sensors-
           A line scan sensor consists of a row of photosites and produces a two-dimensional image by relative motion between the scene and the detector.
b.      Area sensors-
           An area sensor is composed of a matrix of photosites and is therefore capable of capturing an image in the same manner as a vidicon tube. These are used in applications that require freezing motion.
B. Storage      
An 8-bit image of size 1024 x 1024 pixels requires about one megabyte (1024 x 1024 bytes) of storage. Thus, providing adequate storage is usually a challenge in the design of image processing systems.
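As a quick check of that figure, the following minimal sketch (our own helper function, not from the original text) computes the storage requirement simply as rows x columns x bits per pixel:

# Minimal sketch: storage needed for an uncompressed digital image.
def image_bytes(rows, cols, bits_per_pixel):
    """Storage in bytes for an uncompressed rows x cols image."""
    return rows * cols * bits_per_pixel // 8

print(image_bytes(1024, 1024, 8))   # 1,048,576 bytes, i.e. about one megabyte
print(image_bytes(512, 512, 8))     # 262,144 bytes for the 512 x 512 image used later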
           Digital storage for image processing applications falls into three principal categories:
1. Short-term storage, for use during processing.
2. On-line storage, for relatively fast recall.
3. Archival storage, for massive storage.
           One method of providing short-term storage is computer memory. Another is frame buffers, which store one or more images and can be accessed rapidly, usually at video rates; they also support operations such as zoom, pan and scroll. On-line storage generally takes the form of magnetic disks; a more recent technology, magneto-optical (MO) storage, achieves close to a gigabyte of storage on a 5¼-inch disk. Archival storage is characterized by massive storage requirements but infrequent need for access. Magnetic tapes and optical disks are the usual media for archival applications.
C. Processing
          Processing of digital images involves procedures that are usually expressed in algorithmic form. Thus, with the exception of image acquisition and display, most image processing functions can be implemented in software. The only reason for specialized image processing hardware is the need for speed in some applications or the need to overcome some fundamental computer limitation. In particular, the principal imaging hardware added to these computers consists of a digitizer and frame-buffer combination for image digitization and temporary storage, a so-called arithmetic/logic unit (ALU) processor for performing arithmetic and logic operations at frame rates, and one or more frame buffers for fast access to image data during processing.
          A significant amount of basic image processing software can now be obtained commercially. When combined with other software for applications such as spreadsheets and graphics, it provides an excellent starting point for the solution of specific image processing problems. Sophisticated display devices and software for word processing and report generation facilitate the presentation of results.
D. Communication
            Communication in digital image processing primarily involves local communication between image processing systems and remote communication from one point to another, typically in connection with the transmission of image data. Hardware and software for local communication are readily available for most computers, and most computer networks use standard communication protocols.
            Communication across vast distances presents a more serious challenge if the intent is to communicate image data rather than abstracted results, because images contain a significant amount of data. A voice-grade telephone line can transmit at a maximum rate of about 9,600 bits per second, so transmitting a 512 x 512, 8-bit image at this rate would require nearly five minutes. Wireless links using intermediate stations, such as satellites, are much faster, but they also cost considerably more. The point is that transmitting image data over long distances is far from trivial, and data compression and decompression techniques play a central role in addressing this problem.
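As a rough check of that figure, here is a minimal sketch (our own illustration, not from the original text; the serial-line overhead of one start and one stop bit per byte is an assumption) that computes the transmission time as the number of bits on the wire divided by the line rate:

# Minimal sketch: time to send an uncompressed image over a slow serial line.
def transmission_seconds(rows, cols, bits_per_pixel, bits_per_second,
                         bits_per_byte_on_wire=10):
    """Seconds to transmit an uncompressed image.
    bits_per_byte_on_wire=10 assumes one start and one stop bit per 8-bit byte,
    as on a classic asynchronous serial line (an assumption, not from the text)."""
    payload_bytes = rows * cols * bits_per_pixel // 8
    return payload_bytes * bits_per_byte_on_wire / bits_per_second

t = transmission_seconds(512, 512, 8, 9600)
print(f"{t:.0f} s, about {t / 60:.1f} minutes")   # ~273 s, about 4.6 minutes

With this overhead the result is consistent with the "nearly five minutes" quoted above; counting only the raw pixel bits would give roughly 3.6 minutes.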
E. Display
The displays used for image processing--particularly the display systems used with computers--have a number of characteristics that help determine the quality of the final image.
Interlacing
To prevent the appearance of visual flicker at refresh rates below 60 images/s, the display can be interlaced. The standard interlace for video systems is 2:1: the odd-numbered lines are drawn in one field and the even-numbered lines in the next. Since interlacing is not necessary at refresh rates above 60 images/s, an interlace of 1:1 is used with such systems; in other words, lines are drawn in ordinary sequential fashion: 1, 2, 3, 4, ..., N.
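A tiny sketch of the difference (our own illustration, not from the original text): generating the order in which lines are drawn under 1:1 (progressive) and 2:1 (interlaced) scanning.

# Minimal sketch: line-drawing order for progressive (1:1) vs 2:1 interlaced scanning.
def progressive_order(num_lines):
    """1:1 'interlace': lines drawn sequentially 1, 2, 3, ..., N."""
    return list(range(1, num_lines + 1))

def interlaced_order(num_lines):
    """2:1 interlace: odd-numbered lines in the first field, even-numbered in the second."""
    odd_field = list(range(1, num_lines + 1, 2))
    even_field = list(range(2, num_lines + 1, 2))
    return odd_field + even_field

print(progressive_order(8))   # [1, 2, 3, 4, 5, 6, 7, 8]
print(interlaced_order(8))    # [1, 3, 5, 7, 2, 4, 6, 8]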
Refresh Rate
            The refresh rate is defined as the number of complete images that are written to the screen per second. For standard video the refresh rate is fixed at either 29.97 or 25 images/s; for computer displays it can vary, with common values being 67 images/s and 75 images/s. At values above 60 images/s visual flicker is negligible at virtually all illumination levels.
            Monochrome or colour TV monitors are the principal display devices used in modern image processing systems. Monitors are driven by the outputs of a hardware image display module in the backplane of the host computer, or as part of the hardware associated with an image processor. The signals at the output of the display module can also be fed into an image recording device that produces a hard copy (slides, photographs or transparencies) of the image being viewed on the monitor screen. Other display media include random-access cathode ray tubes (CRTs) and printing devices. Printing display devices are useful primarily for low-resolution image processing work; common means of recording an image directly on paper include laser printers, heat-sensitive paper devices and ink-spray systems.
FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
The fundamental steps in digital image processing are:
1) Image representation and modeling
2) Image enhancement
3) Image restoration
4) Image analysis and computer vision
5) Image reconstruction
6) Image data compression

IMAGE RESTORATION
[Figure: block diagram of a digital image restoration system]
IMAGE ANALYSIS AND COMPUTER VISION
[Figure: block diagram of a computer vision system]
IMAGE ANALYSIS TECHNIQUES
1) Feature extraction
2) Spatial features
3) Amplitude features
4) Shape features
5) Geometrical features
6) Segmentation of the image
7) Classification and understanding
8) Contour following
9) Boundary representation
10) Chain coding (see the sketch after this list)
11) Image reconstruction
12) Image data compression
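Since boundary representation and chain coding are only named above, here is a minimal, self-contained sketch (our own illustration, not from the original text) of 8-directional Freeman chain coding of a closed boundary given as a list of pixel coordinates:

# Minimal sketch: 8-directional Freeman chain code of a pixel boundary.
# Direction codes: 0 = right, 1 = up-right, 2 = up, ..., 7 = down-right
# (one common convention; others are possible).
DIRECTION_CODES = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def chain_code(boundary):
    """Chain code of a closed boundary given as consecutive (x, y) pixels,
    each adjacent to the next (8-connectivity assumed)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(DIRECTION_CODES[(x1 - x0, y1 - y0)])
    return codes

# A small square boundary traversed counter-clockwise.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(chain_code(square))   # [0, 0, 2, 2, 4, 4, 6, 6]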

 REPRESENTATION
1) Neighbours of a pixel
2) Connectivity
3) Need for transforms
4) Image enhancement
5) Spatial domain methods
6) Frequency domain methods
7) Point processing methods
8) Image negatives
9) Contrast stretching (image negatives and contrast stretching are illustrated in the sketch after this list)
10) Compression stretching
11) Gray-level slicing
12) Image subtraction
13) Filtering approach
14) Spatial filtering
15) Frequency domain filtering
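Since image negatives and contrast stretching are only listed above, here is a minimal sketch of both point-processing operations on an 8-bit image (our own illustration; the formulas are the standard ones, not taken from the original text):

# Minimal sketch: two classic point-processing operations on an 8-bit image.
import numpy as np

def image_negative(img, levels=256):
    """Image negative: s = (L - 1) - r for an image with L gray levels."""
    return (levels - 1) - img

def contrast_stretch(img):
    """Linear contrast stretching: map [min, max] of the image onto [0, 255]."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:                       # flat image: nothing to stretch
        return img.copy()
    stretched = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return stretched.round().astype(np.uint8)

# A low-contrast 8-bit test image with values only between 100 and 150.
img = np.random.randint(100, 151, size=(4, 4), dtype=np.uint8)
print(image_negative(img))     # values between 105 and 155
print(contrast_stretch(img))   # image minimum maps to 0, image maximum to 255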
 APPLICATIONS
1) Medical diagnostic imaging
2) Remote sensing via satellite
3) Defence
4) Document processing
5) Others
6) Image representation and modelling
7) Image formation in the eye
8) Image sampling and quantization

CONCLUSION
In this overview of image processing we have studied the main aspects of improving degraded images, using image enhancement, histogram processing, contrast stretching, image negatives and similar techniques. We have seen that image processing is needed for many types of investigation and recognition.
