Wednesday, October 29, 2008
ITANIUM ARCHITECTURE
Abstract
This seminar provides an overview of the architectural details and technology used in the new Itanium processor developed by Intel. The objective is to understand the need for the new EPIC technology used in the Itanium series. Each major feature of the EPIC technology used in Itanium is described with an example.
The introductory section briefly discusses the origin of EPIC technology and the features that distinguish it from traditional technologies. It also discusses the specifications of the Itanium processor and the versions of Itanium.
The Itanium architecture does far more than just extend the Intel architecture to 64 bits. The uniquely designed EPIC (Explicitly Parallel Instruction Computing) architecture allows the highest possible performance via new levels of parallelism for enterprise and technical applications. World-class floating-point performance enhances analytic and scientific design and visualization applications. 64-bit addressing and massive resources combine to provide a platform that can handle many terabytes of data with improved memory latency and fewer branch misses, further improving database performance. The Intel E8870 chipset supporting Itanium 2 brings high-end reliability, availability and scalability (RAS) capabilities. With Itanium 2 based systems, Intel will deliver on the promise of the Itanium architecture with industry-leading performance on a broad range of demanding enterprise computing and technical applications.
BIOMETRICS AND IRIS RECOGNITION
Abstract
Biometrics refers to the automatic identification of a person based on his/her physiological or behavioral characteristics. A biometrics system is essentially a pattern recognition system, which makes a personal identification by determining the authenticity of a specific physiological or behavioral characteristic possessed by the user. An important issue in designing a practical system is to determine how an individual is identified.
Iris recognition is a biometric technique. The iris is a colored ring in the eye surrounding the pupil, and it can serve as a living password, so security is highly maintained. No one can steal the password and misuse it: because the iris is a protected internal organ whose random texture is stable throughout life, it can act as a kind of living password.
Iris recognition is one of the best biometric methods. Though it is a little more costly than other methods, it is more reliable and highly accurate.
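The abstract above leaves out the matching step. A common approach (in the style of Daugman's iris codes, not spelled out in the original) is to compare binary templates with a normalized Hamming distance. A minimal Python sketch, assuming the codes have already been extracted from eye images:

    # Minimal sketch: compare two binary iris codes by normalized Hamming distance.
    # Assumes the codes have already been extracted from eye images (not shown here).
    def hamming_distance(code_a, code_b):
        assert len(code_a) == len(code_b)
        differing = sum(1 for a, b in zip(code_a, code_b) if a != b)
        return differing / len(code_a)

    enrolled = "1011001110001011"   # hypothetical stored template
    probe    = "1011001010001011"   # hypothetical live sample

    # A small distance suggests both codes come from the same iris.
    if hamming_distance(enrolled, probe) < 0.32:   # threshold value chosen for illustration
        print("Match: identity accepted")
    else:
        print("No match: identity rejected")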
Intrusion Detection System
Abstract
An intruder is somebody ("hacker" or "cracker") attempting to break into or misuse your system. The word "misuse" is broad, and can cover anything from something as severe as stealing confidential data to something as minor as misusing your email system for spam.
An "Intrusion Detection System (IDS)" is a system for detecting
such intrusions. Network intrusion detection systems (NIDS) monitors’ packets on
the network wire and attempts to discover if a hacker/cracker is attempting to break into a system (or cause a denial of service attack). A typical example is a system that watches for large number of TCP connection requests (SYN) to many different ports on a target machine thus discovering if someone is attempting a TCP port scan. A NIDS may run either on the target machine who watches its own traffic (usually integrated with the stack and services themselves), or on an independent machine promiscuously watching all network traffic (hub, router, probe). Note that a "network" IDS monitors many machines, whereas the others monitor only a single machine system integrity verifiers (SIV) monitors system files to find when a intruder changes them (thereby leaving behind a backdoor). The most famous of such systems is "Tripwire". A SIV may watch other components as well, such as the Windows registry and chron configuration, in order to find well know signatures. log file monitors (LFM) monitor log files generated by network services. In a similar manner to NIDS, these systems look for patterns in the log files that suggest an intruder is attacking.
A typical example would be a parser for HTTP server log files that looks for intruders who try well-known security holes, such as the "phf" attack.
swatch is one such log file monitor. A fourth category, deception systems, contains pseudo-services whose goal is to emulate well-known holes in order to entrap hackers.
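A minimal sketch of the SYN port-scan heuristic described above, assuming packets have already been captured by a sniffer and reduced to (source address, destination port, TCP flags) tuples; the threshold is an illustrative choice:

    from collections import defaultdict

    SCAN_THRESHOLD = 20   # distinct ports probed before an alert is raised (illustrative)

    def detect_syn_scans(packets):
        ports_probed = defaultdict(set)
        alerts = []
        for src_ip, dst_port, flags in packets:
            if flags == "SYN":                      # connection request only, no ACK yet
                ports_probed[src_ip].add(dst_port)
                if len(ports_probed[src_ip]) == SCAN_THRESHOLD:
                    alerts.append(f"possible TCP port scan from {src_ip}")
        return alerts

    # Hypothetical capture: one host probing many ports on a target.
    capture = [("10.0.0.5", port, "SYN") for port in range(1, 40)]
    print(detect_syn_scans(capture))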
Voice over IP
Voice over IP is related to two technologies: one is Internet telephony and the other is telephony over the Internet. Internet telephony is a term used primarily to refer to the use of software, in conjunction with a sound card and microphone, to place calls over the Internet. It normally relies on communication via a packet data network; such packet networks (IP, ATM) have become a preferred strategy for network planners. Today, data networks have progressed to the point where it is now possible to support voice and multimedia applications right over the Internet, and more and more companies are seeing the value of transporting voice over IP networks to reduce costs and to set the stage for advanced multimedia applications.
Support for voice communication using the Internet Protocol (IP), which is usually just called "voice over IP" or VoIP, has become especially attractive given the low-cost, flat-rate pricing of the public Internet. In fact, toll-quality telephony over IP has now become one of the key steps leading to the convergence of the voice, video, and data communication industries. The feasibility of carrying voice and call-signaling messages over the Internet has already been demonstrated, but delivering high-quality commercial products, establishing public services, and convincing users to buy into the vision are just beginning.
Internet Protocol Version 6 (IPV6)
Abstract
The Internet has experienced phenomenal growth in a short span, with more than 450 million users connected to it worldwide. With the Internet penetrating each and every home and office, the number of users hooked on to the Net is expected to reach 950 million by 2005.
This growth has exposed limitations of the currently used protocol for the Internet, IPV4 (Internet Protocol Version 4). The main drawback is the dearth of IPV4 addresses. To overcome these limitations, IETF (Internet Engineering Task Force) began working on the next generation protocol called IPV6 (Internet Protocol Version 6).
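To make the address-exhaustion argument concrete, here is a short calculation (not part of the original abstract) comparing the two address spaces, using Python's standard ipaddress module:

    import ipaddress

    ipv4_total = 2 ** 32          # about 4.3 billion addresses
    ipv6_total = 2 ** 128         # about 3.4 x 10**38 addresses
    print(f"IPv4 addresses: {ipv4_total:,}")
    print(f"IPv6 addresses: {ipv6_total:,}")

    # The module also shows the much longer IPv6 address format.
    print(ipaddress.ip_address("192.0.2.1"))      # an IPv4 documentation address
    print(ipaddress.ip_address("2001:db8::1"))    # an IPv6 documentation address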
INTERPLANETARY INTERNET
Abstract
Interplanetary Internet: a communication system to provide Internet-like services across interplanetary distances in support of deep space exploration. Our approach, which we refer to as bundling, builds a store-and-forward overlay network above the transport layers of the underlying networks. Bundling uses many of the techniques of electronic mail, but is directed toward interprocess communication, and is designed to operate in environments that have very long speed-of-light delays. We partition the Interplanetary Internet into IPN Regions, and discuss the implications that this has on naming and routing. We discuss the way that bundling establishes dialogs across intermittently connected internets, and go on to discuss the types of bundle nodes that exist in the Interplanetary Internet, followed by a discussion of security in the IPN and of the IPN backbone network.
With the increasing pace of space exploration, Earth will distribute large numbers of robotic vehicles, landers, and possibly even humans, to asteroids and other planets in the coming decades. Possible future missions include lander/rover/orbiter sets, sample return missions, aircraft communicating with orbiters, and outposts of humans or computers remotely operating rovers. All of these missions involve clusters of entities in relatively close proximity communicating with each other in challenging environments. These clusters, in turn, will be in intermittent contact with one another, possibly across interplanetary space. This dual-mode communications environment (relatively cheap round trips and fairly constant connectivity when communicating with "local" elements, coupled with very long delays and intermittent connectivity with non-local elements) has led us to the development of the Interplanetary Internet (IPN).
The new technology of the terrestrial Internet needs to be extended into space. We believe that the creation and adoption of Internet-friendly standards for space communication will enhance our ability to build a common interplanetary communication infrastructure. We think this infrastructure will be needed to support the expansion of human intelligence throughout the Solar System. The current terrestrial Internet and its technology provide a robust basis to support these missions in an efficient and scalable manner. In summary, the best way to envision the fundamental architecture of the Interplanetary Internet is to picture a network of internets. Ordinary internets (many wireless in nature) are placed on the surfaces of moons and planets as well as in free-flying spacecraft. These remotely deployed internets run ordinary Internet protocols. A system of Interplanetary Gateways connected by deep-space transmission links forms a backbone communication infrastructure that provides connectivity for each of the deployed internets. New long-haul protocols, some confined to the backbone network and some operating end-to-end, allow the deployed internets to communicate with each other.
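Bundling is described above as a store-and-forward overlay; the following is a purely illustrative simulation of that idea (the node names and contact schedule are invented for the example):

    # Minimal store-and-forward sketch: a node holds bundles until a contact window opens.
    class BundleNode:
        def __init__(self, name):
            self.name = name
            self.stored = []          # bundles waiting for the next contact

        def receive(self, bundle):
            print(f"{self.name}: storing bundle {bundle!r}")
            self.stored.append(bundle)

        def forward(self, next_hop, link_up):
            if not link_up:
                print(f"{self.name}: link to {next_hop.name} is down, holding {len(self.stored)} bundle(s)")
                return
            while self.stored:
                next_hop.receive(self.stored.pop(0))

    mars_rover   = BundleNode("mars-rover")
    mars_orbiter = BundleNode("mars-orbiter")
    earth_gw     = BundleNode("earth-gateway")

    mars_rover.receive("science-data-001")
    mars_rover.forward(mars_orbiter, link_up=True)    # cheap local hop
    mars_orbiter.forward(earth_gw, link_up=False)     # deep-space link not yet available
    mars_orbiter.forward(earth_gw, link_up=True)      # contact window opens later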
HONEYPOTS
ABSTRACT
Any commander will tell his soldiers that to secure yourself against the enemy, you first have to know who your enemy is. This military doctrine readily applies to the world of network security. Just like the military, you have resources that you are trying to protect. To help protect these resources, you need to know who your threat is and how they are going to attack.
Security professionals all around the world have been searching along this line of thought. One of the tools developed as a result is the honeypot. The sole purpose of a honeypot is to look and act like a legitimate computer while actually being configured to interact with potential hackers in such a way as to capture the details of their attacks. If a honeypot is successful, the intruder will have no idea that s/he is being tricked and monitored.
A honeypot can be defined as "a security resource whose value lies in being probed, attacked or compromised". This means that whatever we designate as a honeypot, it is our expectation and goal to have the system probed, attacked, and potentially exploited. The honeypot contains no data or applications critical to the company but has enough interesting data to lure a cracker.
A honeypot may be a system that merely emulates other systems or applications, creates a jailed environment, or may be a standard, fully built system. Regardless of how you build and use the honeypot, its value lies in the fact that it is attacked. Honeypots are designed to mimic systems that an intruder would like to break into, but limit the intruder from having access to an entire network.
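As a concrete picture of a pseudo-service, here is a minimal low-interaction sketch: a listener on a port the machine has no real use for, which records whatever the intruder sends. The port number and fake banner are invented for the example; real honeypots log far more and are carefully isolated.

    import socket
    from datetime import datetime

    HOST, PORT = "0.0.0.0", 2323   # pretend-telnet port chosen for the example

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        print(f"decoy service listening on port {PORT}")
        conn, addr = server.accept()
        with conn:
            conn.sendall(b"login: ")                 # bait banner
            data = conn.recv(1024)
            # Everything the attacker types is evidence worth keeping.
            print(f"{datetime.now()} connection from {addr[0]}, sent: {data!r}")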
In fact, the use of honeypots is not very new. A report by Keith Johnson (The Wall Street Journal Online, December 18, 2000, 4:00 PM PT) describes a real-life example of how crackers were monitored using a honeypot. The excerpt is as follows:
When a group of suspected Pakistani crackers broke into a U.S.-based computer system in June, they thought they had found a vulnerable network to use as an anonymous launching pad to attack Web sites across India.
But what they had done was walk right into a trap known as a honeypot -- a specially equipped system deployed by security professionals to lure crackers and track every move of theirs. For a month, every keystroke they made, every tool they used, and every word of their online chat sessions was recorded and studied. The honeypot administrators learned how the crackers chose their targets, what level of expertise they had, what their favorite kinds of attacks were, and how they went about trying to cover their tracks so that they could nest on compromised systems.
GIGABIT ETHERNET
ABSTRACT
Ethernet has been the world's most pervasive networking technology since the 1970s. It is estimated that in 1996, 82% of all networking equipment shipped was Ethernet. In 1995, the Fast Ethernet standard was approved by the IEEE. Fast Ethernet provided 10 times higher bandwidth, along with other new features such as full-duplex operation and auto-negotiation. This established Ethernet as a scalable technology.
Now, with the emerging Gigabit Ethernet standard, it is expected to scale even further. The Fast Ethernet standard was pushed by an industry consortium called the Fast Ethernet Alliance.
A similar alliance, called the Gigabit Ethernet Alliance, was formed by 11 companies in May 1996, soon after the IEEE announced the formation of the 802.3z Gigabit Ethernet standards project. At last count, there were over 95 companies in the alliance from the networking, computer and integrated circuit industries. A draft 802.3z standard was issued by the IEEE in July 1997.
The new Gigabit Ethernet standard will be fully compatible with existing Ethernet installations. It will retain Carrier Sense Multiple Access/Collision Detection (CSMA/CD) as the access method. It will support full-duplex as well as half-duplex modes of operation. Initially, single-mode and multi-mode fiber and short-haul coaxial cable will be supported. The standard uses the physical signaling technology used in Fibre Channel to support gigabit rates over optical fiber.
Initially, Gigabit Ethernet was deployed as a backbone in existing networks. It can be used to aggregate traffic between clients and "server farms", and for connecting Fast Ethernet switches. It can also be used for connecting workstations and servers for high-bandwidth applications such as medical imaging or CAD.
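Since CSMA/CD is retained as the access method, a short sketch of its collision-recovery rule, truncated binary exponential backoff, may help (this is a simplification, not the full 802.3 state machine):

    import random

    def backoff_slots(collisions):
        # Random number of slot times to wait after the given number of collisions.
        k = min(collisions, 10)            # the exponent is capped at 10 in 802.3
        return random.randint(0, 2 ** k - 1)

    # Show how the possible waiting range grows as collisions repeat.
    for collisions in range(1, 6):
        upper = 2 ** min(collisions, 10) - 1
        print(f"after collision {collisions}: wait {backoff_slots(collisions)} of 0..{upper} slot times")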
Genetic Algorithms (GAs)
Abstract
Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. The basic concept of GAs is to simulate the processes in natural systems necessary for evolution, specifically those that follow the principle first laid down by Charles Darwin of "survival of the fittest". As such, they represent an intelligent exploitation of a random search within a defined search space to solve a problem.
First pioneered by John Holland in the 1960s, genetic algorithms have been widely studied, experimented with and applied in many engineering fields. Not only do GAs provide an alternative method of solving problems, they consistently outperform traditional methods on many of them. Many real-world problems involve finding optimal parameters, which might prove difficult for traditional methods but ideal for GAs. However, because of their outstanding performance in optimisation, GAs have sometimes been wrongly regarded as mere function optimisers. In fact, there are many ways to view genetic algorithms. Perhaps most users come to GAs looking for a problem solver, but this is a restrictive view.
The genetic algorithm can solve problems that do not have a precisely defined solving method, or, if they do, problems for which following the exact solving method would take far too much time. There are many such problems; actually, all still-open, interesting problems are like that. Such problems are often characterised by multiple, complex, sometimes even contradictory constraints that must all be satisfied at the same time. Examples are crew and team planning, delivery itineraries, finding the most beneficial locations for stores or warehouses, building statistical models, etc.
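A minimal sketch of the selection, crossover and mutation loop, maximising a toy fitness function (the problem, encoding, population size and rates below are chosen only for illustration):

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

    def fitness(genome):
        return sum(genome)                      # toy objective: count the 1 bits

    def select(population):
        # Tournament selection: the fitter of two random individuals becomes a parent.
        a, b = random.sample(population, 2)
        return a if fitness(a) > fitness(b) else b

    def crossover(p1, p2):
        cut = random.randint(1, GENOME_LEN - 1)
        return p1[:cut] + p2[cut:]

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print("best individual:", best, "fitness:", fitness(best))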
GENERAL PACKET RADIO SYSTEM (GPRS)
ABSTRACT:-
Wireless communications lets people live and work in ways never before possible. With over two hundred million cellular subscribers worldwide, users have overwhelmingly embraced the concept of having a telephone that is always with them. And now business users also want a data connection with the office wherever they go, so that they can have access to e-mail, the Internet, their files, faxes and other data wherever and whenever it is needed, giving them a competitive advantage and more flexible lifestyles. A number of wireless data services are available today, but none are as exciting as a forthcoming data service for GSM networks called General Packet Radio Service (GPRS). GPRS refers to a high-speed packet data technology, which is expected to be deployed in the next two years. It is expected to profoundly alter and improve the end-user experience of mobile data computing, by making it possible and cost-effective to remain constantly connected, as well as to send and receive data at much higher speeds than today. Its main innovations are that it is packet based, that it will increase data transmission speeds from the current 9.6 Kbps to over 100 Kbps, and that it will extend the Internet connection all the way to the mobile PC -- the user will no longer need to dial up a separate ISP. GPRS will complement rather than replace the current data services available through today’s GSM digital cellular networks, such as circuit-switched data and Short Message Service. It will also provide the type of data capabilities planned for "third generation" cellular networks, but years ahead of them. In this paper, we take a look at:
Rijndael Block Cipher
Abstract
This standard specifies the Rijndael algorithm, a symmetric block cipher that can process data blocks of 128 bits, using cipher keys with lengths of 128, 192, and 256 bits. Rijndael was designed to handle additional block sizes and key lengths; however, they are not adopted in this standard.
Throughout the remainder of this standard, the algorithm specified herein will be referred to as “the AES algorithm.” The algorithm may be used with the three different key lengths indicated above, and therefore these different “flavors” may be referred to as “AES-128”, “AES-192”, and “AES-256”.
Encryption is the process of transforming information from an unsecured form ("clear" or "plaintext") into coded information ("ciphertext"), which cannot be easily read by outside parties. The transformation process is controlled by an algorithm and a key. The process must be reversible so that the intended recipient can return the information to its original, readable form, but reversing the process without the appropriate encryption information should be impossible. This means that details of the key must also be kept secret.
Encryption is generally regarded as the safest method of guarding against accidental or purposeful security breaches. The strength of an encryption method is often measured in terms of its work factor: the amount of work required to 'break' the encryption. A strong system will take longer to break, although the time can be reduced by applying greater force (the more effort that is put into the attack, the less time is required to break the code).
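A short usage sketch of AES-128 on a single block, assuming the widely used third-party Python package cryptography is installed; the key and plaintext are made-up demo values, and ECB mode is used here only because it exposes a single raw block operation (it should not be used for real messages):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key       = bytes(range(16))            # 128-bit key, hence "AES-128"
    plaintext = b"attack at dawn!!"         # exactly one 128-bit block

    encryptor  = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    recovered = decryptor.update(ciphertext) + decryptor.finalize()

    print("ciphertext:", ciphertext.hex())
    print("recovered :", recovered)          # matches the original plaintext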
EXTREME ULTRAVIOLET LITHOGRAPHY
Abstract:
EUV lithography (EUVL) is a relatively new form of lithography that uses extreme ultraviolet (EUV) radiation, with a wavelength in the range of 10 to 14 nanometers (nm), to carry out projection imaging.
Currently, and for the last several decades, optical projection lithography has been the lithographic technique used in the high-volume manufacture of integrated circuits. It is widely anticipated that improvements in this technology will allow it to remain the semiconductor industry's workhorse through the 100 nm generation of devices. However, some time around the year 2005, so-called Next-Generation Lithographies will be required. EUVL is one such technology vying to become the successor to optical lithography. This topic provides an overview of the capabilities of EUVL, and explains how EUVL might be implemented. The challenges that must be overcome in order for EUVL to qualify for high-volume manufacture are also discussed.
Optical projection lithography is the technology used to print the intricate patterns that define integrated circuits onto semiconductor wafers. Typically, a pattern on a mask is imaged, with a reduction of 4:1, by a highly accurate camera onto a silicon wafer coated with photoresist. Continued improvements in optical projection lithography have enabled the printing of ever finer features, the smallest feature size decreasing by about 30% every two years. This, in turn, has allowed the integrated circuit industry to produce ever more powerful and cost-effective semiconductor devices. On average, the number of transistors in a state-of-the-art integrated circuit has doubled every 18 months.
Currently, the most advanced lithographic tools used in high-volume manufacture employ deep-ultraviolet (DUV) radiation with a wavelength of 248 nm to print features that have line widths as small as 200 nm. It is believed that new DUV tools, presently in advanced development, that employ radiation that has a wavelength of 193 nm, will enable optical lithography to print features as small as 100 nm, but only with very great difficulty for high-volume manufacture.
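The link between wavelength and printable feature size is usually summarised by the Rayleigh criterion, CD = k1 * wavelength / NA. The formula and the k1 and numerical-aperture values below are supplied here for rough illustration; they are not taken from the abstract:

    def critical_dimension(wavelength_nm, k1, numerical_aperture):
        return k1 * wavelength_nm / numerical_aperture

    # Assumed, typical-order parameter values for each generation of tools.
    print("DUV 248 nm :", round(critical_dimension(248, 0.5, 0.6)), "nm")     # roughly 200 nm features
    print("DUV 193 nm :", round(critical_dimension(193, 0.35, 0.7)), "nm")    # roughly 100 nm features
    print("EUV 13.4 nm:", round(critical_dimension(13.4, 0.5, 0.25)), "nm")   # tens of nanometers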
Over the next several years it will be necessary for the semiconductor industry to identify a new lithographic technology that will carry it into the future, eventually enabling the printing of lines as small as 30 nm. Potential successors to optical projection lithography are being aggressively developed. These are known as "Next-Generation Lithographies" (NGL's). EUV lithography (EUVL) is one of the leading NGL technologies; others include x-ray lithography, ion-beam projection lithography, and electron-beam projection lithography.
In many respects, EUVL may be viewed as a natural extension of optical projection lithography, since it uses short-wavelength radiation (light) to carry out projection imaging. In spite of this similarity, there are major differences between the two technologies. Most of these differences arise because the properties of materials in the EUV portion of the electromagnetic spectrum are very different from those in the visible and UV wavelength ranges. The discussion here describes those differences and aims to provide the reader with an understanding of the challenges that must be overcome if EUVL is to fulfill its promise in high-volume manufacture.
Enterprise Resource Planning
Abstract
In today's dynamic and turbulent business environment, there is a strong need for organizations to become globally competitive. The survival guide to competitiveness is to be closer to the customer and deliver value-added products and services in the shortest possible time. This, in turn, demands integration of the business processes of an enterprise. Enterprise Resource Planning (ERP) is such a strategic tool, which helps a company gain a competitive edge by integrating all business processes and optimizing the resources available. This paper throws light on how ERP evolved, what makes up an ERP system and what it has to offer to industry. The paper includes the role of the various enabling technologies which led to the development of such a huge system. Everyone knows at least the name, but not where it is applied or why it is related to the Computer Science and Engineering field. The report clears up these doubts about ERP, gives the evolution of ERP, and adds information about recent and future techniques being implemented in industry. It may also be helpful to an industrialist who is going to implement ERP, by giving a brief idea of the hidden costs and downfalls of ERP. Relating this to our field, the report explains how the whole system is implemented by a software company.
Embedded Linux
Abstract
The overall embedded market is undergoing a major transformation in both design and functionality. Networking technologies are becoming increasingly important for embedded developers. Driven by the proliferation of the Internet and the increasing ubiquity of embedded computer systems, devices that can communicate with other devices are becoming dominant in the embedded market. The Linux operating system has been available for years for servers and desktops, and has continued to gain market share in both of these computing segments. The world is being captured by small devices, and these devices are entering our lives at a very rapid rate. You will find your day-to-day devices getting smaller and smaller, yet capable of outperforming their ancestors in terms of performance and efficiency. In order to control these devices it is necessary to have an equally capable operating system, small enough to fit in but competent enough to provide the required functionality.
In this report we will look at such an operating system, Embedded Linux, and its architecture, along with the procedure for creating one. At the end we will discuss a few embedded systems that have already been developed and are controlled by Embedded Linux.
ELLIPTIC CURVE CRYPTOGRAPHY
ABSTRACT
Since the invention of public-key cryptography in 1976 by Whitfield Diffie and Martin Hellman, numerous public-key cryptographic systems have been proposed. All of these systems rely on the difficulty of a mathematical problem for their security. One such difficult mathematical problem is the elliptic curve discrete logarithm problem (ECDLP). Its use in public-key cryptography was first proposed in 1985 by Koblitz and Miller. Since then it has been an interesting field of study.
Elliptic curves have been extensively studied for over a hundred years, and there is a vast literature on the topic. The primary advantage that elliptic curve systems have over other mathematical systems is the absence of a subexponential-time algorithm. Consequently, one can use an elliptic curve group that is smaller in size while maintaining the same level of security. The result is smaller key sizes, bandwidth savings, and faster implementations, features which are especially attractive for security applications where computational power and integrated circuit space are limited.
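A toy sketch of the underlying arithmetic: point addition and scalar multiplication on a deliberately tiny curve over a prime field (real systems use curves over fields of at least 160 bits; the parameters below are for illustration only):

    # Toy elliptic curve y^2 = x^3 + a*x + b (mod p) with tiny parameters.
    p, a, b = 97, 2, 3
    O = None                       # the point at infinity (group identity)

    def add(P, Q):
        if P is O: return Q
        if Q is O: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return O                                    # P + (-P) = O
        if P == Q:
            s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # tangent slope
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, p)         # chord slope
        x3 = (s * s - x1 - x2) % p
        return (x3, (s * (x1 - x3) - y1) % p)

    def scalar_mult(k, P):
        # Double-and-add: computing k*P is easy; recovering k from k*P is the hard ECDLP.
        R = O
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    G = (3, 6)                     # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
    print("7 * G =", scalar_mult(7, G))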
DOMAIN NAME SYSTEM (DNS)
ABSTRACT
The essence of DNS is the invention of a hierarchical, domain-based naming scheme and a distributed database system for implementing this naming scheme. The naming scheme is interesting for two reasons. First, it has been used to assign machine names throughout the global Internet. Second, because it uses a geographically distributed set of servers to map names to addresses, the implementation of the name-mapping mechanism provides a large-scale example of the client-server paradigm. It is primarily used for mapping host names and email destinations to IP addresses but can also be used for other purposes. DNS is defined in RFCs 1034 and 1035.
The way DNS is used is as follows. To map a name onto an IP address, an application program calls a library procedure called the resolver, passing it the name as a parameter. The resolver sends a UDP packet to a local DNS server, which then looks up the name and returns the IP address to the resolver, which in turn returns it to the caller. Armed with the IP address, the program can then establish a TCP connection with the destination, or send it UDP packets.
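A minimal sketch of the resolver call just described, using Python's standard socket library; the hostname is only an example, and the lookup goes through the system's configured DNS server:

    import socket

    hostname = "www.example.com"
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP):
        print(hostname, "->", sockaddr[0])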
DIGITAL WATERMARK
ABSTRACT
To establish the legitimacy of a physical entity such as a product or a research paper (traditional media), you can use logos, holograms, trademarks, letterheads or serial numbers.
What about digital media?
So you need some technique, some way, to establish the legitimacy of your copy of digital material. That gives birth to the technique called the digital watermark.
In this paper we are going to study techniques for embedding a secret, imperceptible signal directly into the original data in such a way that it always remains present; this signal is called a digital watermark. Here we learn what a watermark actually is and what its applications are. The most obvious application is to use watermarks to encode information which can prove ownership, e.g., copyright. However, digital watermarks can also be used to encode copy or read permissions and quality-control information.
In this paper we stick to image watermarking: an image authentication technique is proposed that embeds each image with a signature so as to discourage unauthorized copying. We also note a possible misuse of this technique for hidden communication, in the branch of steganography.
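The paper does not fix an embedding scheme here; the simplest common illustration is least-significant-bit (LSB) embedding, sketched below on a made-up grayscale pixel array. Real watermarking schemes are far more robust; this only shows how a signature can be hidden imperceptibly.

    image          = [200, 117, 58, 91, 143, 76, 230, 12]   # pixel values 0..255 (made up)
    watermark_bits = [1, 0, 1, 1, 0, 0, 1, 0]               # the hidden signature

    # Embed: overwrite each pixel's least significant bit with one watermark bit.
    marked = [(pixel & ~1) | bit for pixel, bit in zip(image, watermark_bits)]

    # Extract: read the least significant bits back out.
    recovered = [pixel & 1 for pixel in marked]

    print("marked pixels :", marked)        # differs from the original by at most 1 per pixel
    print("recovered bits:", recovered)     # equals watermark_bits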
DVD, or Digital Versatile Disc
Abstract
DVD, or Digital Versatile Disc (not digital video disc), is the next generation of optical disc storage technology. Storage technology has come a long way: first there were gramophone records, then magnetic tapes, then CDs, the first step in optical disc storage; now there are DVDs.
As the name suggests, it is versatile in the sense that it can store any kind of digital data, such as high-quality video databases, motion pictures, document images, audio recordings, multimedia titles, and so on.
As we all know, DVDs provide storage capacities measured in gigabytes, far higher than the megabyte capacities of CDs. The DVD format provides several configurations of data layers, moving from 2D storage towards 3D storage.
Digital Subscriber line (DSL)
Abstract
Imagine giving employees immediate, around-the-clock access to all the data they need — the Internet, local area networks (LANs) or wide area networks (WAN) — with no more waiting for dial-up modems to connect, and no more busy signals. Imagine giving them the power to use a single phone line to access data while simultaneously talking on the phone or sending a fax. And imagine doing it without investing in a major system overhaul.
You can do it all with Digital Subscriber Line (DSL) — from BellSouth. DSL service is a new modem technology that transforms a regular copper twisted-pair telephone line into an ultra high-speed conduit for simultaneous voice and data transmissions. Now you can use DSL service to speed up your data communications to and from the Internet, Intranet and corporate network. Tasks that would normally take minutes or longer can now be completed in mere seconds, thanks to BellSouth Business DSL speeds of up to 1.5 Mbps downstream and 256 Kbps upstream for our standard product offering.
DSL uses the existing phone line and in most cases does not require an additional phone line. This gives "always-on" Internet access and does not tie up the phone line. DSL offers users a choice of speeds ranging from 144 Kbps to 1.5 Mbps. This is roughly 2.5 to 25 times faster than a standard 56 Kbps dial-up modem.
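A quick back-of-the-envelope check of that range, using only the figures quoted above (pure arithmetic, ignoring protocol overhead):

    DIALUP_KBPS = 56
    dsl_speeds_kbps = {"low-end DSL": 144, "high-end DSL": 1500}

    for name, kbps in dsl_speeds_kbps.items():
        print(f"{name}: {kbps} Kbps, about {kbps / DIALUP_KBPS:.1f}x a 56 Kbps modem")

    # Time to move a 5 MB file at each speed.
    file_bits = 5 * 8 * 1_000_000
    for name, kbps in {"56 Kbps modem": DIALUP_KBPS, **dsl_speeds_kbps}.items():
        print(f"5 MB over {name}: {file_bits / (kbps * 1000):.0f} seconds")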
This digital service can be used to deliver bandwidth-intensive applications like streaming audio/video, online games, application programs, telephone calling, video conferencing and other high-bandwidth services.
Today DSL is for the first time putting high-speed Internet access within the reach of the home, small and medium-size businesses. DSL takes existing voice cables that connect customer premises to the phone company's central office (CO) and turns them into a high-speed digital link.
Over any given line, the maximum DSL speed is determined by the distance between the customer site and the Central Office (CO). Most ISPs offer Symmetric DSL (SDSL) data services at speeds that vary from 144 Kbps to 1.54 Mbps, and now even faster, up to 6.0 Mbps, so customers can choose the rate that meets their specific needs. At the customer premises, a DSL router or modem connects the DSL line to a local-area network (LAN) or an individual computer. Once installed, the DSL router provides the customer site with a continuous connection to the Internet and use of the telephone at the same time.
Decompilers
Abstract
This seminar presents an overview of decompilers: programs that perform the exact reverse of what compilers do, i.e. create high-level language code from machine or assembly language code. Decompilers come into the picture when a user needs source code from executables, which happens on a number of occasions. Decompilation is a form of reverse engineering, which can be used for positive as well as negative purposes depending on the application the user puts it to. Hence we also need ways to protect our code from decompilers to avoid misuse. In industry, people are taking decompilation quite seriously, like any other discipline, while discovering its usability. In this seminar I find I have a foot in two different camps: as a programmer I am interested in understanding how others achieve interesting effects, but from a business point of view I am not too keen on someone reusing my code and selling it on to third parties as their own. This paper presents an overview of how decompilers work and their areas of usability.
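As a small illustration of the raw material a decompiler has to work from, Python's standard dis module can show the low-level bytecode a compiler produces for a simple function (Python bytecode stands in here for machine code; real decompilers target native or JVM/.NET binaries):

    import dis

    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    # A decompiler's job is to reconstruct readable source like the function above
    # from instruction listings like the one this prints.
    dis.dis(celsius_to_fahrenheit)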
DATA CENTER - NEXT GENERATION HIGH STORAGE
Abstract
Nowadays, the demand for the Internet and intranets increases continually, and the data transferred over networks keeps growing; to store such large volumes of data, a high storage capacity is required. Also, as On-Line Transaction Processing (OLTP) increases, the storage required increases again. To solve the problem of the high storage capacity required to store this data, data centers have been developed all over the world. The capacity of a data center depends on its size. In India too, data center development is increasing continually. Bigger organizations and enterprises also require the high storage that data centers provide. Over the next ten years, data centers will grow at a very fast pace.
Cyclone – A safe dialect of C
Abstract
Cyclone is a programming language based on C that is safe, meaning that it rules out programs that have buffer overflows, dangling pointers, format string attacks, and so on. High-level, type-safe languages, such as Java, Scheme, or ML also provide safety, but they don't give the same control over data representations and memory management that C does. Furthermore, porting legacy C code to these languages or interfacing with legacy C libraries is a difficult and error-prone process. The goal of Cyclone is to give programmers the same low-level control and performance of C without sacrificing safety, and to make it easy to port or interface with legacy C code.
Many software systems, including operating systems, device
drivers and file servers require fine-grained control over data representation (e.g., field layout) and resource management (e.g., memory management). The de facto language for coding such systems is C. However, in providing low-level control, C admits a wide class of dangerous — and extremely common — safety violations, such as incorrect type casts, buffer overruns, dangling-pointer dereferences, and space leaks.
"C is a very powerful language, but you can also hang yourself with that power," says Graham Hutton, an expert in computer languages at the University of Nottingham.
Higher-level, type-safe languages avoid these drawbacks, but in so doing, they often fail to give programmers the control needed in low-level systems. Moreover, porting or extending legacy code is often prohibitively expensive.
Therefore, a safe language at the C level of abstraction, with an easy porting path, would be an attractive option. Cyclone is a safe dialect of C. It has been designed to prevent the buffer overflows, format string attacks, and memory management errors that are common in C programs, while retaining C's syntax and semantics. In order to provide this safety, Cyclone introduces some run-time checks, such as a null check, so that a null location can never be accessed. It also imposes some restrictions: the free operation is a no-op, and pointer arithmetic is not allowed on pointers whose bounds information is not available to the compiler.
SECURITY ANALYSIS TOOL FOR AUDITING NETWORK (SATAN)
Abstract
SATAN is the Security Analysis Tool for Auditing Networks. In its simplest (and default) mode, it gathers as much information about remote hosts and networks as possible by examining such network services as finger, NFS, NIS, ftp and tftp, rexd, and other services. The information gathered includes the presence of various network information services as well as potential security flaws -- usually in the form of incorrectly set up or configured network services, well-known bugs in system or network utilities, or poor or ignorant policy decisions. It can then either report on this data or use a simple rule-based system to investigate any potential security problems. Users can then examine, query, and analyze the output with an HTML browser, such as Mosaic, Netscape, or Lynx. While the program is primarily geared towards analyzing the security implications of the results, a great deal of general network information can be gained when using the tool - network topology, network services running, types of hardware and software being used on the network, etc.
However, the real power of SATAN comes into play when used in exploratory mode. Based on the initial data collection and a user configurable ruleset, it will examine the avenues of trust and dependency and iterate further data collection runs over secondary hosts. This not only allows the user to analyze her or his own network or hosts, but also to examine the real implications inherent in network trust and services and help them make reasonably educated decisions about the security level of the systems involved.
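A minimal sketch of the kind of information gathering described above: probing a host for a few well-known services with plain TCP connection attempts. The target and port list are placeholders, and such probes should only ever be run against hosts you are authorized to audit.

    import socket

    target = "127.0.0.1"
    well_known = {21: "ftp", 23: "telnet", 79: "finger", 111: "rpcbind", 2049: "nfs"}

    for port, service in well_known.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            open_port = (s.connect_ex((target, port)) == 0)
        print(f"{target}:{port:<5} {service:<10} {'open' if open_port else 'closed/filtered'}")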
CHANGING TRENDS IN RAMS
Abstract
Today almost everyone knows what RAM is, but I suspect few know the history of RAM and the various types of RAM, right from the older ones. The purpose of my seminar on 'CHANGING TRENDS IN RAMS' is to create awareness of the various types of random access memory, from the older ones to the latest technologies, to compare them and recognize their advantages and disadvantages, and to give some details about each of them.
BREW (Binary Runtime Environment for Wireless)
Abstract
All of us know the importance of wireless networks. Today no one can stay apart from this technology; we can't imagine the world without it.
The intention of this seminar is to understand this whole network, its requirements and its problems. The seminar gives an idea of how the network can be used efficiently using BREW.
This seminar is not concerned with technologies like WAP, 3G, Bluetooth and so on in detail, but it describes all of them in a different, more generalized manner.
Along with this discussion, the main focus of the seminar is the changing view of developers in the wireless industry.
BIONIC AGE
Abstract:
The bionic age is basically about technology combining biology and electronics. Advanced Bionics Corporation, of Sylmar, CA, announced that the U.S. Food and Drug Administration (FDA) has approved its Bionic Ear System. The new technology is approved for use in children and adults with profound hearing loss in both ears. An estimated 460,000 to 740,000 people in the United States who are severely or profoundly hearing impaired may benefit from bionic ear surgery.
SPEAR3 (Speech Processor for Electrical and Acoustic Research) is an advanced body-worn speech processor developed by the CRC for Cochlear Implant and Hearing Aid Innovation to enable high-level speech processing research applicable to cochlear implants and/or hearing aids. The purpose of the seminar is to give an idea of the bionic age and details of the bionic ear and the speech processor used in cochlear implants.
In this rapidly changing technological world the question remains unanswered: which is the technology of the future? Which technology will take over in the 21st century? Could it be the fast-advancing world of computers and microprocessors? Could it be the world of digital hardware, integrated circuits, mobile and satellite communication, a world full of electronics? Or a biological world where clones, genomes and the other biological properties that constitute this world have changed, the world of biology?
For years together we have worked hard to make this world a better place to live in, but whether science is a curse or a blessing remains questionable. We have found a way to survive, but until now we have not been able to change the forces that govern us. We have learned a lot from nature and put it to practical use, but what if the basis on which we stand today changes tomorrow, if the very fundamental things that constitute the human being change in the future? Could the living beings on this earth change? Change in a way that is irreversible? Could the human machine, god's best gift, be made into a 'machine-human'? Could this be our future?
What if sciences such as computing, electronics and biology combined into a single science called 'bionics'? Would this science answer the questions of tomorrow? Would bionics prove to be the answer to questions that have probably never been answered before? Let's have a look at the world of 'BIONICS'.
BIOCHIP
ABSTRACT
A biochip is a collection of miniaturized test sites (microarrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher throughput and speed. Typically, a biochip's surface area is no larger than a fingernail. Like a computer chip that can perform millions of mathematical operations in one second, a biochip can perform thousands of biological reactions, such as decoding genes, in a few seconds.
In addition to genetic applications, the biochip is being used in toxicological, protein, and biochemical research. Biochips can also be used to rapidly detect chemical agents used in biological warfare so that defensive measures can be taken. Biology is evolving rapidly towards an information science, a shift that has been compared to quantum mechanics in the 1930s and the idea that it would then become possible to calculate the properties of all materials under all conditions from mathematical equations. That was true, but it took years to advance materials science and computational chemistry to where they are today, and this came about by transferring the practical applications of the equations from mathematicians and theoretical physicists to other disciplines, where they were looked at from new perspectives. The same is now true of biology.
Important advances will now come as physicists, mathematicians and engineers join with biologists to attain the future that, in principle, they now know is possible.
AUTOMATED TELLER MACHINE
ABSTRACT
In simple words, an ATM can be described as a machine which dispenses money after reading information from a card inserted into it. ATMs can now be seen in many places. This seminar focuses on the working of an ATM: what happens from the moment the card is inserted until the money is received is explained in detail. Other aspects, such as the parts of an ATM and the security of an ATM, are also covered.
An ATM is simply a data terminal with two input and four output devices. Like any other data terminal, the ATM has to connect to, and communicate through, a host processor.
The host processor is analogous to an Internet Service Provider (ISP) in that it is the gateway through which all the various ATM networks become available to the cardholder.
Most host processors can support either leased-line or dial-up machines.
Leased-line machines connect directly to the host processor through a four-wire, point-to-point, dedicated telephone line.
Dial-up ATMs connect to the host processor through a normal phone line or through an Internet service provider.
Leased-line ATMs are preferred for very high-volume locations because of their throughput capability, and dial-up ATMs are preferred where cost is a greater factor than throughput.
Leased-line machines commonly use a monochrome or color CRT (cathode ray tube) display.
Dial-up machines commonly use a monochrome or color LCD.
The host processor may be owned by a bank or financial institution, or it may be owned by an independent service provider. Bank-owned processors normally support only bank-owned machines, whereas the independent processors support merchant-owned machines.
Animatronics
Abstract
The animatronics of today play various roles in special-effects movies and generate fabulous entertainment opportunities. At the same time, they are being merged into remote operations and remote surgery, allowing doctors to treat patients at a distance and scientists to explore deep oceans and remote planets.
In the future, many animatrons will work for us, performing crash tests, playing dummies in safety testing, and also appearing in entertainment.
We can expect future animatrons to have AI (artificial intelligence) and their own learning models, so that the director of a movie can interact with them like any other human being.
Adaptive Brain Interface
Abstract
In simple words ABI can be defined as a human computer interface that accepts voluntary commands directly from the brain. The central aim of ABI is to extend the capabilities of physically impaired people. The brain-computer interface provides new ways for individuals to interact with their environment. The computer will continue to be a necessary component as long as detecting a brain response reliably remains a complex analytical task. In most cases, the brain response itself is not new, just the means of detecting it and applying it as a control. However, the necessary feedback associated with experimental trials frequently resulted in improved, or at least changed performance. Little is known about the long-term effects of such training either from an individual difference, or from a basic human physiology point of view.
A brain-computer interface (BCI) is a system that acquires and analyzes neural (brain) signals with the goal of creating a high bandwidth communications channel directly between the brain and the computer. The objective of the ABI project is to use EEG signals as an alternative means of interaction with computers. As such, the goal is to develop a brain-actuated mouse.
Active Directory Services in Windows 2000
Abstract
We use directory service to uniquely identify users and resources on a network. Active Directory in Microsoft Windows 2000 is a significant enhancement over the directory services provided in previous versions of Windows. Active Directory provides a single point of network management, allowing us to add, remove and relocate users and different resources easily.
Windows 2000 uses Active Directory to provide directory services. It is important to understand the overall purpose of Active Directory and the key features it provides. Understanding the interactions of Active Directory architectural components provides the basis for understanding how Active Directory stores and retrieves the data. This seminar concentrates on the Active Directory functions, its features and architecture.
DATA MINING USING NEURAL NETWORKS
Abstract
The past two decades have seen a dramatic increase in the amount of information or data being stored in electronic format. This accumulation of data has taken place at an explosive rate. It has been estimated that the amount of information in the world doubles every 20 months, and the size and number of databases are increasing even faster. The increasing use of electronic data-gathering devices such as point-of-sale or remote-sensing devices has contributed to this explosion of available data. Effectively utilizing these massive volumes of data is becoming a major problem for all enterprises.
Data storage became easier as large amounts of computing power became available at low cost; that is, the falling cost of processing power and storage made data cheap. There was also the introduction of new machine-learning methods for knowledge representation, based on logic programming and so on, in addition to traditional statistical analysis of data. The new methods tend to be computationally intensive, hence the demand for more processing power.
It was recognized that information is at the heart of business operations and that decision-makers could make use of the data stored to gain valuable insight into the business. Database Management systems gave access to the data stored but this was only a small part of what could be gained from the data. Traditional on-line transaction processing systems, OLTPs, are good at putting data into databases quickly, safely and efficiently but are not good at delivering meaningful analysis in return. Analyzing data can provide further knowledge about a business by going beyond the data explicitly stored to derive knowledge about the business. This is where Data Mining has obvious benefits for any enterprise.
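The abstract does not commit to a particular network, so as a minimal illustration of the 'neural' building block, here is a single perceptron trained on a toy dataset (the data and learning rate are invented for the example):

    import random

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # inputs -> desired output (AND)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias, learning_rate = 0.0, 0.1

    def predict(x):
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if activation > 0 else 0

    # Simple perceptron learning rule: nudge the weights whenever a prediction is wrong.
    for epoch in range(25):
        for x, target in data:
            error = target - predict(x)
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error

    print([(x, predict(x)) for x, _ in data])     # the perceptron learns the AND pattern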
DATA COMPRESSION
Abstract
It is very interesting that data can be compressed, because it implies that the information we generally pass on to each other can be conveyed in shorter information units. Instead of saying 'yes', a simple nod can do the same work, but by transmitting less data. In his 1948 paper, "A Mathematical Theory of Communication", Shannon established that there is a fundamental limit to lossless data compression. This limit, called the entropy rate, is denoted by H. The exact value of H depends on the information source, more specifically the statistical nature of the source. It is possible to compress the source, in a lossless manner, with a compression rate close to H. It is mathematically impossible to do better than H.
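A short illustration of the entropy limit H for a simple memoryless source; the symbol probabilities below are made up for the example:

    from math import log2

    probabilities = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

    H = -sum(p * log2(p) for p in probabilities.values())
    print(f"entropy H = {H:.3f} bits per symbol")   # 1.750 here, versus 2 bits for a fixed-length code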
Shannon also developed the theory of lossy data compression. This is better known as rate-distortion theory. In lossy data compression, the decompressed data does not have to be exactly the same as the original data. Instead, some amount of distortion, D, is tolerated. Shannon showed that, for a given source (with all its statistical properties known) and a given distortion measure, there is a function, R(D), called the rate-distortion function. The theory says that if D is the tolerable amount of distortion, then R(D) is the best possible compression rate.
When the compression is lossless (i.e., no distortion or D=0), the best possible compression rate is R(0)=H (for a finite alphabet source). In other words, the best possible lossless compression rate is the entropy rate. In this sense, rate-distortion theory is a generalization of lossless data compression theory, where we went from no distortion (D=0) to some distortion (D>0).
Lossless data compression theory and rate-distortion theory are known collectively as source coding theory. Source coding theory sets fundamental limits on the performance of all data compression algorithms. The theory, in itself, does not specify exactly how to design and implement these algorithms. It does, however, provide some hints and guidelines on how to achieve optimal performance.