Saturday, February 28, 2009
URBAN REGENERATION: THE ROLE OF SUSTAINABLE CONSTRUCTION
Abstract
The size and number of cities are rapidly expanding and creating pressure on natural
resources, worldwide. Simultaneously there are aspirations to regenerate the urban
environment and ensure improvements. The construction industry has a major
responsibility in achieving these requirements in a sustainable manner. Indeed, it is
recognised that modern construction needs to preserve and manage effectively all
resources. This presentation discusses the role civil engineers play in creating desirable
urban environments, and gives some practical examples of sustainable construction. The
presentation will be based on research carried out by Professor Dhir and his colleagues at
the University of Dundee and he will also take the opportunity to discuss civil engineering
study in the UK, in general, and the University of Dundee, in particular.
MODEL-BASED ENGINEERING: ADVANTAGES AND CHALLENGES
Abstract
Model-based engineering started as a vision about 20 years ago and is approaching reality
today. The basic technology is an object-oriented product model, also known as the Building
Information Model (BIM). Knowledge-based extensions and web- and GRID-service-based
usage are cutting-edge technology implementations of BIM. 4D, the modern working
methodology in construction planning, is one of its major spin-offs. In the seminar, key
concepts, modelling requirements, examples and best-practice applications will be
discussed. Topics of the course are:
• The Vision of the Building Information Model
• Integration and Interoperability
• Collaborative Working (Mapping, Matching and Merging of design data)
• Comparison of algorithms for detecting design changes
• Delta-based version management for storing design changes
• Mobile Working using BIM
• Best practice projects
Application Of Geotubes In Civil And Environmental Engineering
Abstract
Geotubes have been successfully used for shoreline protection, stabilization of dredged
materials and dewatering of slurry since the early 1980s. They have found extensive use in
a wide variety of applications because of their simplicity of placement and constructability,
effective volume containment, cost effectiveness, and minimal impact on the environment.
Using geotubes to dewater high-water-content material is still relatively new, and there are
no standard criteria available for the selection of geotube materials. This presentation
will provide an overview of current analytical and experimental research on geotubes.
Joint time-frequency analysis in structural dynamics
Abstract
Traditional Fourier analysis has been an important tool in engineering
applications for many years. However, it does not readily capture nonstationary
and local features, which are inherently present in many structural
dynamics problems. The seminar will focus on modern time-frequency
analysis techniques for capturing localized effects and evolutionary frequency
content by using wavelets, chirplets, and signal intrinsic modes. These
techniques will be presented in the context of earthquake engineering
applications, and they will be used for analyzing both historic ground
accelerograms and linear/nonlinear seismic responses of benchmark
structures. They are, however, equally applicable to a wide range of other
topics in structural engineering, and civil engineering in general.
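For readers unfamiliar with wavelet-based time-frequency analysis, the following minimal Python sketch (not part of the seminar material) computes a Morlet continuous wavelet transform of a toy non-stationary signal using plain NumPy; the test signal, the scale range and the wavelet parameter w0 are illustrative choices.

```python
import numpy as np

def morlet_cwt(signal, dt, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet.

    signal : 1-D array (e.g. a ground accelerogram sampled every dt seconds)
    scales : array of wavelet scales; larger scale corresponds to lower frequency
    Returns a (len(scales), len(signal)) array of complex coefficients.
    """
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Morlet wavelet at scale s, normalised so energy is comparable across scales
        psi = (np.pi ** -0.25) * np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi /= np.sqrt(s)
        # Correlate the signal with the conjugated wavelet (implemented via convolve)
        coeffs[i] = np.convolve(signal, np.conj(psi[::-1]), mode="same") * dt
    return coeffs

# Toy non-stationary signal: a chirp whose frequency grows with time
dt = 0.01
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * (0.5 + 0.3 * t) * t)
scales = np.geomspace(0.05, 2.0, 40)
C = morlet_cwt(x, dt, scales)
print(C.shape)   # (40, 1000): a scale (frequency) versus time map of the signal
```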
Minimising carbon footprints with concrete construction
Abstract
A recent review has concluded that climate change could cost the world between 5% and
20% of GDP, and result in famine, floods, mass movement of people and destruction of
species. Climate change is linked to the significant increase in CO2 emissions to which
construction is a major contributor; its energy sources being inextricably carbon-based and
non-renewable.
Civil engineers can play an important role in minimising climate change, through their
choice of construction methods / materials, and adopting appropriate technologies. This
presentation will discuss how the carbon footprints associated with construction can be
minimised through the appropriate use of concrete; the most widely-used construction
material. The presentation is based on research carried out by Professor Dhir and his
colleagues at the University of Dundee, and will highlight the importance of utilising
recycled materials / industrial by-products, sequestrating / capturing CO2 and utilising
thermal mass of concrete. He will also take the opportunity to discuss civil engineering
study in the UK, in general, and the University of Dundee, in particular.
Friday, February 27, 2009
Enterprise Resource Planning
In today's dynamic and turbulent business environment, there is a strong need for organizations to become globally competitive. The key to competitiveness is to be closer to the customer and to deliver value-added products and services in the shortest possible time, which in turn demands integration of the business processes of an enterprise. Enterprise Resource Planning (ERP) is such a strategic tool: it helps a company gain a competitive edge by integrating all business processes and optimizing the available resources. This paper throws light on how ERP evolved, what makes up an ERP system and what it has to offer to industry, and describes the role of the various enabling technologies that led to the development of such a large system. Many people know ERP by name but not where it is applied or why it is related to the Computer Science and Engineering field. The report clears these doubts, traces the evolution of ERP, and adds information about recent and future techniques being implemented in industry. It may also be helpful to an industrialist planning to implement ERP, by giving a brief idea of the hidden costs and pitfalls of ERP. Closer to our field, the report explains how the whole system is implemented by a software company.
Microfluidics for Bioartificial Livers - How Semiconductor Technology Can Improve Biomedical Devices
Bioartificial Liver (BAL) is a term for medical devices designed to replace natural liver functions. The idea behind the use of artificial livers is to either externally support an injured liver to recovery or bridge a patient with a failing liver to transplantation. Central to all BAL systems is a bioreactor for culturing liver cells. The main function of this reactor is to provide a cell adhesion matrix and supply the necessary nutrient solution. A high cellular oxygen uptake rate combined with low solubility in aqueous media makes oxygen supply to the liver cells the most constraining factor in current reactor designs.
Parallel-plate geometry BALs promise high efficiency for blood detoxification and liver metabolism. These devices can be manufactured in a highly parallel fashion using semiconductor technology, with dimensions close to those found naturally in the liver. However, due to the specific flow regime at this size scale, oxygen depletion in the medium remains a major problem.
In this seminar I will discuss current research to overcome transport constraints in parallel-plate BAL devices. Custom oxygen concentration profiles on the channel surface can be generated by adjusting the channel geometry, thus preventing cell death along the channel length. After introducing the underlying transport model, I will describe how semiconductor technology is used for the fabrication of a prototype fluidic device. To evaluate the model, oxygen sensors are integrated into the bioreactor and used to measure the dissolved oxygen concentration in situ. Results demonstrating the applicability of the sensors and system scale-up will be shown.
Mechanisms of Laser-Material Interaction in Microsecond Laser Ablation using a CW Single-Mode Fibre
Laser ablation with pulse durations in the microsecond range is a viable solution for applications that require a large material removal rate (MRR) with moderate hole quality. The purpose of this investigation is to examine, both experimentally and theoretically, the laser-material interaction mechanisms during microsecond laser ablation using a 300 W CW single-mode fibre laser with modulation control. The experimental portion of the investigation includes improved laser control and ablation tests. Several in-process monitoring techniques, such as photodiode measurements of the vapour intensity, high-speed photography, and spectroscopic plasma measurements, were employed. The theoretical portion is a thermo-hydrodynamic model that considers beam absorption due to multiple reflections, vaporization-induced pressure, heat convection in the melt, and the free surface at the liquid-vapour interface. Due to the very high irradiance of the fibre laser beam, the absorbed energy is not only sufficient to melt and vaporize the material but is also able to dissociate the vapour into intense plasma. Hole drilling by microsecond laser ablation results from a combination of adiabatic evaporation and ejection of fine droplets. This paper describes the interactions between the bulk material, the laser beam, and the vapour/plasma, and a process anatomy is presented to describe the temporal behaviour of these interactions.
A Computational Rod Model to Simulate the Mechanics of DNA Looping
It is well known that the structural deformations (stressed states) of the DNA molecule play a crucial role in its biological functions, including gene expression. For instance, looping in DNA (often mediated by protein binding) is a crucial step in many gene regulatory mechanisms. Functional involvement of DNA and/or proteins in several diseases is key to their diagnosis and treatment. Therefore, fundamental knowledge of the structure-function relationship may one day pave the way to new discoveries in medical research, including future drug therapies.
In this talk, I will focus on an example of protein-mediated looping of DNA that is also widely studied experimentally (see Figure 1 below). We use a 'mechanical rod' model of the DNA molecule to simulate its structural interactions with proteins/enzymes during gene expression. Our rod model can simulate the nonlinear dynamics of loop/supercoil formation in DNA on "long length scales". The formulation accounts for the structural stiffness of the DNA strand, its intrinsic curvature and chiral (right-handed helical) construction, and has provisions for its physical interactions with the surrounding medium. The simulations of protein-mediated DNA looping illustrate how the mechanical properties of DNA may affect the chemical kinetics of DNA-protein interactions (as depicted in Figure 1 below) and thereby regulate gene expression.
Structure, property and processing relationships of all-cellulose composites
Abstract
Cellulose is the main load-bearing component in plant fibre due to its covalent β-1→4-link that bonds glucose molecules into a flat ribbon and a tight network of intra- and intermolecular hydrogen bonds. It is possible to manipulate the intra- and intermolecular hydrogen bonds in order to embed highly crystalline cellulose in a matrix of non-crystalline cellulose, thereby creating self-reinforced cellulose composites. Cellulose is an excellent choice of raw material for the production of sustainable and high-strength composites by self-consolidation of cellulose, since it is readily biodegradable and widely available. Nowadays, the cellulose industry makes extensive use of solvents. A multitude of solvents for cellulose is available, but only a few have been explored up to the semi-industrial scale and can qualify as "sustainable" processes. An effective solvent for cellulose is a mixture of the LiCl salt and the organic solvent N,N-dimethylacetamide (DMAc). Once cellulose has been dissolved, the cellulose/LiCl/DMAc mixture can be precipitated in water. Preliminary results showed that a solution of 1 wt.% kraft cellulose in 8 wt.% LiCl/DMAc that was precipitated in water formed a hydrogel in which cellulose chains were held in their amorphous state and in which no crystalline phase was detected by wide-angle X-ray diffraction (WAXD).
The initially amorphous cellulose started crystallizing by cross-linking of hydrogen bonds between the hydroxyl groups of the cellulose chains when the cellulose gel was dried and the water-to-cellulose ratio reached 7 g/g. The final form was poorly crystalline but distinct from amorphous cellulose. In order to study all-cellulose composites at a fundamental level, model all-cellulose composite films were prepared by partly dissolving microcrystalline cellulose (MCC) powder in an 8% LiCl/DMAc solution. Cellulose solutions were precipitated and the resulting gels were dried by vacuum-bagging to produce films approximately 0.2-0.3 mm thick. Wide-angle X-ray scattering (WAXS) and solid-state 13C NMR spectra were used to characterize molecular packing. The MCC was transformed into relatively slender crystallites of cellulose I in a matrix of paracrystalline and amorphous cellulose. Paracrystalline cellulose was distinguished from amorphous cellulose by a displaced and relatively narrow WAXS peak, by a 4 ppm displacement of the C-4 13C NMR peak, and by values of T2(H) closer to those for crystalline cellulose than for disordered cellulose. Cellulose II was not formed in any of the composites studied. The ratio of cellulose to solvent was varied, with the greatest transformation observed for c < 15%, where c is the weight of cellulose expressed as a percentage of the total weight of cellulose, LiCl and DMAc. The dissolution time was varied between 1 and 48 h, with only slight changes occurring beyond 4 h. Transmission electron microscopy (TEM) was employed to assess the morphology of the composites. During dissolution, MCC in the form of fibrous fragments was split into thinner cellulose fibrils. The composites were tested in tension and fracture surfaces were inspected by scanning electron microscopy (SEM).
It was found that the mechanical properties and final morphology of all-cellulose composites are primarily controlled by the rate of precipitation, the initial cellulose concentration and the dissolution time. All-cellulose composites were produced with a tensile strength of up to 106 MPa, a modulus of up to 7.6 GPa and a strain-to-failure of around 6%. The precipitation conditions were found to play a large role in the optimisation of the mechanical properties by limiting the amount of defects induced by differential shrinkage. Dynamic mechanical analysis was used to study the viscoelasticity of all-cellulose composites over temperatures ranging from -150°C to 370°C. A β relaxation was found between -72 and -45°C and was characterized by an activation energy of ~77.5±9.9 kJ/mol, which is consistent with the relaxation of the main chain through co-operative inter- and intramolecular motion. The damping at the β peak generally decreases with an increase in crystallinity due to enhanced restriction of the molecular motion. For c ≤ 15%, the crystallinity index and damping generally decreased with increasing dissolution time, whereas the size distribution of the mobile entities increased. A simple model of crystallinity-controlled relaxation does not explain this phenomenon.
It is proposed that the enhanced swelling of the cellulose in solution after longer dissolution times provides a more uniform distribution of the crystallites within the matrix, resulting in enhanced molecular constriction of the matrix material. For c = 20%, however, the trend was the opposite when the dissolution time was increased. In this case, a slight increase in crystallinity and an increase in damping were observed, along with a decrease in the size distribution of the mobile entities. This phenomenon corresponds to a re-crystallisation accompanied by poor consolidation of the composite. A relaxation α2 at ~200°C is attributed to the micro-Brownian motion of cellulose chains and is believed to be the most important glass transition for cellulose. The temperature of α2 decreased with an increase in crystallinity, supposedly due to enhanced restriction of the mobile molecular phase. A high-temperature relaxation exhibiting two distinct peaks, α1,2 at ~300°C and α1,1 at ~320°C, was observed. α1,2 is prevalent in cellulose with low crystallinity. A DMA scan performed at a slow heating rate enabled the activation energy for this peak to be determined as negative. Consequently, α1,2 was attributed to the onset of thermal degradation of the surface-exposed cellulose chains. α1,1 was prevalent in higher-crystallinity cellulose and accordingly corresponds to the relaxation of the crystalline chains once the amorphous portion starts degrading, probably due to slippage between crystallites. The relative α1,1/α1,2 peak intensity ratio was highly correlated with the amount of exposed chains on the surface of the cellulose crystallites. Novel aerogels (or aerocellulose) based on all-cellulose composites were also prepared by partially dissolving microcrystalline cellulose (MCC) in an 8 wt.% LiCl/DMAc solution.
Cellulose gels were precipitated and then processed by freeze-drying to maintain the openness of the structure. The density of the aerocellulose increased with the initial cellulose concentration and ranged from 116 to 350 kg/m³. Aerocellulose with relatively high mechanical properties was successfully produced: the flexural strength and modulus of the aerocellulose were measured at up to 8.1 MPa and 280 MPa, respectively.
"Models for Emergency Logistics Planning"
Abstract
Emergency logistics planning is increasingly becoming a crucial issue due to the increase in the occurrence of natural disasters and other crisis situations. An adequate level of mitigation measures and coordinated post-disaster relief logistics management may help to reduce the loss of human lives as well as economic damage. Logistics planning in emergencies involves dispatching commodities to affected areas and evacuating wounded people to emergency shelters. The number of vehicles involved may be very large during ongoing relief operations. Furthermore, time plays a critical role in the logistics plan and directly affects the survival rate in affected areas. This makes the task of logistics planning more complex than conventional distribution problems. As a result, a modelling approach that enables massive dynamic routing of people and commodities is required. In this research study, a dynamic network flow model is developed. A solution framework is presented that exploits currently available efficient simplex implementations, together with a proposed two-stage algorithm to disaggregate the flow variables and generate route information.
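As a rough illustration of the kind of dynamic (time-expanded) network flow model described above, the sketch below builds a tiny two-location, three-period network with NetworkX and solves it with a network-simplex-based min-cost flow routine. All node names, capacities, costs and demands are hypothetical and unrelated to the study's data.

```python
import networkx as nx

# Time-expanded network: each physical node is copied once per period, and an arc
# (u, t) -> (v, t + travel_time) models dispatching supplies during that window.
G = nx.DiGraph()

def tnode(name, t):
    return f"{name}@{t}"

horizon = 3                      # three planning periods (hypothetical)
supply_at_depot = 40             # units of a relief commodity available at t = 0
demand_at_area = 40              # units required at the affected area by t = 2

G.add_node(tnode("depot", 0), demand=-supply_at_depot)
G.add_node(tnode("area", 2), demand=demand_at_area)

# Holding arcs: keep stock at the same location between consecutive periods
for t in range(horizon - 1):
    G.add_edge(tnode("depot", t), tnode("depot", t + 1), capacity=100, weight=0)
    G.add_edge(tnode("area", t), tnode("area", t + 1), capacity=100, weight=0)

# Transport arcs: one-period travel time, limited vehicle capacity per period
for t in range(horizon - 1):
    G.add_edge(tnode("depot", t), tnode("area", t + 1), capacity=25, weight=4)

flow = nx.min_cost_flow(G)       # network-simplex based solver
print(flow[tnode("depot", 0)])   # how much leaves the depot in period 0
```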
"Model-based Therapeutics in Type 1 Diabetes"
Abstract
The incidence of Type 1 diabetes is growing yearly. Worryingly, the aetiology of the disease is inconclusive. What is known is that the total number of affected individuals, as well as the severity and number of associated complications are growing for this chronic disease. With increasing complications due to severity, length of exposure and poor control, the disease is beginning to consume an increasingly major portion of healthcare costs to the extent that it poses major economic risks in several nations.
Since the 1970s, the artificial endocrine pancreas has been heralded as a solution to this problem. However, no commercial product currently exists, and ongoing limitations in sensors and pumps have resulted in, at best, modest clinical advantages over conventional methods of insulin administration or multiple daily injections. With high upfront costs, high costs of consumables, significant complexity, and the extensive infrastructure and support required, these systems and devices are only useful for 2-15% of individuals with Type 1 diabetes. Clearly, there is an urgent need to address the large majority of the Type 1 diabetes population using conventional glucose measurement and insulin administration. However, for these individuals, current conventional or intensive therapies are also failing to deliver recommended levels of glycaemic control.
This presentation describes a model-based approach that uses virtual patients to develop new therapies and approaches, an approach already well validated in other clinical environments. The models derived are physiologically based. The resulting models are used to develop and analyse current and new methods for treatment, as well as new insulin types used in treatment. The results are validated against current clinical results and thinking. The overall outcome is an in-depth analysis of existing treatment, a definition and clear analysis of how much effort (measurement and injection) is required to achieve ADA-specified control levels, and new approaches to best achieve that goal.
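To make the idea of a "virtual patient" concrete, here is a small simulation of the classic Bergman minimal model of glucose-insulin kinetics using SciPy. This is a standard textbook model used purely for illustration; it is not the presenter's validated model, and the parameter values and insulin input below are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative "virtual patient": Bergman minimal model of glucose-insulin kinetics.
p1, p2, p3 = 0.028, 0.025, 1.3e-5   # glucose effectiveness / insulin action (1/min)
Gb, Ib = 4.5, 15.0                  # basal glucose (mmol/L) and insulin (mU/L)

def minimal_model(t, y, insulin_rate):
    G, X = y                         # plasma glucose and remote insulin action
    I = Ib + insulin_rate(t)         # plasma insulin above basal (exogenous input)
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    return [dG, dX]

# Simulate 4 hours with a constant insulin elevation switched on after 60 minutes
insulin_rate = lambda t: 30.0 if t > 60 else 0.0
sol = solve_ivp(minimal_model, (0, 240), [10.0, 0.0], args=(insulin_rate,),
                max_step=1.0)
print(f"glucose after 4 h: {sol.y[0, -1]:.2f} mmol/L")
```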
POWER-LAW CREEP BEHAVIOUR IN MAGNESIUM AND ITS ALLOYS
Creep is a time-dependent deformation phenomenon of materials under stress at elevated temperatures. Creep allows materials to deform plastically and gradually over time, even at stresses well below the yield point or the transformation temperature. The issue of creep is especially significant for magnesium alloys, since they are susceptible to creep from as low as 100 ºC, which inhibits their potential application in areas such as automotive engines. The University of Canterbury has developed a significant level of experience and infrastructure in the area of Electron Backscatter Diffraction (EBSD). EBSD allows microstructures to be characterized by imaging the crystal structure and its orientation at a given point on a specimen surface, and the process can be automated to construct a crystallographic "orientation map" of a given surface. In light of this, an opportunity to study creep phenomena by a novel technique was identified. The technique involves performing a tensile creep test on a specimen but interrupting the process at periodic intervals, at which EBSD is used to acquire crystallographic orientation maps repeatedly on the same surface location. This technique allows simultaneous measurement of the creep rate and observation of the creep microstructure, bringing further insight into the actual mechanisms of creep deformation.
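As a worked illustration of power-law creep analysis (using hypothetical numbers, not data from the talk), the snippet below fits the stress exponent n in the relation strain_rate = A * sigma**n from steady-state creep rates measured at a single temperature.

```python
import numpy as np

# Estimate the power-law stress exponent n at fixed temperature from
# strain_rate = A * sigma**n  (values below are made up for illustration).
stress = np.array([20.0, 30.0, 40.0, 60.0])          # MPa
strain_rate = np.array([2e-9, 1.6e-8, 7e-8, 5e-7])   # 1/s

n, logA = np.polyfit(np.log(stress), np.log(strain_rate), 1)
print(f"stress exponent n ~ {n:.1f}, A ~ {np.exp(logA):.2e}")
# n ~ 3-7 is typical of dislocation (power-law) creep; n ~ 1 suggests diffusional creep
```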
An Integrated Approach to Model and Analyze Mechatronic Systems
A mechatronic system is an integrated electro-mechanical system. It will typically consist of many different types of interconnected components and elements. In view of the dynamic coupling between components, an accurate design of the system should consider the entire system concurrently, rather than using traditional design methodologies, which are single-criterion and sequential. Likewise, the modelling of a mechatronic system should use a multi-domain approach, where all domains (mechanical, electrical, thermal, fluid, etc.) are treated in a unified manner. The presentation will first introduce the field of mechatronics. Then it will present some useful considerations of multi-domain modelling, with examples that justify an integrated approach to modelling. The approach of linear graphs will be presented as an appropriate method for mechatronic modelling. In this context, it will be shown how the familiar techniques of Thevenin and Norton equivalent circuits may be extended to mechanical systems. Several industrial applications of mechatronics have been designed and developed in the Industrial Automation Laboratory at the University of British Columbia under the direction of the speaker. Representative applications involving robotic cutting, inspection, and grading of products will be presented.
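A small illustration of the kind of element-level quantity a multi-domain (linear-graph) model manipulates: the mechanical driving-point impedance (force over velocity) of a mass-spring-damper. The numbers are arbitrary, and a Thevenin-style equivalent would replace a larger subsystem by an equivalent source in series with such an impedance; this is a generic sketch, not the presenter's formulation.

```python
import numpy as np

m, b, k = 2.0, 5.0, 800.0                 # kg, N·s/m, N/m (hypothetical values)
w = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # rad/s
Z = b + 1j * (w * m - k / w)              # impedance F/v seen at the mass
print(np.round(np.abs(Z), 1))             # magnitude dips at resonance sqrt(k/m) = 20 rad/s
```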
Some Uninvestigated Directions of Computational Fluid Dynamics Studies for Flow Phenomena in the Human Body
Image-based CFD has been widely investigated in blood flow and respiratory flow studies. The major interest lies in so-called patient-specific modelling and analysis for diagnosis and treatment planning in clinical medicine. Since the geometry of flow tracts in the body is so complex, no one has been able to build an all-in-one generic model for them. Fluid flow is extremely sensitive to the global shape, physiological conditions, and even very minute irregularities of cardiovascular and respiratory flow tracts. Moreover, we have to place stress on the biological side of the flow phenomena: a living system responds, adapts and even remodels itself according to mechanical conditions as well as changes in them. It is therefore necessary to consider those altering conditions and parameters when we try to draw physiologically and pathologically significant information from computational studies. In the seminar, I would like to show and discuss some uninvestigated aspects of CFD studies of flow phenomena in the body. They are pathological flows, particularly those due to infectious diseases such as malaria; parsimonious modelling of the multiple branching flow tracts of the respiratory system; and a survey of the pathogenesis of cerebral aneurysm.
In conclusion, I would like to stress that image-based modelling in the CFD application to biological flow fields is undoubtedly a mandatory technology for clinical diagnosis and treatment planning, though many problems remain, particularly those related to physiological and/or pathological processes. Extension of the application is necessary to targets that have never been intensively studied by computational mechanics methods.
Robust Dynamic Orientation Sensing Using Accelerometers: Model-based Methods for Head Tracking in AR
Abstract
Augmented reality (AR) systems that use head mounted displays to overlay synthetic imagery on the user's view of the real world require accurate viewpoint tracking for quality applications. However, achieving accurate registration is one of the most significant unsolved problems within AR systems, particularly during dynamic motions in unprepared environments. As a result, registration error is a major issue hindering the more widespread growth of AR applications.
The main objective of this thesis was to improve dynamic orientation tracking of the head using low-cost inertial sensors. The approach taken was to extend the excellent static orientation sensing abilities of accelerometers to the dynamic case by utilising a model of head motion. The inverted pendulum model utilised consists of an unstable, coupled set of differential equations which cannot be solved by conventional solution approaches. A unique method is presented and validated experimentally with data collected using accelerometers and a physical inverted pendulum apparatus.
The key advantage of this accelerometer model-based method is that the orientation remains registered to the gravitational vector, providing a drift-free orientation solution that outperforms existing state-of-the-art gyroscope-based methods. This proof of concept uses low-cost accelerometer sensors and shows significant potential to improve head tracking in dynamic AR environments, such as outdoors.
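The static case that the thesis builds on can be illustrated in a few lines: when the sensor is quasi-static, pitch and roll follow directly from the measured gravity vector. The sketch below is a generic accelerometer tilt computation, not the thesis' dynamic model-based estimator.

```python
import numpy as np

def static_pitch_roll(ax, ay, az):
    """Static orientation from a 3-axis accelerometer (units cancel).

    When the sensor is not accelerating, the measured vector is just gravity,
    so pitch and roll can be recovered directly; this is the static case that
    the thesis extends to dynamic motion using a head-motion model.
    """
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, az)
    return np.degrees(pitch), np.degrees(roll)

# Sensor tilted forward: gravity partly projects onto the x axis
print(static_pitch_roll(ax=0.26, ay=0.0, az=0.97))   # roughly (-15 deg, 0 deg)
```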
MODELLING AND SIMULATION OF AN ORGANIC RANKINE CYCLE GEOTHERMAL POWER PLANT
Abstract
Small-scale binary-cycle geothermal power plants have become noticeably popular in the new fields of Mokai and Rotokawa, New Zealand. Steady-state models of similar designs are often found in the literature; however, the literature on dynamic models of this kind of plant is limited. A dynamic model is very useful to a plant's operators to predict plant performance, as they need to commit their power output to buyers in advance. It also gives them the ability to analyse any room for improvement. In this project, a steady-state model of the Mokai 1 geothermal power plant of Taupo, New Zealand, was developed. Dynamic components were added to the system to convert it from a steady-state model to a dynamic model. The dynamic model takes into account the effect of various internal and external variables, which are necessary to simulate the plant performance with reasonable accuracy. There are two types of Organic Rankine Cycle (ORC) units used in the plant: brine ORC and bottoming ORC. In this presentation, a dynamic model of a brine ORC is presented. This unit uses pentane as the motive fluid and is powered by the separated brine from the geothermal fluid. It is found that the inlet brine properties, including the brine mass flow rate, and the ambient air temperature are the two most important parameters influencing the plant performance. Specifically, the plant performance is highly dependent on ambient air temperature, as the ORC uses an air-cooled condenser. On the other hand, the inlet brine properties change less significantly with respect to time compared to the ambient air temperature. Simulation has been carried out for 1000 hours of operation, where inlet brine properties and ambient air temperature are fed as inputs to the computer model. The simulated plant performance has been compared with actual plant performance data.
Level set based explicit and implicit representation reconstruction schemes in electromagnetic shape-tomography
Abstract
The area of tomographic subsurface imaging is of interest in important applications such as geophysical prospecting, biomedical imaging and landmine detection, to name only a few. This talk is about a couple of methodologies to solve an "approximate" inverse scattering problem, especially useful in limited-data situations, wherein the object shape, location and an approximate (as against a more exact) estimate of the object's interior physical parameter values are reconstructed. A Helmholtz-equation-modelled electromagnetic tomographic nonlinear reconstruction problem is solved for the object boundary and inhomogeneity parameters in a Tikhonov-regularized Gauss-Newton (DTRGN) solution framework. In the present work, the electromagnetic parameter is the normalized (w.r.t. the squared ambient wave-number) difference of the squared wave-numbers between the object and the ambient half-space, and is represented in a suitable global basis, while the boundary is expressed as the zero level set of a signed-distance function. The iterative "shape based" approximate reconstruction schemes broadly fall into two categories.
The objective functional minimized in the first class has as unknowns the coefficients in an explicit parametric representation of the boundary curve(s), while in the latter class the unknowns are the values of a set function representing the image, with the zero level set of that function representing the boundary. While the first (explicit representation) class of schemes has the advantage of fewer unknowns, which is useful in potential three-dimensional reconstructions, the second (implicit representation) class is better suited to handling topological changes in the evolving shape of the boundary. We present two approaches to the solution of this shape-based reconstruction problem, one each in the explicit and implicit classes of schemes. The objective functional w.r.t. the boundary and electromagnetic parameters is set up and the required Frechet derivatives are calculated. Reconstructions using a Tikhonov-regularized Gauss-Newton scheme for this almost rank-deficient problem are presented for 2D test cases of subsurface landmine-like dielectric objects under noisy data conditions. In an explicit B-spline boundary-representation based reconstruction scheme, we evaluate the Frechet derivatives of the scattered fields measured above the surface w.r.t. the control points of the spline, and minimize the objective functional using a Tikhonov regularization method.
On the other hand, with the objective of having an implicit scheme with few unknowns, we use an implicit Hermite-interpolation-based radial basis function (RBF) representation of the boundary curve. An object's boundary is defined implicitly as the zero level set of an RBF fitted to boundary parameters comprising the locations of a few points on the curve (the RBF centres) and the normal vectors at those points. The required Frechet derivatives are calculated.
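For orientation, the core numerical step shared by both schemes, a Tikhonov-regularized Gauss-Newton update, can be sketched generically as below. The toy exponential-fit problem, the damping value and the iteration count are illustrative assumptions, not the talk's forward model or its DTRGN implementation.

```python
import numpy as np

def tikhonov_gauss_newton_step(residual, jacobian, x, lam=1e-2):
    """One damped Gauss-Newton update for min ||r(x)||^2 with Tikhonov-style
    regularisation (generic sketch).

    residual : callable returning the data-misfit vector r(x)
    jacobian : callable returning the Frechet derivative (Jacobian) J(x)
    """
    r = residual(x)
    J = jacobian(x)
    # Solve (J^T J + lam I) dx = -J^T r; the regularisation stabilises the
    # nearly rank-deficient normal equations typical of limited-data tomography.
    A = J.T @ J + lam * np.eye(J.shape[1])
    dx = np.linalg.solve(A, -J.T @ r)
    return x + dx

# Tiny synthetic test: recover x from an overdetermined mildly nonlinear model
true_x = np.array([1.0, 2.0])
t = np.linspace(0, 1, 20)
data = true_x[0] * np.exp(true_x[1] * t)
residual = lambda x: x[0] * np.exp(x[1] * t) - data
jacobian = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

x = np.array([0.5, 1.0])
for _ in range(50):
    x = tikhonov_gauss_newton_step(residual, jacobian, x)
print(x)   # converges towards [1.0, 2.0]
```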
MICROFABRICATION BY MICRO-MACHINING AND MICRO-JETTING
Abstract:
With the continuing trend towards product miniaturization, there is a corresponding demand for microfabrication processes to produce micro-components. Fabrication processes such as micromachining and layer manufacturing have potential for appropriate downscaling as alternative microfabrication processes to those commonly used in the fabrication of semiconductor devices. Micromachining is a material removal process, while layer manufacturing is a material additive process. Some of the challenges and achievements in the development of the control system and processes for microfabrication by micromachining based on EDM, and layer manufacturing based on micro-jetting, will be presented. A multi-process micromachining system has been developed that primarily employs µ-EDM, but in combination with other 'tool-based processes' such as µ-turning and µ-milling, including wire-EDM. On-machine measuring and verification devices are incorporated to achieve high dimensional accuracy of the fabricated micro-structures. With computer numerical control, complex micro-structures can be produced. Components with high aspect ratios, such as micro-cylindrical features of shafts and holes (e.g. less than 6 µm diameter and 1500 µm length) through non-metallic and metallic components, micro-grooves and lenses at nano-finish on glass, and other complex micro-components have been successfully manufactured by this multi-process microfabrication. Another development involves layer-by-layer 3D microfabrication. It is centred on multi-material micro-jetting, using a multi-nozzle drop-on-demand approach. Different types of actuators are employed to impart the micro-jetting to suit the microfabrication requirements. A computer-controlled system with modular multi-nozzle units for dispensing different types of materials and an on-machine vision-based system has been developed. Studies have been conducted on developing and understanding different types of micro-jetting nozzles through simulation and experiment, particularly on the behaviour of the micro-droplets produced. Application developments include microfabrication of micro-devices, scaffold and cell printing, and drug patches.
The adoption of ubiquitous supply chain management
Ubiquitous computing is gradually being recognized as the emerging computing paradigm, which will eventually bring an integration of physical space and electronic space by reducing the gap between them. The technological characteristics of ubiquitous computing tend to be smart, networked and mobile. However, these characteristics alone seem to be insufficient in understanding the various types of u-business models enabled by ubiquitous computing.
We attempt to provide useful information on how to successfully adopt ubiquitous supply chain management (U-SCM). In relation to this, we briefly outline the design of this research. Two different analysis levels - the macro and micro levels - are applied in examining the major success factors influencing U-SCM adoption. The macro-level research provides a general understanding, a unified view and a theoretical background for the next step of this research, while the micro-level research provides a specific and more focused analysis, which is the main research objective.
Developing methods for robust distributed data fusion
Distributed data fusion is a key enabling technology for distributed tracking and sensing. Such networks consist of a set of nodes - some equipped with sensors to collect data, others to fuse data and communicate it to other nodes in the network. By communicating fused estimates rather than raw sensor data, substantial reductions in required bandwidth can be obtained, together with robust, scalable and modular operation. However, a critical limitation of Bayesian data fusion algorithms is that the probability density function - or at least the correlations - between different estimates must be known. If naive assumptions of independence are used, highly inaccurate estimates can result. However, developing a full description of the probability distribution requires all nodes in the network to have full knowledge of the entire state of the network, thus undermining all practical advantages of a distributed fusion system.
One means of overcoming these difficulties is to develop a robust data fusion system which trades optimality for robustness. In this talk I shall describe a technique which replaces the normal product in Bayes' rule with an exponential mixture of the densities. Preliminary results suggest that the algorithm works extremely well, but the full theoretical reason for its robustness is still unclear. I shall also discuss related results in alpha divergence and robust two-class detection problems.
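For two Gaussian estimates, the exponential mixture p1^w * p2^(1-w) reduces to a covariance-intersection style combination of means and covariances. The sketch below illustrates that rule on made-up 2-D estimates; in practice the weight w would be chosen, for example, to minimise the fused covariance.

```python
import numpy as np

def exponential_mixture_fusion(x1, P1, x2, P2, w=0.5):
    """Fuse two Gaussian estimates via the weighted exponential mixture
    p1^w * p2^(1-w) (a covariance-intersection style rule), which remains
    consistent even when the cross-correlation between the estimates is unknown.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    Pi = w * P1i + (1.0 - w) * P2i
    P = np.linalg.inv(Pi)
    x = P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
    return x, P

# Two node estimates of the same 2-D target position with unknown common information
x1, P1 = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.5, 1.5]), np.diag([2.0, 1.0])
x, P = exponential_mixture_fusion(x1, P1, x2, P2, w=0.6)
print(x, np.diag(P))
```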
Computational neuromodulation
Neuromodulators such as dopamine, serotonin, acetylcholine and norepinephrine are signals in the brain that appear to act over broad spatial and temporal scales to regulate fast ongoing neural activity and influence changes in the strengths of connections between neurons. They are known to play critical roles in a wealth of normal function, and are the source of problems and therapies in a wide range of neurological and psychiatric conditions.
Computational treatments of the neuromodulators have suggested that they report specific information associated with optimal prediction, inference and control. For instance, the phasic activity of dopamine neurons is consistent with a model in which they report errors in ongoing predictions of future rewards.
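A minimal temporal-difference (TD) learning sketch makes the reward-prediction-error interpretation concrete: the TD error delta below is the quantity whose phasic behaviour is compared with dopamine activity in such models. The Markov chain, rewards and learning parameters are toy choices for illustration only.

```python
import numpy as np

# Toy Markov chain: the agent walks right along a chain and receives a reward
# of 1 on entering the terminal state; V learns predictions of future reward.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                         # deterministic transition
        r = 1.0 if s_next == n_states - 1 else 0.0
        delta = r + gamma * V[s_next] - V[s]   # reward-prediction error (TD error)
        V[s] += alpha * delta
        s = s_next

print(np.round(V, 2))   # approaches [0.73, 0.81, 0.9, 1.0, 0.0]
```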
Efficient algorithms for computing the best subset regression models for large-scale problems
Several strategies for computing the best subset regression models are proposed. Some of the algorithms are modified versions of existing regression-tree methods, while others appear for the first time. The first algorithm selects the best subset models within a given size range. It uses a reduced search space and is found to computationally outperform the existing branch-and-bound algorithm. The properties and computational aspects of the proposed algorithm are discussed in detail. The second new algorithm preorders the variables inside the regression tree. A radius is defined in order to measure the distance of a node from the root of the tree. The algorithm applies the preordering to all nodes which have a smaller distance than an a priori given radius. An efficient approach to preordering the variables is employed. The experimental results indicate that the algorithm performs best when preordering is employed on a radius lying between one quarter and one third of the number of variables. The algorithm has been applied with such a radius to tackle large-scale subset selection problems that are considered computationally infeasible by conventional exhaustive selection methods. A class of new heuristic strategies is also proposed. The most important is the one that assigns a different tolerance value to each subset model size. This strategy covers all exhaustive and heuristic subset selection strategies. In addition, it can be used to investigate submodels having noncontiguous size ranges. The implementation of this strategy provides a flexible tool for tackling large-scale models.
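For context, plain exhaustive best-subset selection by residual sum of squares looks as follows; this brute-force baseline is what the regression-tree, branch-and-bound and preordering strategies above are designed to accelerate. The synthetic data and size limit are illustrative.

```python
import numpy as np
from itertools import combinations

def best_subsets(X, y, max_size):
    """Exhaustive best-subset selection by residual sum of squares (RSS).

    Returns, for each subset size up to max_size, the best column indices and RSS.
    """
    n, p = X.shape
    best = {}
    for k in range(1, max_size + 1):
        best_rss, best_idx = np.inf, None
        for idx in combinations(range(p), k):
            Xk = X[:, list(idx)]
            coef, rss, rank, _ = np.linalg.lstsq(Xk, y, rcond=None)
            rss = rss[0] if rss.size else np.sum((y - Xk @ coef) ** 2)
            if rss < best_rss:
                best_rss, best_idx = rss, idx
        best[k] = (best_idx, best_rss)
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
y = 3 * X[:, 2] - 2 * X[:, 5] + rng.normal(scale=0.5, size=100)
for k, (idx, rss) in best_subsets(X, y, 3).items():
    print(k, idx, round(rss, 2))    # the best size-2 subset should pick columns (2, 5)
```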
Informatic Support to Urban Water System's Modelling and Real Time Control
The lecture will address the issue of integrated informatic support to the diagnosis, monitoring, testing, planning and design, real-time operation and management of urban water systems consisting of water abstraction, treatment and distribution, wastewater collection and treatment, storm water (and flood) management, urban streams and water amenities. Following a brief introduction to global water issues, the emphasis will be placed on data needs, data sources, data acquisition and processing, and on recent advances in remote sensing, spatial data management, wireless data communication and informatic support for predicting and managing natural disasters. Examples of wireless data communication for real-time operation of water distribution systems and urban flood prediction and management will be used as illustrations of current research, in which strong interaction (cooperation) between water (environmental) and computer science specialists is a prerequisite for success.
Pair Programming, Pair Teaching - and beyond
Abstract
In this seminar we will look at Pair Programming, which is one of the practices of eXtreme Programming. We will explain what motivated Kent Beck to introduce it and address the objections that others have brought forth - mainly that it is not cost-effective. We then move on to Pair Teaching, which is one of the practices of eXtreme Teaching, and see how it was inspired by Pair Programming - but also what has to be different in the teaching context. We will give some examples of how and in which situations Pair Teaching can be used. Finally, we will suggest that Pair Studying might be a technique that students could use with good results, and look at some of its advantages, drawbacks and examples.
Introduction to Software Configuration Management
Abstract
Software Configuration Management (SCM) is a set of techniques that help you to manage changes to software so that you do not end up in chaos. In this lecture, we will look at SCM from the company's perspective. We will see how SCM provides techniques to improve the business value of a company's software production, and how it can lead to faster development, better quality and greater reliability. We also look at what the company needs from its developers in order to be able to obtain consistent quality in the product it delivers.
Constraint Programming Approach to the Protein Structure Prediction Problem
Abstract
Protein structure prediction is one of the challenges of modern science. The talk presents an unusual methodology, based on constraint (logic) programming, for predicting ab initio the tertiary structure of a protein given only its primary sequence. Constraint techniques allow the accuracy and speed of the method to be balanced, and provide an interesting framework for the development of new competitive prediction tools.
Typing Linear Constraints for Moding CLP(R) Programs
Abstract
We present a type system for linear constraints over the reals. The type system is designed to reason about the properties of definiteness and of lower and upper bounds of variables in a linear constraint. Two proof procedures are presented for checking the validity of type assertions. The first one considers lower and upper bound types, and it relies on solving homogeneous linear programming problems. The second procedure, which deals with definiteness as well, relies on computing the Minkowski form of a parameterized polyhedron. The two procedures are sound and complete.
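A simple way to see the connection between bound types and linear programming (a stand-in illustration, not the paper's proof procedure) is to minimise a variable subject to the constraint system: a finite optimum certifies a lower bound, while an unbounded LP certifies that no lower bound exists.

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound(var_index, A_ub, b_ub, n_vars):
    """Greatest lower bound of x[var_index] subject to A_ub @ x <= b_ub,
    or None if the variable is unbounded below."""
    c = np.zeros(n_vars)
    c[var_index] = 1.0                      # minimise the chosen variable
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_vars)   # free real variables
    if res.status == 3:                     # LP reported as unbounded
        return None
    return res.fun                          # optimal value = greatest lower bound

A = np.array([[-1.0, 0.0],    # -x <= -1   (i.e. x >= 1)
              [ 1.0, 1.0]])   #  x + y <= 5
b = np.array([-1.0, 5.0])
print(lower_bound(0, A, b, 2))   # 1.0  -> x has a definite lower bound
print(lower_bound(1, A, b, 2))   # None -> y is unbounded below
```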
Verification of Logic Programs with Delay Declarations
Abstract
Program verification has to exploit knowledge of the program behaviour to ensure that the program terminates normally with correct answers. Prolog, although it is based on first-order logic and has an implementation based on resolution, has a number of practical features that may cause a program to terminate abnormally. Moreover, most Prolog systems have some form of delay declarations (e.g. the Block declarations of SICStus Prolog) that allow the programmer to insist that certain atomic calls are delayed until they are sufficiently instantiated. We have considered a number of aspects of verification relevant to Prolog programs. These include occur-check freedom, flounder freedom, freedom from errors related to built-ins, and termination. The occur-check in unification procedures, which avoids the generation of infinite data structures, is expensive and often omitted. Floundering means that the program stops without any useful solution, and is caused when all the atoms to be executed are postponed because of the delay declarations. Arithmetic operators provided in Prolog often require their arguments to be ground, and if they are not sufficiently instantiated, the program terminates with an error message. In this talk, I will describe existing techniques for verifying Prolog programs without delay declarations and then show how this work can be lifted to programs with delays, such as the Block declarations of SICStus Prolog.
Abstract Domains for Universal and Existential Properties
Abstract
For both compiler optimisation and program debugging, we need knowledge of the expected behaviour of the program, preferably before execution. The theory of abstract interpretation (AI) has successfully been used for constructing provably correct algorithms that can analyse the program code and determine its run-time properties. Central to this is the notion of an abstract domain describing certain program properties. For any program property, the abstract domain (which is a lattice) has to be a compromise between many factors, including the precision and cost of the analysis. Thus, an important line of research in AI is the design of abstract domains for capturing specific concrete properties of the program. Recently, there have been several proposals for constructing new abstract domains from existing ones and, in this talk, I will describe one particular technique, illustrating the ideas and results with examples taken from logic programming.
Thursday, February 26, 2009
Multi-kernel approach to signature verification
Signatures can be considered on-line, as multi-component discrete signals, or off-line, as gray-scale images. The problem of signature verification is considered within the framework of the kernel-based methodology for machine learning, more specifically the Support Vector Machine (SVM) approach. A kernel for a set of signatures can be defined in many ways and, at this moment, there is no method for finding the best kernel. We propose an approach that fuses several on-line and off-line kernels into a novel method embracing the whole training and verification process. Experiments with the public signature database SVC2004 have shown that our multi-kernel approach outperforms both single-kernel and classifier-fusion methods.
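A minimal sketch of the kernel-fusion idea, assuming scikit-learn and toy random features in place of real on-line/off-line signature representations: two RBF kernels are combined as a weighted sum and passed to an SVM as a precomputed kernel. The fusion weight and kernel parameters are arbitrary illustrative values, not those of the proposed method.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
F_online = rng.normal(size=(60, 10))     # stand-in for pen-trajectory (on-line) features
F_offline = rng.normal(size=(60, 30))    # stand-in for gray-scale image (off-line) features
y = rng.integers(0, 2, size=60)          # genuine vs. forged labels (toy)

def rbf(F, gamma):
    """Gram matrix of an RBF kernel over the rows of F."""
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

w = 0.7                                   # fusion weight between the two kernels
K = w * rbf(F_online, 0.1) + (1 - w) * rbf(F_offline, 0.05)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                    # training accuracy on the toy data
```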
Research on Intelligent Transportation Systems in Taiwan. IEEE Systems, Man and Cybernetics Society Distinguished Lecture hosted by SCSIS
In the 21st century, the mainstream of technology development is interdisciplinary integration, together with human-centred technologies (HT) that emphasize friendly service for humans rather than forced adaptation by humans. Intelligent Transportation Systems (ITS), an integrated discipline combining sensing, control, information technology, electronics, communications and traffic management with transportation systems, represent a typical human-centred, large-scale and highly complex dynamic system. ITS aims to provide traveller information to increase safety and efficiency and reduce traffic congestion, and includes the following major studies.
1. Smart Vision: Biological-inspired Intelligent Vision Technology for ITS: Combining brain science and intelligent engineering to develop biological-inspired computer vision (electronic eye) techniques for ITS applications.
2. Smart Interfacing: Intelligent Dialogue System for ITS Information Access: Combining speech recognition and language processing to develop intelligent spoken dialogue system for accessing ITS information.
3. Smart Car: Intelligent Control and Intelligent Wheels (I-Wheels) for Next-Generation Smart Cars: Integrating intelligent control, power electronics and network control techniques to develop intelligent wheels (I-Wheels) and intelligent adaptive cruise control systems for ITS applications.
4. Smart Networking: High-Capacity Communication Networks for ITS: Developing and integrating the wireless communication networks and transportation networks technologies into the broadband wireless ITS network.
5. Smart Agent: Agent-based Software Engineering for ITS: Developing a systematic methodology for building multi-agent systems in an incremental manner based on the notion of trade-off analysis of agents’ goals for ITS applications.
Automated learning of operational requirements
Abstract
Requirements Engineering involves the elicitation of high-level stakeholders' goals and their refinement into operational system requirements. A key difficulty is that stakeholders typically convey their goals indirectly through intuitive narrative-style scenarios of desirable and undesirable system behaviour, whereas goal refinement methods usually require goals to be expressed declaratively using, for instance, a temporal logic. Currently, the extraction of formal requirements from scenario-based descriptions is a tedious and error-prone process that would benefit from automated tool support.
We present an ILP methodology for inferring requirements from a set of scenarios and an initial but incomplete requirements specification. The approach is based on translating the specification and scenarios into an event-based logic programming formalism and using a non-monotonic ILP system to learn a set of missing event preconditions. We then show how this learning process can be integrated with model checking to provide a general framework for the elaboration of operational requirements specifications that are complete with respect to high-level system goals. The contribution of this work is twofold: a novel application of ILP to requirements engineering, which also demonstrates the need for non-monotonic learning, and a novel integration of model checking and inductive logic programming.
Industry specific XML Processing and Storage
Abstract
Many industries have created consortia to define XML formats that are used for exchanging messages. These messages are transmitted in a variety of ways, including file transfer, Web services and message queues. This talk will provide an overview of these message exchange formats and how they are increasingly being used in client-side processing as well as for storage, creating an XML end-to-end style of application. Related topics such as XML schema evolution and the application of constraints through XML schema validation and through languages such as Schematron will be described, along with their advantages and disadvantages.
Wireless Epidemic Spread in Time-Dependent Networks
Abstract
Increasing numbers of mobile computing devices form dynamic networks in everyday life. In such environments, nodes (i.e. laptops, PDAs, smart phones) are sparsely distributed, forming a network. Such networks are often partitioned due to geographical separation or node movement. New communication paradigms using dynamic interconnectedness between people and urban infrastructure lead towards a world where digital traffic flows in small leaps as people pass each other. Efficient forwarding algorithms for such networks are emerging, mainly based on epidemic routing protocols where messages are simply flooded when a node encounters another node. To reduce the overhead of epidemic routing, we attempt to uncover a hidden stable network structure such as social networks, which consist of groups of people forming socially meaningful relationships. I will describe our study of patterns of information flow during epidemic spread in dynamic human networks, which shares many issues with network-based epidemiology. Properties of human contact networks, such as community structure and the weight of interactions, are important aspects of epidemic spread. I will consider a model for space-time paths based on graph evolution: time-dependent networks where links between nodes depend on time windows. I will explore epidemic change by exploiting device connectivity traces from the real world and demonstrate the characteristics of information propagation. I will also show how clustering nodes using these traces reveals human communities. An experimental rather than theoretical approach is used in this study. This research work is part of the EU Haggle project (for details, see my web page: http://www/cl.cam.ac.uk/~ey204).
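As a toy illustration of epidemic (flooding) forwarding over a time-dependent contact trace, the sketch below propagates a message seeded at one node through a made-up list of timestamped encounters; it is not the Haggle trace data or the project's forwarding algorithm.

```python
# Each record is (time, node_a, node_b); the message copies to every node its carrier meets.
contacts = [(1, "A", "B"), (2, "B", "C"), (2, "A", "D"),
            (3, "C", "E"), (4, "D", "E"), (5, "E", "F")]   # hypothetical trace

infected = {"A": 0}                      # node -> time it first received the message
for t, u, v in sorted(contacts):
    if u in infected and infected[u] <= t and v not in infected:
        infected[v] = t
    elif v in infected and infected[v] <= t and u not in infected:
        infected[u] = t

print(infected)   # delivery times show how information leaps from encounter to encounter
```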
Understanding the requirements and limitations of reputation based systems
Abstract
Reputation management, as proposed for dynamic and open systems, aims at providing mechanisms for analysing the behaviour of nodes/agents and distributing this information, so that those judged to be acting against the interests of a community can be caught in time and the impact of their actions limited. We study the assumptions that underpin this decision-making role for reputation management and highlight its limitations with regard to the incentives required to realise the benefits claimed.
Moreover, we show that such benefits may not be realisable without enforcing tight constraints on the behaviour and the expectations of agents, with respect to the definition of the interaction model in the environment and the incentives such a model presents.
A framework for detecting and correcting contention in Home Networks
Abstract
In this talk, I will describe HomeMaestro, a distributed system for monitoring and instrumentation of home networks. By performing extensive measurements at the host level, HomeMaestro infers application network requirements and identifies network-related problems. By sharing and correlating information across hosts in the home network, HomeMaestro automatically detects and resolves contention over network resources among applications, based on predefined policies. Finally, our system implements a distributed virtual queue to enforce those policies by prioritizing applications without additional assistance from network equipment such as routers or access points. In the talk, I will outline the challenges in managing home networks, describe the design choices and the architecture of our system, and highlight the performance of the HomeMaestro components in typical home scenarios.
Characterising Web Site Link Structure
Abstract
The topological structures of the Internet and the Web have received considerable attention. However, there has been little research on the topological properties of individual web sites. In this paper, we consider whether web sites (as opposed to the entire Web) exhibit structural similarities. To do so, we exhaustively crawled 18 web sites as diverse as governmental departments, commercial companies and university departments in different countries. These web sites ranged from a few thousand pages to millions of pages. Statistical analysis of these 18 sites revealed that the internal link structures of the web sites are significantly different when measured with first- and second-order topological properties, i.e. properties based on the connectivity of an individual node or a pair of nodes. However, examination of a third-order topological property, which considers the connectivity between three nodes that form a triangle, revealed a strong correspondence across web sites, suggestive of an invariant. Comparison with the Web, the AS Internet, and a citation network showed that this third-order property is not shared across other types of networks. Nor is the property exhibited in generative network models such as that of Barabási and Albert.
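The first-, second- and third-order measurements referred to above can be reproduced on any graph with NetworkX; the sketch below uses a small random graph rather than a crawled web site, so the numbers are purely illustrative.

```python
import networkx as nx

G = nx.gnm_random_graph(200, 600, seed=42)   # stand-in for a crawled site graph

degrees = dict(G.degree())                                # first-order: node connectivity
assortativity = nx.degree_assortativity_coefficient(G)    # second-order: pairs of nodes
triangles = sum(nx.triangles(G).values()) // 3            # third-order: triples (triangles)
clustering = nx.average_clustering(G)

print(f"mean degree={sum(degrees.values()) / len(G):.2f}, "
      f"assortativity={assortativity:.3f}, "
      f"triangles={triangles}, avg clustering={clustering:.3f}")
```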
Scaling Soft Clustering to Very Large Data Sets
Abstract
There are an increasing number of large unlabeled data sets available, some of which may have billions of examples or feature vectors. Partitioning such data sets into groups can be done by clustering algorithms. However, classical clustering algorithms do not scale well to tens of thousands of examples, much less millions or billions. This talk introduces algorithms that can scale to arbitrarily large data sets; they can be used for data that flows as a stream or for online clustering. We show adaptations that are based on fuzzy c-means, possibilistic c-means and the Gustafson-Kessel clustering algorithms. Approaches applied to the EM algorithm and k-means are also covered. Results on real data sets are given, showing that it is possible to obtain a final data partition which is almost identical to that obtained with the original algorithms on data that fits in the memory of one computer.
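For reference, the batch fuzzy c-means update that the streaming and online variants extend is only a few lines of NumPy. The two-blob data set below is synthetic, and a scalable version would apply the same centre/membership updates chunk by chunk rather than loading all data at once.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain batch fuzzy c-means (the algorithm the scalable variants build on)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)          # soft memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)          # renormalise memberships
    return centers, U

# Two well-separated synthetic blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (500, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(centers, 2))   # one centre near (0, 0), the other near (6, 6)
```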
Triangular clustering in document networks
Abstract
Document networks are characteristic in that a document node, e.g. a webpage or an article, carries meaningful content.
Properties of document networks are not only affected by the topological connectivity between nodes, but are also strongly influenced by the semantic relation between the content of the nodes. We observe that document networks have a large number of triangles and a high clustering coefficient, and that there is a strong correlation between the probability of formation of a triangle and the content similarity among the three nodes involved. We propose the degree-similarity product (DSP) model, which reproduces these properties well. The model achieves this by using a preferential attachment mechanism which favours linkage between nodes that are both popular and similar.
FUZZY KNOWLEDGE REPRESENTATION AND ENVIRONMENTAL RISK ASSESSMENT
Abstract
Knowledge Representation (KR) is a key aspect of the development of Intelligent Systems. Knowledge can be represented by employing probabilistic methods, Bayesian and Subjective Bayesian models, and also Fuzzy Algebra functions and relations.
When Zadeh introduced the first Fuzzy Logic concepts in 1965, he stated that in the near future we would be calculating with words. This has been successfully achieved today: using Fuzzy Algebra tools we are able to give a mathematical meaning to real-world concepts like "hot room", "tall man" or "young man".
Fuzzy Algebra is not only used to model uncertainty or vague concepts; it also enables us to calculate with words, so-called linguistic variables, especially in modern control systems. By using Fuzzy Sets and their corresponding Membership Functions we can build rule-based Intelligent Systems that use fuzzy "IF-THEN" rules. Fuzzy Databases can also be developed to perform SQL queries in a more intelligent manner.
T-Norm and S-Norm operators can be applied to perform conjunction and disjunction operations between Fuzzy Sets, and offer various modelling perspectives on a specific problem.
Finally, Fuzzy c-means and Fuzzy Adaptive clustering offer flexible clustering operations and have advantages over traditional statistical methods.
All of these fuzzy principles have been applied to the development of risk-estimation systems for natural disasters (floods and forest fires). These systems have proven their validity and their potential for wider use in the future.
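To make the membership-function, T-norm and rule machinery above concrete, here is a minimal sketch using a triangular membership function and the common min/max T-norm/S-norm pair; the linguistic terms and the single rule are invented for illustration and do not come from the systems described in the talk.

# Minimal illustration of fuzzy membership functions and min/max T-/S-norms.
# The linguistic terms and the rule are invented for illustration only.

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def t_norm(u, v):   # conjunction (AND), here the minimum T-norm
    return min(u, v)

def s_norm(u, v):   # disjunction (OR), here the maximum S-norm
    return max(u, v)

temp, humidity = 31.0, 20.0
hot = triangular(temp, 25, 35, 45)          # "hot room"
dry = triangular(humidity, 0, 10, 40)       # "dry air"

# Rule: IF the room is hot AND the air is dry THEN fire risk is high
fire_risk = t_norm(hot, dry)
print(hot, dry, fire_risk)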
Wednesday, February 25, 2009
Reliable Circuit Techniques for Low-Voltage Analog Design in Deep Submicron Standard CMOS
Abstract:
An overview is given of circuit techniques dedicated to designing reliable low-voltage (1 V and below) analog functions in deep submicron standard CMOS processes. The challenges of designing such low-voltage and reliable analog building blocks are addressed at both the circuit and physical layout levels. State-of-the-art circuit topologies and techniques (input level shifting, bulk-driven and current-driven approaches, DTMOS), used to build the main analog modules (operational amplifiers, analog CMOS switches), are covered, along with the implementation of MOS capacitors.
Temperature, Stress and Hot Phonons in GaN Electronics and its Interfaces
Abstract
GaN power electronics has great potential for future radar and communication applications. Huge advances in performance have made this new material system superior to GaAs and Si, particularly in terms of power performance. However, there are still significant reliability challenges to be addressed, often related to high device temperatures and large stresses in the devices. These are very difficult to assess because they occur only in sub-micron device regions, typically located near the gate of an HEMT. I report on our work on the development of Raman thermography to assess temperature, stress and hot-phonon effects in AlGaN/GaN HEMTs as well as GaAs pHEMTs, in order to address reliability challenges in power electronics. The techniques developed enable temperature and stress measurement in devices with sub-micron spatial and nanosecond time resolution. Effects of thermal cross-talk, as well as heat transfer across interfaces in the devices, will be discussed, together with hot-phonon effects.
Low Power, Robust SRAMs for nano-metric Technologies
Abstract:
Embedded random access memories can occupy up to 70% of the total area of modern Systems on Chip (SoCs). Embedded SRAMs are the most widely used owing to their robustness compared to DRAMs. Owing to a number of constraints, embedded SRAMs have a significant impact on the power, performance, testability and yield of complex SoCs. In this presentation, some of these issues will be discussed.
Integrated Receiver Systems for Next Generation Telescopes
Abstract
Nonlinear Effects in an OFDM-RoF Link
Abstract
The integration of orthogonal frequency division multiplexing (OFDM) with the radio-over-fiber (RoF) technique has opened up the possibility of cost-effective, high-data-rate, ubiquitous wireless networks. Because an OFDM signal is a combination of multiple subcarriers, the resulting signal exhibits a large peak-to-average power ratio (PAPR). The level of non-linear distortion depends largely on the PAPR of the signal fed to a power amplifier (PA), and an RF power amplifier with a large dynamic range is also desirable, as a high PAPR reduces the power efficiency of the system. Hence, this work investigates the distortion effects of OFDM signals fed via an RF amplifier into an RoF link employing electro-absorption modulators (EAM) as receivers for broadband in-building network applications. Firstly, the adjacent channel power ratio (ACPR), which estimates the degree of spectral re-growth due to the in-band and out-of-band interference resulting from distortion under nonlinear amplification, is examined for the proposed system. Next, the dependence of PAPR and SFDR on the number of OFDM subcarriers over RoF links integrated with an RF amplifier is examined. A complete analytical model of the OFDM-RoF system is also simulated, and close agreement is reported for the distortion effects observed in the system. To the best of our knowledge, the distortion effects of OFDM signals over RoF links integrated with an RF amplifier and EAM have not previously been reported with consideration of the distortion parameters mentioned above. This research will therefore help to determine design requirements for suitable picocell and microcell RoF applications in the future.
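Since the analysis hinges on the PAPR of the OFDM waveform, the short sketch below shows the generic calculation, PAPR = peak power / mean power of the time-domain symbol, and how it tends to grow with the number of subcarriers. It is a standalone illustration, not the paper's OFDM-RoF system model; the QPSK mapping and oversampling factor are assumptions.

# Generic PAPR calculation for a QPSK-modulated OFDM symbol (not the paper's
# specific OFDM-RoF system model).
import numpy as np

def ofdm_papr_db(n_subcarriers, oversample=4, rng=np.random.default_rng(1)):
    # random QPSK symbols on each subcarrier
    symbols = (rng.choice([-1, 1], n_subcarriers) +
               1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
    # zero-pad the spectrum to oversample the time-domain waveform
    spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
    spectrum[:n_subcarriers] = symbols
    x = np.fft.ifft(spectrum)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

for n in (16, 64, 256, 1024):
    print(n, "subcarriers -> PAPR = %.1f dB" % ofdm_papr_db(n))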
In Search of Scalable Models for FET and HEMT Structures
Abstract
FET and HEMT transistors designed to operate up to frequencies of 50 GHz and beyond continue to be used for microwave and millimetre-wave applications such as terrestrial and satellite communication, automotive radar and aerospace systems. These systems typically demand that the transistor-based circuits have excellent wideband performance in terms of one or more of the following parameters: noise figure, linearity, power-added efficiency, or saturated power handling. The optimal transistor geometry varies widely for each scenario, so circuit designers require transistor models for the full range of possible device sizes. Typically, a foundry will allow the unit gate width (UGW) and the number of gate fingers (NOF) to be varied. In this context, scalable models become very attractive, as the measurement and characterisation of every possible geometry can be avoided. The presentation will outline some of the approaches that are proving successful in realising scalable models.
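To give a flavour of what geometry scaling means, the sketch below applies first-order proportionalities that are commonly quoted for FET small-signal elements (transconductance and capacitances roughly proportional to total gate width, gate resistance roughly proportional to UGW/NOF). These simple rules and the reference values are illustrative assumptions only and are not the scalable-model approaches discussed in the presentation.

# First-order geometry-scaling rules often quoted for FET small-signal elements.
# These simple proportionalities and reference values are illustrative
# assumptions only, not the scalable-model approaches presented in the talk.

def scale_small_signal(ref, ref_ugw_um, ref_nof, ugw_um, nof):
    """Scale a reference model extracted at (ref_ugw_um, ref_nof)."""
    width_ratio = (ugw_um * nof) / (ref_ugw_um * ref_nof)
    return {
        "gm_S":    ref["gm_S"] * width_ratio,        # transconductance ~ total width
        "cgs_F":   ref["cgs_F"] * width_ratio,       # gate-source capacitance ~ total width
        "rds_ohm": ref["rds_ohm"] / width_ratio,     # output resistance ~ 1 / total width
        "rg_ohm":  ref["rg_ohm"] * (ugw_um / ref_ugw_um) * (ref_nof / nof),
        # gate resistance ~ UGW / NOF (fingers in parallel)
    }

ref_model = {"gm_S": 0.05, "cgs_F": 0.12e-12, "rds_ohm": 400.0, "rg_ohm": 2.0}
print(scale_small_signal(ref_model, ref_ugw_um=50, ref_nof=4, ugw_um=100, nof=8))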
Antenna Validation Scheme for MIMO Broadcast Wireless Channels with Linear Receivers
Abstract
This seminar considers wireless broadcast systems with multiple antennas at the base station and multiple users in the system. A typical wireless system usually has more users than the number of antennas at the base station, so the base station cannot communicate with all users simultaneously. It therefore has to select a set of users from all active users for communication; this process is known as user scheduling or user selection. Three practical user-selection schemes are discussed in this talk. Once the base station decides which users to serve, the next step is to assign transmit antennas to each user. In traditional systems, the base station selects a transmit antenna, or a subset of transmit antennas, for each user based on channel information fed back by each user. We propose a new decentralized antenna-selection scheme in which each user selects a subset of transmit antennas for itself and sends this information back to the base station. This antenna-selection approach reduces the interference among multiple users and increases the system throughput. We have analysed the performance of this approach with minimum mean square error (MMSE) and zero-forcing (ZF) linear receivers and show that the proposed scheme performs better with both receivers.
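For reference, the zero-forcing and MMSE linear receive filters referred to above take the standard forms sketched below; the channel matrix and noise figures in the example are made up for illustration.

# Standard zero-forcing (ZF) and MMSE linear receive filters for y = Hx + n.
# Channel and noise values below are made up for illustration.
import numpy as np

def zf_filter(H):
    return np.linalg.pinv(H)                       # W_zf = (H^H H)^-1 H^H

def mmse_filter(H, noise_var):
    nt = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt),
                           H.conj().T)             # W_mmse = (H^H H + s^2 I)^-1 H^H

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
y = H @ x + 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print("ZF estimate:  ", zf_filter(H) @ y)
print("MMSE estimate:", mmse_filter(H, noise_var=0.02) @ y)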
Future generation high-performance radio communications circuits in gallium nitride technology
Abstract
Microwave transistors based on gallium nitride (GaN) are an exciting new technology that holds the promise of high power densities, high supply voltages and easy matching. GaN transistors have recently become commercially available, and the industry is poised to grow rapidly in the not-too-distant future. There are, however, a number of issues with this relatively immature technology that are limiting its uptake. These limitations include a high number of defects in the crystal lattice and difficulties in dissipating the heat generated by these devices. This presentation will give an overview of GaN transistors and present some of my recent results.
Analytic Non-linear Circuit Analysis for Resistive FET Mixers
Abstract
Resistive mixers present a difficult circuit-analysis problem, as they involve a small signal incident on a time-varying, weak non-linearity. The most widely used technique for this type of analysis is Harmonic Balance; however, it offers no analytic insight. Other techniques exist, but all current methods either rely on numerically derived solutions or are useful only for conversion gain, not for distortion prediction. It is desirable to find an analytic expression linking individual circuit components to the output effect of concern. This talk looks at double Volterra analysis, including non-linearity extraction, and its range of accuracy for predicting LO leakage in resistive mixers. One potential method of improvement, arising from a simplification of the analytic result, is shown.
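To illustrate what a small signal applied to a time-varying, weak non-linearity means in practice, the sketch below simulates an LO-pumped channel conductance with a weak second-order term and inspects the output spectrum, where the IF product, LO leakage and intermodulation terms appear. The component values are arbitrary, and this is a numerical illustration, not the double Volterra analysis itself.

# Numerical illustration of a resistive mixer as a small RF signal applied to
# an LO-pumped, weakly nonlinear conductance. This is not the double Volterra
# analysis itself; component values are arbitrary.
import numpy as np

fs, n = 1.0e9, 2 ** 14                      # 1 GHz sample rate, 16384 samples
t = np.arange(n) / fs
f_lo, f_rf = 100e6, 110e6                   # the IF appears at 10 MHz

g1 = 5e-3 * (1 + 0.8 * np.cos(2 * np.pi * f_lo * t))   # LO-pumped conductance
g2 = 1e-3 * (1 + 0.8 * np.cos(2 * np.pi * f_lo * t))   # weak, LO-pumped 2nd-order term
v_rf = 0.05 * np.cos(2 * np.pi * f_rf * t)             # small RF drive
i_out = g1 * v_rf + g2 * v_rf ** 2

spectrum = np.abs(np.fft.rfft(i_out * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (10e6, 100e6, 110e6, 120e6):       # IF, LO leakage, RF feedthrough, intermod
    k = np.argmin(np.abs(freqs - f))
    print("%6.0f MHz : %.2e" % (f / 1e6, spectrum[k]))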
Technique 09 Punjab Engineering College 28th and 29th March
Introduction
Punjab Engineering College, Chandigarh takes immense pride in presenting Technique '09, our annual technical extravaganza, this time bigger, better and even more rewarding than before. Spreading its wings throughout North India and serving as a beacon for talented minds in the region, the two-day spectacle, to be held on 28th and 29th March with an expected assemblage of over 4000 students, promises an exhilarating experience. Technical events encompass varied disciplines, from avionics and structure building to circuit designing and coding, leaving no field untouched. Managerial events like Business Plan and Bulls & Bears promise to cater to those with an entrepreneurial aptitude. With enthralling side attractions like 3-D movies and a laser show, the campus promises to become a congregation of keen minds and budding talent.
Preserving Time in Large-Scale Communication Traces
Abstract
Analyzing the performance of large-scale scientific applications is becoming increasingly difficult due to the sheer size of the performance data gathered. Recent work on scalable communication tracing applies online inter-process compression to address this problem. Yet analysis of communication traces requires knowledge about time progression that cannot trivially be encoded in a scalable manner during compression. We develop scalable time-stamp encoding schemes for communication traces. At the same time, our work contributes novel insights into the scalable representation of time-stamped data. We show that our representations capture sufficient information to enable what-if explorations of architectural variations and analysis of path-based timing irregularities while not requiring excessive disk space. We evaluate the ability of several time-stamped compressed MPI trace approaches to enable accurate timed replay of communication events. Our lossless traces are orders of magnitude smaller, if not near constant in size, regardless of the number of nodes, while preserving timing information suitable for application tuning or assessing requirements of future procurements. Our results show that time-preserving tracing without loss of communication information can scale in the number of nodes and time steps, a result without precedent.
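As a toy illustration of why time stamps can be stored compactly, the sketch below delta-encodes event times and run-length-compresses repeated deltas; the encoding schemes developed in the paper are considerably more sophisticated, and the quantum and sample trace here are invented.

# Toy sketch of delta + run-length encoding of event time stamps. The encoding
# schemes developed in the paper are considerably more sophisticated.

def encode(timestamps, quantum=1e-6):
    """Return (first, [(delta_in_quanta, repeat_count), ...])."""
    deltas = [round((b - a) / quantum) for a, b in zip(timestamps, timestamps[1:])]
    runs = []
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return timestamps[0], runs

def decode(first, runs, quantum=1e-6):
    out, t = [first], first
    for delta, count in runs:
        for _ in range(count):
            t += delta * quantum
            out.append(t)
    return out

ts = [0.0, 0.001, 0.002, 0.003, 0.010, 0.011]
first, runs = encode(ts)
print(runs)                    # [[1000, 3], [7000, 1], [1000, 1]]
print(decode(first, runs))     # reproduces the original stamps (to the quantum)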
Cache-Aware Real-Time Scheduling on Multicore Platforms
Abstract:
Multicore architectures, which have multiple processing units on a single chip, have been adopted by most chip manufacturers. Most such chips contain on-chip caches that are shared by some or all of the cores on the chip. To efficiently use the available processing resources on such platforms, scheduling methods must be aware of these caches. In this talk, I will present a method for improving cache performance when scheduling real-time workloads. Additionally, I will discuss our ongoing work on methods to dynamically profile the cache behavior of real-time tasks, which allows our scheduling method to be effectively employed. These scheduling and profiling methods are especially applicable when multiple multithreaded real-time applications exist with large working sets. As this could easily be the case for a multimedia server, we also present a preliminary case study that shows how our best-performing heuristics can improve the end-user performance of video encoding applications; we plan to expand this study in future work.
Massively Parallel Genomic Sequence Search on Blue Gene/P
Abstract:
This paper presents our first experiences in mapping and optimizing genomic sequence search onto the massively parallel IBM Blue Gene/P (BG/P) platform. Specifically, we performed our work on mpiBLAST, a parallel sequence-search code that has been optimized on numerous supercomputing environments. In doing so, we identify several critical performance issues. Consequently, we propose and study different approaches for mapping sequence-search and parallel I/O tasks on such massively parallel architectures. We demonstrate that our optimizations can deliver nearly linear scaling (93% efficiency) on up to 32,768 cores of BG/P. In addition, we show that such scalability enables us to complete a large-scale bioinformatics problem, sequence-searching a microbial genome database against itself to support the discovery of missing genes in genomes, in only a few hours on BG/P. Previously, this problem was viewed as computationally intractable in practice.
High-performance Computing on NVIDIA GPUs
Abstract:
In this talk, I will cover three topics: 1) the NVIDIA GPU computing architecture, 2) the CUDA programming language, and 3) recent work on N-Body simulation. The GPU architecture supports both graphics and non-graphics computation, using an array of custom processors on a single chip. The programming model is neither SIMD nor MIMD, but somewhere in between, where we can exploit the advantages of each. The current performance part has 240 processors running at 1.5 GHz; with dual-issue capabilities, this places the peak performance just under 1 TFLOP. CUDA is NVIDIA's C/C++ programming language for programming the GPU. It has a few extensions that include thread launch/terminate, synchronization, data sharing, and atomic operations. I'll discuss a collaborative effort with Jan Prins (UNC-CH) and Mark Harris (NVIDIA), in which we have written an N-Body simulator using CUDA that runs on NVIDIA GPUs and achieves a sustained computational rate of over 400 GFLOPS. I'll finish with a few demonstration applications, as well as a discussion of how other groups are using NVIDIA GPUs to accelerate their computations. As a postscript, I'll mention the "professor partnership program", through which academics can receive GPU computing hardware at no cost.
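For readers unfamiliar with the kernel, the sketch below is a plain NumPy all-pairs gravitational N-body step, the same O(N^2) computation that a CUDA implementation parallelises across GPU threads. It is a generic illustration unrelated to the authors' code; the softening length, time step and particle counts are arbitrary.

# Plain NumPy all-pairs gravitational N-body step: the same O(N^2) kernel that
# a CUDA implementation parallelises on the GPU. Illustrative only.
import numpy as np

def nbody_step(pos, vel, mass, dt=1e-3, softening=1e-2, G=1.0):
    # pairwise displacement vectors r_ij = p_j - p_i, shape (N, N, 3)
    r = pos[None, :, :] - pos[:, None, :]
    dist2 = (r ** 2).sum(-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                   # no self-interaction
    acc = G * (r * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

rng = np.random.default_rng(0)
N = 1024
pos = rng.standard_normal((N, 3))
vel = np.zeros((N, 3))
mass = np.ones(N) / N
for _ in range(10):
    pos, vel = nbody_step(pos, vel, mass)
print(pos[:2])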
A Hierarchical Programming Model for Sensor Networks
Abstract:
Sensor networks are usually envisioned as wireless networks of embedded computers equipped with several environmental sensors, including, but not limited to, thermistors, photometers, humidity sensors, accelerometers, and GPS. Using these devices, researchers have imagined (and in some cases implemented) applications that perform environmental monitoring, disaster response, and social monitoring. Some day we may even be immersed in a sensor-rich world where users can interact with and query the physical world much as they do with today's Internet. In this presentation, I will give a basic overview of this field, including some technological highlights and recent research trends. Afterwards, I'll focus on programming models for sensor networks. I'll discuss why programming sensor networks is challenging, and how these challenges can be addressed with an appropriate programming model. Specifically, I'll introduce the Hierarchical Programming model, a programming model that uses event-based groups of sensor nodes to task and program the sensor network. I'll then briefly discuss a straightforward implementation of this model. Afterwards, I'll introduce a very different implementation that uses spreadsheet-inspired interfaces to program the sensor network. Finally, I'll discuss some future work with respect to alternative implementations of the model and how this research may be applied to other pervasive-computing fields.
Virtualization and Image Management: Foundation Technologies for IT Simplification, Scalability and Optimization
Abstract:
To realize the Cloud Computing vision we will need to build large-scale distributed computing infrastructures capable of hosting thousands of applications that deliver IT functions to millions of users. Building such infrastructures requires solving three key challenges: scalability, complexity and flexibility. Scalability refers to the ability to harness computing power spread across a large number of distributed resources. Complexity refers to the problem of managing the configuration, security, availability, and performance of a large number of distributed applications and systems. Flexibility, and speed of delivery, refers to the ability to create and deploy new IT services an order of magnitude faster than traditional IT infrastructures achieve today.
In this talk we will show how to leverage virtualization and image management technologies to address these challenges and provide the foundation for cloud computing. In particular, we will discuss the following specific benefits:
- Simplification of software lifecycle management. Virtual images, i.e., pre-built software stacks, will become the new unit of distribution, deployment, licensing, maintenance, archival and service/support. A virtual image contains customization logic and has an associated metadata manifest describing its capabilities and requirements. Within a data center, an Image Repository component will store images and handle image-manipulation operations: it will provide constructs for creating image instances from master images, high-level interfaces for version-based functionality (e.g., check-in and check-out of images), and efficient store and retrieve functions, as well as handling image customization (a minimal sketch of such an interface is given after this list).
- Data center scalability and optimization. Virtualization will extend beyond single systems to multi-system pools consisting of servers, network and storage, thus creating a new platform for integrated management and optimization of data center resources. Because of the decoupling properties of virtualization technology, virtual images can run on any physical resource capable of hosting the image, and can be moved within a homogeneous pool without a change to their configuration. A pool manager controls the physical resources used within a pool, providing functions to the virtual images deployed in the pool. Because a pool deals with homogeneous resources and the workload granularity is at the level of virtual images, the management complexity can remain constant as the number of physical elements in the pool increases.
- Model-based solution composition. A new breed of tools will emerge to allow data center administrators to quickly assemble complex solutions from ready-made building blocks and pre-built templates. An enterprise software solution will consist of one or more virtual images and a model representing their hosting, communication, and performance requirements. A solution may be created manually or with the help of a solution-designer tool. Solutions packaged in this way will be reusable, with multiple instances possibly deployed in the same data center or across different data centers.
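As promised above, here is a rough sketch of what an image-repository interface with master images, versioned check-in/check-out and instantiation could look like; the class and method names are invented for illustration and do not reflect any actual product API.

# Rough sketch of an image repository with master images, versioned check-in /
# check-out, and instantiation. Class and method names are invented for
# illustration and do not reflect any actual product API.
import copy

class ImageRepository:
    def __init__(self):
        self._masters = {}        # name -> list of versions (each a dict of metadata)

    def add_master(self, name, manifest):
        """Register a master image with its capability/requirement manifest."""
        self._masters[name] = [{"version": 1, "manifest": manifest}]

    def check_out(self, name):
        """Return a working copy of the latest version for modification."""
        return copy.deepcopy(self._masters[name][-1])

    def check_in(self, name, working_copy):
        """Store a modified working copy as a new version."""
        working_copy["version"] = self._masters[name][-1]["version"] + 1
        self._masters[name].append(working_copy)

    def instantiate(self, name, customization):
        """Create a deployable instance from the latest master, applying
        instance-specific customization (hostnames, credentials, ...)."""
        instance = copy.deepcopy(self._masters[name][-1])
        instance["customization"] = customization
        return instance

repo = ImageRepository()
repo.add_master("web-stack", {"requires": ["x86", "4GB"], "provides": ["httpd"]})
img = repo.check_out("web-stack")
img["manifest"]["provides"].append("php")
repo.check_in("web-stack", img)
print(repo.instantiate("web-stack", {"hostname": "web01.example.com"}))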
End to End Computing research on Petascale Computers
Abstract:
ORNL has now procured a supercomputer with a peak performance of more than 1 petaflop, over 300 TB of memory, 150K processors, and 10 PB of storage. One of the most daunting aspects of running simulations on this computer is the associated scientific data management, such as parallel I/O and workflow automation. In this talk I will present two middleware systems that we have worked with and the research challenges that remain for each, focusing in particular on the Adaptable I/O System (ADIOS) and the Kepler workflow system. We present many exciting research areas that our end-to-end team is working on, including data staging/offloading, in situ visualization, fast reading of log-based file formats, autonomic workflows, and fast writing of metadata-rich output from 100K processors.
Formal Models of Reproduction: from Computer Viruses to Artificial Life
Abstract
In this thesis we describe novel approaches to the formal description of systems which reproduce, and show that the resulting models have explanatory power and practical applications, particularly in the domain of computer virology. We start by generating a formal description of computer viruses based on formal methods and notations developed for software engineering. We then prove that our model can be used to detect metamorphic computer viruses, which are designed specifically to avoid well-established signature-based detection methods. Next, we move away from the specific case of reproducing programs, and consider formal models of reproducing things in general. We show that we can develop formal models of the ecology of a reproducer, based on a formalisation of Gibson’s theory of affordances. These models can be classified and refined formally, and we show how refinements allow us to relate models in interesting ways. We then prove that there are restrictions and rules concerning classification based on assistance and triviality, and explore the philosophical implications of our theoretical results. We then apply our formal affordance-based reproduction models to the detection of computer viruses, showing that the different classifications of a computer virus reproduction model correspond to differences between anti-virus behaviour monitoring software. Therefore, we end the main part of the thesis in the same mode in which we started, tackling the real-life problem of computer virus detection. In the conclusion we lay out the novel contributions of this thesis, and explore directions for future research.
Download Complete Paper
The Computation of Equilibria in Congestion Networks
Abstract
We study a class of games in which a finite number of "selfish players" each controls a quantity of traffic to be routed through a congestion network in which n directed links are connected from a common source to a common destination. In particular, we investigate its equilibria in two settings.
Firstly, we consider a splittable flow model in which players are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish "leader" player allocates its flow, and then commits to that choice so that other players are compelled to minimise their own cost based on the first player's choice. We find that even in simple networks, the leader can often improve its own cost at the expense of increased social cost. Focussing on a two-player case, we give upper and lower bounds on the worst-case additional cost incurred.
Secondly, we study the computation of pure Nash equilibria for a load-balancing game with variable-capacity links. In particular, we consider a simple generic algorithm for local optimisation, Randomised Local Search (RLS), which simulates a network of selfish users that have no central control and interact only via the effect they have on the latency (cost) functions of the links. It is known that an algorithm performing a series of self-improving moves will eventually reach a Nash equilibrium. Our contribution is to show, furthermore, that Nash equilibria for this type of game are reached quickly by RLS.
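A toy version of the self-improving dynamics analysed in the talk is sketched below: repeatedly pick a user at random and let it migrate to a link with lower latency, so that the process settles at a pure Nash equilibrium. The linear latency function and all parameters are invented for illustration and are not the precise model of the talk.

# Toy randomised local search for a load-balancing game on links with different
# capacities. Latency functions and parameters are invented for illustration;
# the talk's analysis concerns how quickly such dynamics reach a Nash equilibrium.
import random

def latency(load, capacity):
    return load / capacity                 # simple linear latency per link

def rls_load_balance(n_users, capacities, max_steps=100000, seed=0):
    random.seed(seed)
    assign = [random.randrange(len(capacities)) for _ in range(n_users)]
    loads = [assign.count(j) for j in range(len(capacities))]
    for _ in range(max_steps):
        u = random.randrange(n_users)
        cur = assign[u]
        cur_cost = latency(loads[cur], capacities[cur])
        # best response: cost on link j if user u moved there
        best = min(range(len(capacities)),
                   key=lambda j: latency(loads[j] + (j != cur), capacities[j]))
        if latency(loads[best] + (best != cur), capacities[best]) < cur_cost - 1e-12:
            loads[cur] -= 1
            loads[best] += 1
            assign[u] = best
    return assign, loads

assign, loads = rls_load_balance(n_users=30, capacities=[1.0, 2.0, 4.0])
print(loads)   # loads settle roughly in proportion to link capacity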
Complete Paper
Hyperset Approach to Semi-structured Databases and the Experimental Implementation of the Query Language Delta
Abstract
This thesis presents practical suggestions towards the implementation of the hyperset approach to semi-structured databases and the associated query language Delta. This work can be characterised as part of a top-down approach to semi-structured databases, from theory to practice.
Over the last decade the rise of the World-Wide Web has led to the suggestion of a shift from structured relational databases to semi-structured databases, which can query distributed and heterogeneous data with unfixed/non-rigid structure, in contrast to ordinary relational databases. In principle, the World-Wide Web can be considered as a large distributed semi-structured database where arbitrary hyperlinking between Web pages can be interpreted as graph edges (inspiring the synonym ‘Web-like’ for ‘semi-structured’ databases, also called here WDB). In fact, most approaches to semi-structured databases are based on graphs, whereas the hyperset approach presented here represents such graphs as systems of set equations. This is more than just a style of notation; it is a style of thought, and the corresponding mathematical background leads to considerable differences from other approaches to semi-structured databases. The hyperset approach to such databases and to querying them has clear semantics based on the well-established tradition of set theory and logic, and, in particular, on non-well-founded set theory, because semi-structured data allow arbitrary graphs and hence cycles.
The main original part of this work consisted in the implementation of the hyperset Delta query language for semi-structured databases, including worked example queries. The goal was to demonstrate the practical details of this approach and language. This required the development of an extended, practical version of the language, based on the existing theoretical version, together with the corresponding operational semantics. Here we present a detailed description of the most essential steps of the implementation. Another crucial problem for this approach was to demonstrate how to deal in practice with the equality relation between (hyper)sets, which is computationally realised by the bisimulation relation. This expensive procedure, especially in the case of distributed semi-structured data, required some additional theoretical considerations and practical suggestions for efficient implementation. To this end the “local/global” strategy for computing the bisimulation relation over distributed semi-structured data was developed and its efficiency was experimentally confirmed.
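As a conceptual illustration of hyperset equality via bisimulation, the naive sketch below decides whether two set names in a system of set equations denote the same (possibly non-well-founded) set by refining the set of candidate equivalent pairs to a fixed point. The thesis's local/global strategy over distributed data is far more efficient; the example equations are invented.

# Naive bisimulation check over a system of set equations (set name -> members).
# Hyperset equality is bisimilarity of the corresponding graph vertices. The
# thesis implements a much more efficient local/global strategy; this is only a
# conceptual sketch.

def bisimilar(eqns, x, y):
    names = list(eqns)
    # start by assuming every pair is equivalent, then refine to a fixed point
    equiv = {(a, b) for a in names for b in names}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(equiv):
            ok = (all(any((m, n) in equiv for n in eqns[b]) for m in eqns[a]) and
                  all(any((m, n) in equiv for m in eqns[a]) for n in eqns[b]))
            if not ok:
                equiv.discard((a, b))
                changed = True
    return (x, y) in equiv

# Two syntactically different descriptions of the non-well-founded set Omega = {Omega}:
eqns = {"u": ["u"], "v": ["w"], "w": ["v"], "empty": []}
print(bisimilar(eqns, "u", "v"))        # True: u = {u}, v = {w}, w = {v} all denote Omega
print(bisimilar(eqns, "u", "empty"))    # False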
Finally, the XML-WDB format for representing any distributed WDB as a system of set equations was developed, so that arbitrary XML elements can participate and, hence, be queried by the Delta language.
Download Complete paper
AgentMT(TR) - a multi-threaded architecture using Teleo-Reactive plans
Abstract
In this talk we will argue that a multi-threaded control architecture with a library of partial plans, which generalize Nilsson's Teleo-Reactive (TR) procedures, allows smooth integration of three key levels of robot control:
1: Speedy but goal directed response to changing sensor readings
2: Switching between level 1 control procedures as higher level inferred beliefs change
3: Reacting to events and goals by selecting appropriate level 2 plans
A key feature of TR procedure control is that the robot can be helped or hindered in its task and the TR procedure will immediately respond by skipping actions, if helped, or by redoing actions, if hindered. This operational semantics leads naturally to a multi-threaded implementation.
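A minimal single-threaded sketch of the classic TR evaluation rule, scanning the ordered rules from the top and acting on the first whose condition holds, is given below; the robot task and rule set are invented, and AgentMT(TR)'s multi-threaded architecture elaborates considerably on this single loop.

# Minimal sketch of Teleo-Reactive (TR) procedure evaluation: on every cycle,
# scan the ordered rules from the top and perform the action of the first rule
# whose condition holds. The robot task below is invented for illustration.

def tr_step(rules, beliefs):
    for condition, action in rules:          # rules are ordered, most specific first
        if condition(beliefs):
            return action
    return None

# Toy "fetch the block" procedure
rules = [
    (lambda b: b["holding_block"],           "stop"),
    (lambda b: b["at_block"],                "grasp"),
    (lambda b: b["block_visible"],           "move_towards_block"),
    (lambda b: True,                         "search"),
]

beliefs = {"holding_block": False, "at_block": False, "block_visible": False}
print(tr_step(rules, beliefs))               # "search"
beliefs["block_visible"] = True
print(tr_step(rules, beliefs))               # "move_towards_block"
beliefs["at_block"] = True                   # the robot was "helped": it skips ahead
print(tr_step(rules, beliefs))               # "grasp"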
A multi-tasking robot can respond to events: new goal events or just significant belief updates triggered by sensor readings. It then selects an appropriate plan of action for each event using event/plan selection rules.
Hierarchical Graph Decompositions for Minimizing Congestion
Abstract
An oblivious routing protocol makes its routing decisions independently of the traffic in the underlying network. This means that the path chosen for a routing request may depend only on its source node, its destination node, and on some random input. In spite of these serious limitations, it has been shown that there are oblivious routing algorithms that obtain a polylogarithmic competitive ratio with respect to the congestion in the network (i.e., the maximum load of a network link).
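For concreteness, the standard definitions behind "competitive ratio with respect to congestion" can be written in LaTeX as follows (these are generic definitions, not results specific to this talk):

C(f) \;=\; \max_{e}\ \frac{\sum_{p \ni e} f_p}{\mathrm{cap}(e)},
\qquad
\text{competitive ratio of } \mathrm{OBL} \;=\; \sup_{D}\ \frac{C\big(\mathrm{OBL}(D)\big)}{\min_{f \text{ routing } D} C(f)} .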
The Theory of Dialectical Structures – Fundamentals, Applications, Outlook
Abstract
This talk gives a concise outline of the theory of dialectical structures with a special emphasis on applications and future implementation in computer programs. The theory of dialectical structures is an approach to reconstructing and analysing complex and controversial argumentation (Betz 2008; Betz forthcoming; Betz forthcoming). Debates are analysed as "bipolar argumentation frameworks" where attack‐ and support‐relations between the arguments are fully determined by the internal structure of the individual arguments plus the semantic relations which hold between the sentences these arguments are composed of. The basic concept any evaluation of a dialectical structure relies on is the notion of a (dialectically) coherent position a proponent can reasonably adopt in a debate. Building on that notion, the concepts of dialectic entailment and degree of justification, as well as the corresponding discursive aims such as fulfilling a burden of proof or increasing the robustness of one's position can be introduced. With the help of the argument‐mapping software Argunet (www.argunet.org), the theory of dialectical structures can be applied to already reconstructed debates, including, among others, parts of Plato's Parmenides, Descartes' Meditations, Hume's Dialogues Concerning Natural Religion, Larry Laudan's Science and Relativism. As a next step, I'm planning to implement the evaluation algorithms in a software library which can be integrated with Argunet and, more importantly, will allow one to design computer simulations which imitate real debates and evaluate them automatically.