
Tuesday, March 24, 2009

20th National and 9th International ISHMT-ASME Heat and Mass Transfer Conference


January 4-6, 2010


The importance of heat and mass transfer phenomena is ever increasing in established and emerging fields of research. All over the world, intensive research is being carried out on all aspects of heat transfer using theoretical, computational and experimental approaches. In order to foster international collaboration and discussion on heat transfer research, the 20th National and 9th International ISHMT-ASME Heat and Mass Transfer Conference is scheduled to be held between 4 and 6 January 2010 at the Indian Institute of Technology Bombay, Mumbai.

More than 500 leading researchers from academia, R&D organizations and industries from various countries are expected to participate in this conference.


2nd INTERNATIONAL WORKSHOP ON ELECTRON DEVICES AND SEMICONDUCTOR TECHNOLOGY (IEDST 2009)


June 1-2, 2009, Indian Institute of Technology Bombay, Mumbai, India.

The International Workshop on Electron Devices and Semiconductor Technology (IEDST) is a biennial event usually held in conjunction with the IEEE Electron Devices Society AdCom meeting. The first IEDST was held in Beijing, China, in 2007 and was organized by Tsinghua University. The technical programme mainly consists of invited and plenary talks by world-renowned scientists and IEEE EDS Distinguished Lecturers in the broad areas of nano-scale electron devices and technology. There are also a few contributed papers in the form of oral and poster presentations. Continuing this tradition, the second IEDST is being hosted by IIT Bombay, India, during June 1-2, 2009, immediately after the IEEE EDS AdCom meeting scheduled in Mumbai during May 29-31, 2009.

For more information, visit: http://www.ee.iitb.ac.in/iedst/

Natural Laminar Flow Wing Development for Future Aircraft Design: Role of Bypass Transition


Abstract

For transport aircraft, more than half of the total drag is caused by turbulent friction. Natural Laminar Flow (NLF) technology for transport aircraft wing design, which seeks to delay transition, came into being after the advent of supercritical airfoils. It is currently practiced with the help of semi-empirical models based on linear stability theory. The major problem with NLF design is that transition prediction is still not reliable, and experimental validation of the prediction is even less so. In stability theory, attention is focused on finding growing waves as precursors to transition, which is assumed to occur when the amplitude of these waves grows by empirically fixed factors. Stability approaches disregard the role of actual disturbances. Moreover, flow over aircraft often bypasses this route altogether, as is the case in many other technologically important flows. Bypass transition is emerging as an important area of research in recent times. In the talk we will discuss the shortcomings of stability theory and instead present receptivity approaches that theoretically link the transition process with specific types of background disturbances. Different types of bypass transition routes, including spatio-temporally growing disturbances, will be tracked using the receptivity approach. These will be supplemented by high-accuracy computing results for some new wing sections displaying bypass transition. The specific need to study bypass routes for the design of future transport aircraft will be discussed.

The Laminar Flame to Turbulent Flame to Detonation Transition: Studies of Non-Kolmogorov Turbulence and Stochasticity


Abstract


The transition from a propagating subsonic Laminar flame to a high-speed Turbulent flame and then to a supersonic Detonation wave (the LTD transition) involves a series of often dramatic events that change the nature of the reaction wave. Some of the events develop continuously whereas others appear suddenly and with little apparent warning. The LTD transition occurs in highly exothermic energetic materials, for example in hydrogen-air mixtures resulting from gas leaks at hydrogen production and storage facilities, as well as in carbon-oxygen mixtures in white-dwarf stars which, after ignition, become thermonuclear supernovae. This presentation describes the properties of the LTD transition using videos made from numerical solutions of the multidimensional, unsteady, chemically reacting Navier-Stokes equations. The discussion focuses on selected features of the flow, including: formation of a turbulent flame and the nature of the turbulence, creation of hot spots as the origins of detonations, effects of stochastic processes on our ability to make predictions, and comparisons between simulations and experimental data.

Toward Numerical Simulations of Compressible Multiphase Flows with Applications to Shockwave Lithotripsy and Richtmyer-Meshkov Instability

Abstract

Multiphase flows are ubiquitous in nature and in engineering applications, and encompass a range of phenomena as diverse as the dynamics of bubble clouds, the ablation of human tissue by focused ultrasound, and the impact of ocean waves onto naval structures. Though numerical simulations have become common design and analysis tools in fluid dynamics, current multiphase flow algorithms are still in developmental stages, particularly when the flow is compressible.

In the present talk, a compressible multicomponent flow method is presented and applied to study the non-spherical collapse of gas bubbles in the context of shockwave lithotripsy, a medical procedure in which focused shockwaves are used to pulverize kidney stones. The dynamics of non-spherical bubble collapse are characterized, and the damage potential of the shockwaves emitted upon collapse is evaluated by tabulating the wall pressure. In addition, various properties are compared to available experiments and theory, showing good agreement. Furthermore, by using the present results as boundary conditions for simulations of elastic wave propagation within a kidney stone, a new stone comminution mechanism is proposed. Finally, the application of the current method is discussed for simulations of the Richtmyer-Meshkov instability, in which a shock interacts with a perturbed interface.



Simulation and Control of Three-Dimensional Separated Flows around Low-Aspect-Ratio Wings


Abstract

Micro air vehicles often fly with flow separation on their low-aspect-ratio wings due to their unique design and operational environment. However, three-dimensional flows around such vehicles are not as well understood as the classical high-Reynolds-number flows around conventional aircraft. To offer a fundamental understanding of the flow field around small-scale vehicles, a new formulation of the immersed boundary method is developed and used to perform three-dimensional flow simulations around low-aspect-ratio wings at low Reynolds numbers. The study highlights the unsteady nature of separated flows for various aspect ratios, angles of attack, and planform geometries. Following an impulsive start, the short- and long-time behavior of the wake and the corresponding forces exerted on the wing are examined.

At high angles of attack, the leading-edge vortices are observed to detach in many cases, resulting in reduced lift. Inspired by how insects benefit from the added lift due to the leading-edge vortices, actuation is introduced to increase lift by modifying the three-dimensional dynamics of the wake vortices behind translating wings. Successful control setups that achieve lift enhancement by a factor of two in post-stall flows for low-aspect-ratio wings will be presented.

Monday, March 23, 2009

High Performance Computing (HPC) and Lustre


Abstract

Sun manufactures, designs and implements High Performance Computing systems that scale up to the largest systems in the world using standardised technologies and open-source software. This talk presents Sun's approach and concentrates on one critical component - the parallel file system, Lustre. A parallel file system that scales must implement novel methods for coping with massive IO rates and vast numbers of clients and servers, as well as handling the range of failures that occur regularly in very large hardware deployments. The talk then looks at some of the methods implemented in the file system to cope with massive scale.

Managing Uncertainty in Networks


Abstract

We live in an uncertain world. One of the few things we do know is that when we make a forecast of what will happen it will inevitably be wrong. This talk considers uncertainty as part of the network planning process where traffic volumes, traffic mixes and distribution all impact the design of networks. We look at some of the tools and techniques that could be employed to manage uncertainty and in doing so revisit some of the ideas of the physicists Boltzmann and Newton.

Temperature, Stress and Hot Phonons in GaN Electronics and its Interfaces


Abstract

GaN power electronics has great potential for future radar and communication applications. Huge advances in performance have made this new material system superior to GaAs and Si, in particular in terms of power performance. However, there are still large reliability challenges which need to be addressed, often related to high device temperatures and large stresses in the devices. These are very challenging to assess as they are present only in sub-micron device regions, typically located near the gate of an HEMT. I report on our work on the development of Raman thermography to assess temperature, stress, and hot-phonon effects in AlGaN/GaN HEMTs as well as GaAs pHEMTs, in order to address reliability challenges in power electronics. The techniques developed enable temperature and stress measurement in devices with sub-micron spatial and nanosecond time resolution. Effects of thermal cross-talk, as well as heat transfer across interfaces in the devices, will be discussed, together with hot-phonon effects.

Wednesday, March 18, 2009

On the integration of economic principles in grid resource management


Abstract

Grid computing has emerged as a paradigm for integrating and sharing IT resources and services across administrative and organizational boundaries. Large-scale grids such as those realized by the European “Enabling Grids for E-Science (EGEE)” project and the TeraGrid project in the US, now integrate compute and storage resources hosted by hundreds of research institutions across the globe. Grid technology allows researchers to tackle computational problems on an unprecedented scale. In addition, the technology delivers a platform for scientific collaboration and allows for more efficient use of resources by integrating and sharing them on a large scale. However, the developments in grid technology have given limited consideration to the realization of a model in which resources are shared among users and providers that have a potentially loose prefatory relationship.

As a consequence, mechanisms are at present missing that deliver clear incentives to resource owners to share their resources openly and for users to use resources in a well-considered manner. In addition, a flexible and efficient mechanism is currently missing that determines who obtains access to resources and services on a grid, at what time, and at what cost. As a result, the full potential of grid technology is not realized and many opportunities for increasing the efficiency of these systems remain.

Secure Electronic Voting


Abstract

Elections need to be trustworthy, and to be seen to be trustworthy, in order for the electorate to have confidence in their outcomes. The introduction of technology into the electoral process brings potential new benefits, but may also increase the risk that accidental flaws or security weaknesses in the equipment leave an election open to tampering. Voting systems, whether run manually or on machines, should provide voters with the ability to cast a private vote, and to have confidence that their vote is really included in the final tally.

The Prêt à Voter electronic voting system is designed to provide these properties, and some further ones known as end-to-end verifiability, not currently present in standard UK elections: a receipt for the voters so that they can check their vote has been included in the tally, and can prove if it has not; and publication of the votes so that the count can be independently checked. This is achieved by making public all the stages in the processing of the votes, enabling the election to be audited independently. All this is possible while maintaining secrecy of the vote. Although electronic support for the election is necessary, the electronic components do not themselves need to be trusted because their outputs can be independently audited.

A coinductive approach to exact real number computation


Abstract


Coinduction is a concept of increasing importance in mathematics and computer science, in particular in non-wellfounded set theory, process algebra and database theory. Probably the best-known example of a coinductive relation is bisimilarity of processes. In this talk I present an application of coinduction to exact real number computation. Intuitively, coinduction is about deconstructing, or observing data, or describing how a process evolves. This is in contrast to induction, which is about constructing data. A real number, say pi, is a typical candidate for coinduction. We cannot construct pi (exactly), but we can make observations about it, by, for example, successively computing digits of pi. Similarly, a computable continuous function on the real numbers can be viewed as a coinductive process: we successively compute digits of the output while reading digits of the input.

In fact, it turns out that the computation of a single correct output digit is an inductive process. Therefore, continuous real functions are described by a combination of induction and coinduction.
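
To make the digit-stream picture concrete, here is a small Python sketch (not the formal development of the talk): a real in [-1, 1] is represented as a lazily generated stream of signed binary digits, an approximation is obtained by observing finitely many digits, and negation is a stream transformer that emits output digits while reading input digits. The interval [-1, 1], the digit set {-1, 0, 1} and the helper names are choices made for this illustration only.

# A toy illustration of the coinductive view of exact real arithmetic:
# a real x in [-1, 1] is an infinite stream of signed binary digits
# d1, d2, ... in {-1, 0, 1} with x = sum(d_i * 2**-i).  We never build x;
# we only observe finitely many digits on demand (Python generators are lazy).
from fractions import Fraction
from itertools import islice

def digits(x: Fraction):
    """Corecursively emit signed binary digits of x in [-1, 1]."""
    while True:
        if x <= Fraction(-1, 2):
            d = -1
        elif x >= Fraction(1, 2):
            d = 1
        else:
            d = 0
        yield d
        x = 2 * x - d          # the 'tail' of the stream: still in [-1, 1]

def negate(stream):
    """A stream transformer: output digits are produced while input digits are read."""
    for d in stream:
        yield -d

def approximate(stream, n: int) -> Fraction:
    """Observe the first n digits, giving an approximation within 2**-n."""
    return sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(islice(stream, n)))

if __name__ == "__main__":
    x = Fraction(1, 3)
    print(approximate(digits(x), 20))            # close to 1/3
    print(approximate(negate(digits(x)), 20))    # close to -1/3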

Hierarchical Graph Decompositions for Minimizing Congestion


Abstract

An oblivious routing protocol makes its routing decisions independent of the traffic in the underlying network. This means that the path chosen for a routing request may only depend on its source node, its destination node, and on some random input. In spite of these serious limitations it has been shown that there are oblivious routing algorithms that obtain a polylogarithmic competitive ratio with respect to the congestion in the network (i.e., the maximum load of a network link).
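
As a toy illustration of these two definitions (not of the hierarchical decompositions of the talk), the sketch below routes demands on a small ring with an oblivious rule that depends only on source and destination, and measures the resulting congestion; the ring size and demand set are invented for the example.

# Toy illustration of "oblivious" routing and congestion on a ring of N nodes.
# An oblivious rule picks each path from (source, destination) alone -- here,
# "always walk clockwise" -- regardless of what other traffic is present.
# Congestion is the maximum load placed on any single link.
from collections import Counter

N = 8  # ring size (illustrative)

def clockwise_path(s: int, t: int):
    """Edges used when routing s -> t clockwise (an oblivious choice)."""
    edges = []
    while s != t:
        edges.append((s, (s + 1) % N))
        s = (s + 1) % N
    return edges

def congestion(demands, path_fn):
    load = Counter()
    for s, t in demands:
        for e in path_fn(s, t):
            load[e] += 1
    return max(load.values()) if load else 0

if __name__ == "__main__":
    # A hypothetical demand set: everyone sends to the next node counter-clockwise.
    demands = [(i, (i - 1) % N) for i in range(N)]
    print("oblivious clockwise congestion:", congestion(demands, clockwise_path))
    # An adaptive (demand-aware) router could instead send each request one hop
    # counter-clockwise, achieving congestion 1.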

AgentMT(TR) - a multi-threaded architecture using Teleo-Reactive plans


Abstract

In this talk we will argue that a multi-threaded control architecture with a library of partial plans, which are a generalization of Nilsson's Teleo-Reactive (TR) procedures, allows smooth integration of three key levels of robot control:

1. speedy but goal-directed response to changing sensor readings;
2. switching between level-1 control procedures as higher-level inferred beliefs change;
3. reacting to events and goals by selecting appropriate level-2 plans.

A key feature of TR procedure control is that the robot can be helped or hindered in its task and the TR procedure will immediately respond by skipping actions, if helped, or by redoing actions, if hindered. This operational semantics leads naturally to a multi-threaded implementation. A multi-tasking robot can respond to events: new goal events or just significant belief updates triggered by sensor readings. It then selects an appropriate plan of action for each event using event/plan selection rules. We conclude by describing our top-level control architecture, which borrows from classic BDI architectures, particularly AgentSpeak(L), but is multi-threaded and uses TR plans.
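
A minimal sketch of the TR idea, under the usual reading of Nilsson's procedures as an ordered list of condition-action rules that is continuously re-evaluated; the toy "fetch the ball" rules and belief state below are hypothetical and not taken from AgentMT(TR).

# Minimal sketch of a Teleo-Reactive (TR) procedure: an ordered list of
# (condition, action) rules, re-evaluated continuously against current beliefs.
# On every cycle the FIRST rule whose condition holds fires, so if the robot is
# "helped" (a later condition suddenly holds) it skips ahead, and if "hindered"
# (an earlier condition becomes false again) it automatically redoes earlier actions.

def tr_step(rules, beliefs):
    """Return the action of the first rule whose condition holds."""
    for condition, action in rules:
        if condition(beliefs):
            return action
    return None

# A hypothetical "fetch the ball" task over a toy belief state.
rules = [
    (lambda b: b["holding_ball"], "stop"),
    (lambda b: b["at_ball"],      "grab"),
    (lambda b: b["ball_visible"], "move_towards_ball"),
    (lambda b: True,              "search"),
]

if __name__ == "__main__":
    beliefs = {"holding_ball": False, "at_ball": False, "ball_visible": False}
    print(tr_step(rules, beliefs))            # 'search'
    beliefs["ball_visible"] = True
    print(tr_step(rules, beliefs))            # 'move_towards_ball'
    beliefs["at_ball"] = True                 # helped: the ball rolled to the robot
    print(tr_step(rules, beliefs))            # 'grab' -- the earlier step is skipped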

Beyond Weighted Voting Games


Abstract

Weighted voting games are a simple but useful class of coalitional games that can model many real-life settings ranging from political decision-making to multi-agent coordination. However, the traditional model of weighted voting games does not allow for multiple coalitions to form simultaneously, or for agents to be uncertain about each other's weights. In this talk, we discuss extensions of weighted voting games that can handle these issues. Our main focus will be the stability of the resulting coalitions and coalition structures. We will also discuss computational issues that arise in such games.
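
For readers unfamiliar with the base model, the sketch below spells out a standard weighted voting game and a naive (exponential-time) Banzhaf power computation; the weights and quota are illustrative, and the extensions discussed in the talk are not modelled.

# A weighted voting game [q; w1, ..., wn]: a coalition wins iff its total
# weight reaches the quota q.  The (normalised) Banzhaf index counts how often
# a player is critical, i.e. turns a winning coalition into a losing one by leaving.
from itertools import combinations

def wins(coalition, weights, quota):
    return sum(weights[i] for i in coalition) >= quota

def banzhaf(weights, quota):
    n = len(weights)
    counts = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            s = set(coalition)
            for i in coalition:
                if wins(s, weights, quota) and not wins(s - {i}, weights, quota):
                    counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts] if total else counts

if __name__ == "__main__":
    # Hypothetical game [5; 4, 3, 2]: no single player wins alone,
    # but any two players together do, so all three are equally powerful.
    print(banzhaf([4, 3, 2], 5))   # [1/3, 1/3, 1/3]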

Arguing Agents Competition


Abstract

In this talk we present the Arguing Agents Competition (AAC) project. It is an effort by a community of argumentation researchers and MSc/PhD students from various universities to create an open, competitive environment in which heterogeneous agents argue against one another. AAC is designed to provide an open forum in which agents can compete using various argument dialogue protocols, where moves and arguments can be evaluated through a variety of argument computation engines. We see AAC as an opportunity to build a new tool that researchers in argumentation can exploit in order to advance the state of the art in this field. As the project is ambitious and at an early stage of development, it is also hoped that this talk will stimulate discussion around the issue.

Second-Order Quantifier Elimination


Abstract

In the investigation of logical methods and their application in Computer Science, and other fields, there is a tension between, on the one hand, the need for representational languages strong enough to expressively capture domain knowledge, the need for logical formalisms general enough to provide several reasoning facilities relevant to the application, and on the other hand, the need to ensure reasoning facilities are computationally feasible. Second-order logics are very expressive and allow us to represent domain knowledge with ease, but there is a high price to pay for the expressiveness. Most second-order logics are incomplete and highly undecidable.

It is the quantifiers which bind relation symbols that make second-order logics computationally unfriendly. It is therefore desirable to eliminate these second-order quantifiers, when this is mathematically possible; and often it is. If second-order quantifiers are eliminable we want to know under which conditions, we want to understand the principles and we want to develop methods for second-order quantifier elimination. In this talk we introduce the problem of second-order quantifier elimination and discuss two existing methods which have been automated: direct methods based on a result of Ackermann and clausal methods based on saturation with resolution. Various examples of applications will be given. We focus in more detail on modal correspondence theory where second-order quantifier elimination methods are being successfully used to automatically solve the correspondence problem for large classes of modal axioms and rules.
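
As a pointer to how the direct methods work, one textbook form of Ackermann's lemma, together with the classical correspondence example it solves, is sketched below; this is offered as background and is not necessarily the exact formulation used in the talk. If the predicate variable $P$ does not occur in $A$ and occurs only negatively in $B$, then
\[
\exists P\,\bigl[\forall \bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \wedge B\bigr] \;\equiv\; B[P(\bar{t}) := A(\bar{t})].
\]
For the modal axiom $\Box p \rightarrow p$, the frame condition is $\forall P\,\forall x\,\bigl[\forall y\,(R(x,y) \rightarrow P(y)) \rightarrow P(x)\bigr]$; negating gives $\exists P\,\exists x\,\bigl[\forall y\,(R(x,y) \rightarrow P(y)) \wedge \neg P(x)\bigr]$, and applying the lemma with $A(y) := R(x,y)$ leaves $\exists x\,\neg R(x,x)$, so the axiom corresponds to reflexivity, $\forall x\,R(x,x)$.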

Uncoordinated two-sided matching markets


Abstract

Various economic interactions can be modeled as two-sided markets. A central solution concept for these markets is the stable matching, introduced by Gale and Shapley. It is well known that stable matchings can be computed in polynomial time, but many real-life markets lack a central authority to match agents. In those markets, matchings are formed by actions of self-interested agents. Knuth introduced uncoordinated two-sided markets and showed that the uncoordinated better response dynamics may cycle. However, Roth and Vande Vate showed that the random better response dynamics converges to a stable matching with probability one, but did not address the question of convergence time. We give an exponential lower bound for the convergence time of the random better response dynamics in two-sided markets. We also extend the results for the better response dynamics to the best response dynamics, i.e., we present a cycle of best responses, and prove that the random best response dynamics converges to a stable matching with probability one, but its convergence time is exponential. Additionally, we identify the special class of correlated matroid two-sided markets with real-life applications for which we prove that the random best response dynamics converges in expected polynomial time.
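
A small Python sketch of the random better-response (blocking-pair) dynamics in a toy marriage market may help fix ideas: repeatedly pick a random blocking pair and match them, divorcing their current partners. The two-man, two-woman preference lists are invented for the illustration and are far too small to exhibit the exponential convergence times discussed in the talk.

# Sketch of the random better-response dynamics in a marriage market:
# repeatedly pick a random blocking pair (m, w) -- a man and a woman who prefer
# each other to their current partners -- and match them, divorcing their old
# partners.  With random choices this reaches a stable matching with
# probability one (Roth and Vande Vate), though possibly slowly.
import random

men_pref   = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}     # hypothetical preferences
women_pref = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}

def prefers(pref, new, current):
    """True if `new` is ranked above `current` (None = unmatched)."""
    return current is None or pref.index(new) < pref.index(current)

def blocking_pairs(match):   # match: man -> woman (or None)
    husband_of = {w: m for m, w in match.items() if w is not None}
    pairs = []
    for m in men_pref:
        for w in women_pref:
            if match.get(m) != w \
               and prefers(men_pref[m], w, match.get(m)) \
               and prefers(women_pref[w], m, husband_of.get(w)):
                pairs.append((m, w))
    return pairs

def random_dynamics(seed=0):
    random.seed(seed)
    match = {m: None for m in men_pref}          # start with everyone unmatched
    while True:
        blocking = blocking_pairs(match)
        if not blocking:
            return match                         # no blocking pair: stable
        m, w = random.choice(blocking)
        for m2, w2 in list(match.items()):       # divorce w's current husband
            if w2 == w:
                match[m2] = None
        match[m] = w                             # m's old wife becomes unmatched

if __name__ == "__main__":
    print(random_dynamics())   # the unique stable matching {'m1': 'w2', 'm2': 'w1'}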

Stackelberg Strategies and Cost-Balancing Tolls for Atomic Congestion Games


Abstract

In this talk, we discuss two natural approaches to reducing the inefficiency of (pure Nash) equilibria when self-interested atomic players route unsplittable traffic through a congested network. The first approach is to let a fraction of the players be coordinated by a Stackelberg strategy, which selects their paths so as to minimize the inefficiency of the worst equilibrium reached by the selfish players. We investigate the efficiency of three natural Stackelberg strategies for two orthogonal classes of congestion games, namely games with affine latency functions and parallel-link games. The second approach is to introduce edge tolls that influence the players' selfish choices and hopefully induce an optimal configuration. We focus on a natural toll mechanism called cost-balancing tolls. For symmetric network congestion games, we show how to compute in linear time a moderate set of cost-balancing tolls that induce the optimal configuration as an equilibrium of the modified game. For series-parallel networks with increasing latency functions, we prove that the optimal configuration is induced as the unique equilibrium of the game with the corresponding cost-balancing tolls.
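
For readers new to the model, the sketch below sets up a toy atomic congestion game on two parallel links with affine latencies and finds a pure Nash equilibrium by best-response dynamics, which converges because such games admit a Rosenthal potential. The link parameters and player count are invented, and neither the Stackelberg strategies nor the cost-balancing tolls of the talk are implemented here.

# Toy atomic congestion game on parallel links with affine latencies
# l_e(x) = a_e * x + b_e, where x is the number of players on link e.
# Best-response dynamics converges for such games (Rosenthal potential),
# and the resulting profile is a pure Nash equilibrium.
from collections import Counter

links = [(1.0, 0.0), (0.0, 1.5)]      # hypothetical (a_e, b_e) per link
n_players = 4

def latency(e, load):
    a, b = links[e]
    return a * load + b

def best_response_dynamics():
    choice = [0] * n_players                    # everyone starts on link 0
    changed = True
    while changed:
        changed = False
        for i in range(n_players):
            load = Counter(choice)
            current = choice[i]
            def cost(e):
                # load seen by player i if it plays e (note the +1 for joining)
                x = load[e] + (0 if e == current else 1)
                return latency(e, x)
            best = min(range(len(links)), key=cost)
            if cost(best) < cost(current) - 1e-12:
                choice[i] = best
                changed = True
    return choice

if __name__ == "__main__":
    eq = best_response_dynamics()
    print("equilibrium loads per link:", Counter(eq))   # here: 1 on link 0, 3 on link 1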

Deductive Temporal Reasoning with Constraints


Abstract

Often when modelling systems, physical constraints on the resources available are needed. For example, we might say that at most 'n' processes can access a particular resource at any moment or exactly 'm' participants are needed for an agreement. Such situations are concisely modelled where propositions are constrained such that at most 'n', or exactly 'm', can hold at any moment in time. This talk describes both the logical basis and a verification method for propositional linear time temporal logics which allow such constraints as input. The complexity of this procedure is discussed and case studies are examined. The logic itself represents a combination of standard temporal logic with classical constraints restricting the numbers of propositions that can be satisfied at any moment in time. We discuss restrictions to the general case where only 'exactly one' type constraints are allowed and extensions to first-order temporal logic.

Congestion games with faulty or asynchronous resources


Abstract

We introduce the concepts of resource failures and asynchronous task execution in congestion games. We present two models -- congestion games with load-dependent failures (CGLFs) in which resources may fail to execute their assigned tasks with some (congestion-dependent) probability, and asynchronous congestion games (ACGs) in which resources execute their assigned tasks not simultaneously but in a randomly chosen order. The random order of task execution reflects, for instance, a situation where players and resources are the elements of an asynchronous distributed system, in which each process has its own independent clock. As it turns out, these new settings lead to interesting observations about the interplay between the need to deal with failures or asynchronous nature of processes, and the emergence of congestion in non-cooperative systems. Indeed, the classical idea of using several resources in order to overcome the possibility of failure or to decrease the expected time of task completion, may result in a high congestion, hurting all agents in the system. Although, as we show, CGLFs and ACGs do not admit a potential function and therefore are not isomorphic to classic congestion games, we prove the existence of a pure strategy Nash equilibrium in the above classes of games. We also develop polynomial time algorithms for computing a pure strategy equilibrium in these games.

TAO - Transitioning Legacy Applications to Ontologies


Abstract

Semantic-based software engineering has attracted a lot of attention recently from the Semantic Web community, and from the Software Engineering community who want to understand how to migrate existing, legacy systems to services that can exploit the use of semantics. However, to facilitate such a transition, a methodology is desirable that considers the challenges of knowledge acquisition and modeling given the existence of service descriptions, code, and support documentation. In this seminar, I will introduce the need for a methodology, and highlight some of the challenges in modeling the ontologies for both service annotation and support-document annotation, and then introduce the use of both formal models and procedural models. As the use of a Semantic Web Service Framework is crucial in the transitioning process, we also present two contrasting views of the most significant efforts to date: OWL-S and WSMO.

Reproduction, Rosen's Paradox and Computer Viruses


Abstract

Reproduction is a common phenomenon. Biological life was once thought to be the only area in which reproduction could be observed, but many other complex systems have apparent reproducing structures. For example, reproduction can be observed in many artificial life systems, such as cellular automata and digital organism simulators. There are even more exotic forms of reproduction, including computer viruses, memes (in psychology), firms (in economics) or even photocopies. In this talk we describe various examples of reproduction, and show that reproducers can be viewed as being assisted, or unassisted, by other entities in the environment. However, Rosen's paradox precludes the existence of unassisted reproduction. How then, can we reconcile apparently-unassisted reproduction with Rosen's paradox? We give an answer to this question using formal models of reproduction based on Gibson's affordance theory, and show how the same models can be applied to the real-life problem of computer virus detection.

Chloride-Induced Corrosion in Post-Tensioned Concrete Beams with Poor Grouting Condition


Abstract

Post-tensioned prestressed concrete (PC) bridges properly designed and constructed generally have been considered highly durable because the prestressing tendons could be protected from corrosion by filling the duct with cement grout. In recent years, however, deterioration problems have been discovered in some existing PC bridges, raising serious concerns about the long-term durability of PC bridges. The major cause of deterioration in PC bridges is the corrosion of prestressing tendons, which affects structural performance in terms of serviceability and load-carrying capacity. In Japan, the major factor causing deterioration of PC bridges is chloride attack followed by poor grouting condition. The objectives of this study are to clarify the effect of grouting condition on corrosion of sheath and prestressing tendon and their influence on the deterioration of the load-carrying capacity of PC beams. To simulate deterioration of PC beams in a short period, an accelerated corrosion testing method (ACTM) was adopted. A series of accelerated corrosion tests were carried out in this study to clarify the influence of grouted ratios in a sheath on the corrosion of the sheath and the prestressing tendon. After the accelerated corrosion tests, the mechanical behaviour of the deteriorated PC beams was investigated under flexural loading.

Avalanches of fluid in the laboratory


Abstract:

The objective of this work was to increase our understanding of gravity-driven geophysical flows by developing a new platform to simulate avalanches of fluid in the laboratory. To simulate flow avalanches in the laboratory, we created a unique experimental setup consisting of a metallic frame supporting a reservoir, an inclined aluminum plane, and a horizontal run-out zone.

At 6-m long, 1.8-m wide, and 3.5-m high, the structure is probably the largest laboratory setup of its kind in the world. In a dam-break experiment, up to 120 liters of fluid can be released from the reservoir down the 4-m long inclined plane. We precisely control initial and boundary conditions. To measure the free-surface profile, a novel imaging system consisting of a high-speed digital camera coupled to a synchronized micro-mirror projector was developed.

The camera records how regular patterns projected onto the surface are deformed when the free surface moves. We developed algorithms to post-process the image data, determine the spreading rate, and generate whole-field 3-dimensional shape measurements of the free-surface profile. We compute the phase of the projected pattern, unwrap the phase, and then apply a calibration matrix to extract the flow thickness from the unwrapped phase. 56 different flow configurations, with a wide range of inclinations, were finally tested with Newtonian and viscoplastic fluids. For each test, the evolution of the free surface was recorded in 3 dimensions. Different flow regimes were observed, which depend on: the plane inclination, the setup geometry, the volume, and characteristics of the fluid. Partial agreements were found between theoretical models and our results.
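
A minimal NumPy sketch of the generic fringe-analysis chain (four-step phase shifting, arctangent recovery of the wrapped phase, unwrapping, and a linear phase-to-height calibration) is given below to make the "compute, unwrap, calibrate" steps concrete; the synthetic surface, fringe frequency and sensitivity constant are assumptions of the illustration, not the actual system parameters.

# Generic four-step phase-shifting fringe analysis on a synthetic 1-D profile.
import numpy as np

x = np.linspace(0.0, 1.0, 512)
height = 0.05 * np.exp(-((x - 0.5) ** 2) / 0.02)     # hypothetical surface profile (m)
sensitivity = 200.0                                   # assumed phase change per metre of height (rad/m)
carrier = 2 * np.pi * 20 * x                          # phase of the projected fringe carrier
phi_true = carrier + sensitivity * height

# Four captured patterns I_k = A + B*cos(phi + k*pi/2), k = 0..3.
A, B = 0.5, 0.4
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])    # wrapped phase in (-pi, pi]
phi_unwrapped = np.unwrap(phi_wrapped)                # remove the 2*pi jumps

# Subtract the known reference (flat-surface) phase and apply the calibration.
recovered = (phi_unwrapped - carrier) / sensitivity
print("max reconstruction error (m): %.2e" % np.max(np.abs(recovered - height)))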

Unsteady flows & convection, particle transport and realistic solar water heating


Abstract

In the first part of this presentation, my research activities are briefly summarized. These include research interests, research techniques, computing and experimental facilities, competitive research grants and publications. In the second part, my research work on the following four specific topics is then briefly described and highlighted:

(1) Unsteady fountain flows, both in the weak and transitional regimes;
(2) Unsteady natural convection under a wide range of configurations and operating conditions;
(3) Solar water heating under realistic application conditions;
(4) Particle transport in sheared flows.

Understanding the impact of micro-organisms in ecosystem dynamics


Abstract:

Understanding the impact of micro-organisms in ecosystem dynamics: Linking multi-scale, multi-phase, and multi-process systems with microbial biomass function and dynamics in environmental modeling

Although environmental modeling has achieved important results in predicting the dynamics of natural ecosystems, several aspects related to biota dynamics are still unresolved. Among the biota, micro-organisms have substantial effects in determining ecosystem function and its response to forcing. The overarching question that frames this work is: how can we take into account in our environmental models that micro-organisms have macro-effects? Micro-organisms largely affect the energy and mass transfer by means of highly non-linear feedback mechanisms that occur over a broad range of spatial and temporal scales.

These mechanisms imply a close coupling between aspects of physics, chemistry, and biology that requires a reshaping of the classic approach to environmental modeling. The work presented here focuses on the integration of microbial biomass dynamics and function in models used in environmental engineering. Two geophysical systems, i.e., aquatic and terrestrial, are used to show the role played by micro-organisms in determining the ecosystem response. In the first instance (the aquatic ecosystem) the dynamics of aggregation and breakup processes among fine mineral particles is coupled with the dynamics of aggregate-attached micro-organisms. In the second instance (the terrestrial ecosystem), a multi-phase and multi-component reactive flow transport model describing soil ecosystems is linked to micro-organism dynamics to highlight the effect of biomass on water flow, chemical kinetics, nutrient cycling, and contaminant release to the broader environment. A comparison with experimental data from the two ecosystems under investigation and the analysis proposed with our models suggest that the overall role of micro-organisms is more important than expected. The incorporation of a mechanistic description of micro-organism dynamics and function will allow us to improve our understanding of the environment complexity, dynamics, and response to natural and human perturbation.

Behaviour of Restrained Steel Components in Fire


Abstract

Fire is one of the most disastrous hazards for steel structures, and understanding the behavior of the components of a steel structure subjected to fire is essential to structural safety. Since a component in a structure is normally restrained by its neighboring components, the behavior of restrained steel components in fire is more meaningful than that of isolated steel components. Compared with unrestrained or independent components, the thermal expansion effect has to be considered in the behavior of restrained components, and it is in general harmful to the fire resistance of steel components. However, the restraint is also beneficial in retaining the capacity of restrained components once some deflection has developed, which can release the thermal expansion effect. The behavior of restrained steel components may therefore be complicated. It is found that catenary action may develop in a restrained beam and the bowing effect may develop in a restrained column. In addition, membrane action can also be regarded as a kind of restraint effect that can develop in a slab. Theoretical models to simulate the behavior of restrained steel beams, steel columns and steel-concrete composite slabs in fire are proposed, and experimental verifications are presented.
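
A back-of-envelope illustration of why restraint matters: for an idealized, fully restrained elastic steel bar with temperature-independent properties (an assumption made only for this sketch; in reality stiffness and strength degrade at elevated temperature and deflection relieves much of the force), the blocked thermal strain turns directly into stress.

# Idealized, fully restrained elastic steel bar heated uniformly: the blocked
# thermal strain alpha*dT becomes compressive stress sigma = E*alpha*dT.
# Illustrative numbers only.
E     = 210e9      # Young's modulus of steel at ambient (Pa)
alpha = 1.2e-5     # coefficient of thermal expansion (1/degC)
dT    = 400.0      # temperature rise (degC)
fy    = 355e6      # nominal ambient yield strength (Pa)

sigma = E * alpha * dT
print(f"fully restrained thermal stress: {sigma/1e6:.0f} MPa "
      f"({sigma/fy:.1f} x ambient yield)")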

Wednesday, March 11, 2009

Characterization of FET Dynamics and Nonlinearity


Abstract

Field Effect Transistors exhibit a variety of complicated dynamic and nonlinear interactions that this session will attempt to demystify. The dynamics include self heating, bias dependent change in trapped charge, and variations due to impact ionization. These are feedback mechanisms that contribute to intermodulation as a memory effect does. A similar contribution to distortion arising from external impedances is intimately linked to the nonlinearity of the FET. Identifying and characterizing FET dynamics and linearity is a key step in the design process, but a variety of measurement issues arise. These include extraction of intrinsic characteristics, exploration of nonlinearities across the whole spectrum, and determination of rate dependencies from small-signal and pulse data. A FET is better viewed as a nonlinear system with feedback, bias dependent rates, and high-order nonlinear conductance and charge storage with specific terminal to terminal interaction.

Comparison of Multi-User Scheduling Algorithms in MIMO Communications


Abstract

This seminar considers wireless broadcast systems with multiple antennas at the base station. Precoding at the transmitter reduces interference between users, allowing independent data streams to be sent to multiple users simultaneously. With typically more users than transmit antennas, efficient selection of user subsets is important. The talk compares the effectiveness of several user selection algorithms and identifies situations where they are suboptimal. The impact of zero-forcing (ZF) and dirty paper coding (DPC) precoding is considered. Simulation results are used to verify the performance of each scheduling algorithm, and scenarios are identified in which a particular algorithm is optimal. The inefficiencies of scheduling algorithms with particular precoding techniques are discussed in the talk.

Spiral Resonators


Abstract

In this talk I will be presenting an overview of recently proposed spiral resonators and several applications of interest. In the first part, I will briefly talk about the similarities and dissimilarities between the spirals and periodic electromagnetic bandgap structures, and I will compare several defect-ground type structures used as filters. In the second part, I will try to explain the operating principles of the spirals and show examples of their applications in harmonic suppression of an antenna, a microstrip duplexer and a coplanar waveguide duplexer. Finally, by exhibiting their negative effective permeability, I will conclude that these spirals are indeed meta-material particles when excited by proper electromagnetic fields in space. I will also present a left-handed (negative refractive index) medium consisting of metal spirals and strips.

Phased Array Feed Development with the NTD Interferometer


Abstract:

A two-dish interferometer has been built at the CSIRO Marsfield site to investigate phased array feed technologies for the next generation of radio telescopes, including MIRANdA and the SKA. Phased array feeds consisting of densely spaced focal plane arrays have the potential to provide contiguous wide fields of view and hence can greatly increase survey speeds. This interferometer is being used to investigate beamforming and calibration techniques. This talk will describe the instrument, some of the challenges in its development, the proposed experiments and, if available in time, some results.
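
As a generic illustration of what beamforming means for such arrays (and not of the NTD interferometer's actual signal processing), the sketch below forms conjugate steering-vector weights for a uniform linear array and locates the resulting beam; the element count, spacing and steering angle are assumed values.

# Generic narrowband delay-and-sum beamforming for a uniform linear array.
import numpy as np

n_elem, spacing = 8, 0.5            # elements and spacing in wavelengths (assumed)
steer_deg = 20.0                     # desired beam direction from broadside

def steering_vector(theta_deg):
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

weights = steering_vector(steer_deg).conj() / n_elem      # delay-and-sum weights

angles = np.linspace(-90, 90, 721)
pattern = np.array([abs(weights @ steering_vector(a)) for a in angles])
peak = angles[np.argmax(pattern)]
print(f"beam peak at {peak:.1f} degrees (steered to {steer_deg} degrees)")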

Nonlinearity of Anti-Parallel Schottky Diodes For Mixer Applications


Abstract

This work investigates the nature of nonlinearities in an anti-parallel combination of Schottky diodes, which is often used as a frequency converting device or a mixer. An anti-parallel Schottky diode pair mixer requires only half the local oscillator frequency and this property makes it very attractive at millimeter-wave frequencies. The sources of nonlinearity in a metal-semiconductor Schottky junction are the junction resistance caused by the thermionic emission of electrons across the barrier and the junction capacitance resulting from the charge separation across the barrier. The mixing terms of utmost importance in an anti-parallel diode pair mixer are the wanted fundamental frequency converted product, the unwanted third-order frequency converted product and the undesirable second-order local oscillator breakthrough. This work aims to discover which among the nonlinear sources is the primary contributor to the generation of unwanted third-order products. An anti-parallel diode pair has an anti-symmetrical current-voltage characteristic about the origin and a symmetrical capacitance-voltage characteristic about the Y-axis. In other words, the current in the diode pair is an odd function of applied voltage whereas the capacitance is an even function of applied voltage. When both nonlinearities are described by a polynomial model extracted from measurements, the resistive nonlinearity contains only odd power terms and the capacitive nonlinearity is a sum of even power terms. The nonlinear resistance with only the odd power terms will be the dominant of the two in generating the unwanted third-order products, whereas the capacitive nonlinearity will be the primary contributor to local oscillator breakthrough. This work supports such an assertion by means of analysis, simulation and measurement. The results demonstrate the dominance of nonlinear resistance over junction capacitance in the generation of third-order products in an anti-parallel diode pair mixer. Understanding the nonlinearities involved also enables us to optimise the physical structure and geometry of the diode for good mixer performance.
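
The odd-versus-even argument can be reproduced numerically. The sketch below pushes a two-tone signal through a purely odd polynomial (standing in for the anti-symmetric I-V nonlinearity) and a purely even one (standing in for the symmetric C-V nonlinearity) and reads off the spectrum: only the odd branch produces the in-band third-order products at 2f1-f2, and only the even branch produces the difference-frequency term. The polynomial coefficients and tone frequencies are arbitrary illustrative choices, not a diode model.

# Two-tone illustration: x(t) = cos(2*pi*f1*t) + cos(2*pi*f2*t) through a purely
# odd polynomial and a purely even polynomial.  The odd branch generates the
# in-band third-order products (2*f1 - f2, 2*f2 - f1); the even branch only
# generates even-order products (DC, f2 - f1, f1 + f2, 2*f1, 2*f2, ...).
import numpy as np

fs, n = 1024.0, 4096
t = np.arange(n) / fs
f1, f2 = 100.0, 110.0                       # illustrative tone frequencies (Hz)
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

odd  = 1.0 * x + 0.3 * x**3                 # hypothetical odd (resistive-like) model
even = 0.3 * x**2                           # hypothetical even (capacitive-like) model

def tone_level(y, f):
    spec = np.abs(np.fft.rfft(y)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

for name, y in [("odd", odd), ("even", even)]:
    print(name, "branch @ 2*f1-f2:", round(tone_level(y, 2 * f1 - f2), 4),
          " @ f2-f1:", round(tone_level(y, f2 - f1), 4))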

Next Generation Wireless Networks Operating at 60GHz


Abstract

Next generation wireless networks operating at 60GHz require implementation using low-cost Silicon based processes. This talk will give an overview of the difficulties of operating at 60GHz on Silicon and demonstrate the approach we have taken to overcome these difficulties. Recent results for a 60GHz IQ modulator, VCO and power amplifier will be shown. Finally, I will explore the future for this technology.

60GHz Silicon Circuits for Gigabit Wireless Networks


Abstract

The 60GHz band has recently attracted much attention due to transistor scaling allowing low cost CMOS and SiGe BiCMOS circuits to perform sufficiently well in this unlicensed band. An overview of recently fabricated circuits in 0.18μm SiGe including low-noise amplifiers, mixers and receivers will be presented as will preliminary measured results.

Monday, March 9, 2009

From Novel Fuels to NanoParticle Formation: a Multiscale Computational Approach


Abstract

The process of combustion is the dominant pathway through which mankind continuously injects particulate matter into the atmosphere. These combustion-generated particles are present not only in very large amounts, but they are produced, at the smallest scale, in the form of clusters with nanometric dimensions. Although the total mass of particulate emissions has been significantly reduced with improvement of combustion efficiency and emissions control systems, the very small nanoparticles are exceedingly difficult to control by the emission systems typically installed on vehicles. In addition, the current emissions regulations are mass-based and do not address the presence of nanoparticles. Predictive models of nanoparticle formation and oxidation that provide detailed chemical structures of the particles currently do not exist, a fact that greatly limits our ability to control this important chemical process. The objectives of this work are focused on gaining a clear understanding of the chemical and physical processes occurring during the formation of carbon nanoparticles in combustion conditions and their fate in the environment. Starting from the chemistry of novel fuels, including esters, the primary focus is to provide a detailed multi-scale characterization of nanoparticle formation in combustion environments, through the use of novel simulation methodologies operating across disparate (spatial/temporal) regimes. The use of ab initio simulations to describe the reaction pathways for the breakdown of the fuel molecules, together with atomistic models, such as Molecular Dynamics simulations, allow us to follow the transformations that occur from fuel decomposition to nanoparticle formation in a chemically specific way, thereby providing information on both the chemical structure and the configuration of the nanoparticles and their agglomeration. This approach establishes a connection between the various time scales in the nanoparticle self-assembly problem, together with an unprecedented opportunity for the understanding of the atomistic interactions underlying carbonaceous nanoparticle structures and growth. Preliminary results will also be given from atomistic-scale simulations of the nanoparticles interacting with model cell membranes.

Protein Crystallisation


Abstract

A critical phase in an X-ray structural determination process is the crystallisation of the purified protein. Not only is this step time-consuming, it also requires a considerable amount of protein sample. The main objective is to obtain large, preferably single well-ordered crystals capable of diffracting X-rays to the highest possible resolution. In this paper I will describe the theory, physical methods and strategy involved in the crystallisation of biological macromolecules, and also the work that has been done to crystallise two distinct proteins – HR1b domain of PRK1 (Protein Kinase C-Related Kinase 1) and NK1 fragment of HGF /SF (Hepatocyte Growth Factor/Scatter Factor).

Application of the combined SPH and DEM techniques to simulate realistic Beer


Abstract

A discrete particle based method capable of creating very realistic animations of bubbles in fluids is presented. It allows for the generation (nucleation) of bubbles from gas dissolved in the fluid, the motion of the discrete bubbles including bubble collisions and drag interactions with the liquid which could be undergoing complex free surface motion, the formation and motion of coupled foams and the final dissipation of bubbles. This allows comprehensive simulations of dynamic bubble behavior. The underlying fluid simulation is based on the mesh-free Smoothed Particle Hydrodynamics method. Each particle representing the liquid contains an amount of dissolved gas. Gas is transferred from the continuum fluid model to the discrete bubble model at nucleation sites on the surface of solid bodies. The rate of gas transport to the nucleation sites controls the rate of bubble generation, producing very natural time variations in bubble numbers. Rising bubbles also grow by gathering more gas from the surrounding liquid as they move. This model contains significant bubble scale physics and allows, in principle, the capturing of many important processes that cannot be directly modeled by traditional methods. The method is used here to realistically animate the pouring of a glass of beer, starting with a stream of fresh beer entering the glass, the formation of a dense cloud of bubbles, which rise to create a good head as the beer reaches the top of the glass.
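
For readers unfamiliar with SPH, the sketch below shows its basic building block, a kernel-weighted summation over neighbouring particles, using the standard 2D cubic-spline kernel and a density sum on a small block of particles; the particle spacing and smoothing length are illustrative, and none of the bubble or foam physics of the paper is included.

# Core of SPH in one line of math: a field at particle i is a kernel-weighted
# sum over neighbours, e.g. density rho_i = sum_j m_j W(|r_i - r_j|, h).
# Below: the standard 2D cubic-spline kernel and a brute-force density sum
# for a small block of equally spaced particles (illustrative setup only).
import numpy as np

def cubic_spline_W(r, h):
    """Standard 2D cubic-spline smoothing kernel, normalisation 10/(7*pi*h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# A hypothetical 20 x 20 block of water-like particles.
dx = 0.01                                    # particle spacing (m)
xs, ys = np.meshgrid(np.arange(20) * dx, np.arange(20) * dx)
pos = np.column_stack([xs.ravel(), ys.ravel()])
rho0, h = 1000.0, 1.3 * dx                   # reference density, smoothing length
mass = rho0 * dx * dx                        # mass per particle

r_ij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
rho = (mass * cubic_spline_W(r_ij, h)).sum(axis=1)
print("interior particle density ~", round(rho.max(), 1), "kg/m^3 (target 1000)")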

Application of Smoothed Particle Hydrodynamics to simulate multi fluid flows with large density differences and application to mixing of paints


Abstract


Liquid mixing and blending is an important operation before packaging many fluid products. The blending or mixing process should be cheap and efficient and at the same time result in high product quality in terms of consistency. In order to achieve this goal experimentally, one needs to perform many trials involving changes in the motion and geometric dimensions of the packaging container as well as the ratios of the fluids being blended together. Simulation can be used as an efficient tool to reduce the number of such experimental trials and/or to optimise the system once a reasonable solution is obtained from the experimental trials.

There are four principal issues that arise while modelling such systems, which can lead to difficulties for traditional grid-based CFD methods:

1. Tracking of the complex interface between the two phases as well as the free surface behaviour;
2. Dealing with complex motions of the packaging equipment;
3. Dealing with large density differences between the fluids in consideration; and
4. Tracking of convective movement of different fluid components.

In order to overcome these issues the mesh-free Smoothed Particle Hydrodynamics (SPH) method is explored as an alternative tool for modelling such systems. In this paper we develop a robust methodology to deal with fluids having a density ratio of up to 1:1000. We then demonstrate its application to the mixing of a viscous liquid in a container moving in a complex circular path with varying liquid levels. The simulations presented in this paper are performed in 2D. Extension to 3D and comparison with experimental results will be presented in a subsequent study.

Rheology of Core Cross-Linked Star Polymers


Abstract

Living radical polymer synthesis enables novel and interesting polymer architectures to be synthesized with relative ease. One such polymer design is the core cross-linked star polymer (CCS polymer). This polymer has received much attention from a synthetic point of view, but the rheology of these polymers has not been characterised in detail.

This study presents the results of a rheological study of various CCS polymer solutions covering a range of different architectures and flow conditions. The polymer solutions have been shown to give an almost Newtonian response up to relatively high molecular weights and concentrations. At very high concentrations and molecular weights, the response becomes more polymeric in its nature. It is hoped that this relatively preliminary work will provide some insight into the potential application of these polymers which could include such areas as the film forming component of paint, rheology modifiers for plastics processing and drug delivery agents among many other possibilities.

Drug Delivery and Transport in the Human Brain


Abstract

Several treatment modalities for neuro-degenerative diseases or tumors of the central nervous system involve invasive delivery of large molecular weight drugs to the brain. Despite ample experimental efforts, accurate drug targeting for the human brain remains a challenge. Our interdisciplinary research aims at a systematic design process for the targeted delivery of therapeutic agents into specific regions of the brain based on first-principles mathematical equations of drug transport and pharmacokinetics in porous tissue. The proposed mathematical framework predicts achievable treatment volumes in the desired regions as a function of target anatomy and infusion catheter positioning.
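
For orientation, a generic form of the porous-tissue transport equation on which such frameworks are typically built (stated here as background, not as the group's specific model) is
\[
\frac{\partial c}{\partial t} = \nabla\cdot\bigl(D\,\nabla c\bigr) - \nabla\cdot\bigl(\mathbf{v}\,c\bigr) - k\,c + S(\mathbf{x},t),
\]
where $c$ is the tissue concentration of the agent, $D$ an effective (possibly anisotropic) diffusivity, $\mathbf{v}$ the infusion-driven bulk velocity in the porous tissue, $k$ a first-order clearance/loss rate, and $S$ the catheter infusion source.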

We tackle the three-dimensional optimal catheter placement problem to determine optimal infusion and catheter design parameters that maximize drug penetration and volumes of distribution in the target area, while minimizing toxicity in non-targeted regions. A novel computational approach for determining unknown transport properties of therapeutic agents from in-vivo imaging data will also be introduced.

We expect for the near future that rigorous computational approaches like ours will enable physicians and scientists to design and optimize drug administration in a systematic fashion.

Development and Production of Nuclear Probes for Molecular Imaging in-vivo

Abstract

The availability of specific imaging probes is the “nuclear fuel” for molecular imaging by Positron Emission Tomography (PET) and Single Photon Emission Tomography (SPECT). These two radiotracer-based imaging modalities represent the prototype methods for non-invasive depiction and quantification of biochemical processes, allowing a functional characterization of biology in the living organism. A variety of powerful radiolabeled tracers are already established in the routine clinical management of human disease and others are currently subject to clinical assessment. Emerging from investigations of the genomic and proteomic signatures of cancer cells, an increasing number of promising targets are being identified, including receptors, enzymes, transporters and antigens. Corresponding probes for these newly identified targets need to be developed and transferred into the clinical setting. An overview of tracer concepts, target selection and development strategies for radiotracers is given, and the potential impact of µ-fluidic systems for routine tracer production is discussed.

Learning Mathematics via Image Processing: A Rationale and a Research Direction


Abstract


Digital image processing offers several possible new approaches to the teaching of a variety of mathematical concepts at the middle-school and high-school levels. There is reason to believe that this approach will be successful in reaching some "at-risk" students that other approaches miss. Since digital images can be made to reflect almost any aspect of the real world, some students may have an easier time taking an interest in them than they might with artificial figures or images resulting from other graphics-oriented approaches. Using computer-based tools such as image processing operators, curve-fitting operators, shape analysis operators, and graphical synthesis, students may explore a world of mathematical concepts starting from the psychologically "safe" territory of their own physical and cultural environments. There is reason to hope that this approach will be particularly successful with students from diverse backgrounds, girls and members of minority groups, because the imagery used in experiments can easily be tailored to individual tastes.

This approach can be explored in a variety of ways, including the development of modules either to replace components of existing curricula or to augment them. Some possible modules covering both traditional and non-traditional topics for school mathematics are described, and some of the issues related to the evaluation of such modules are also discussed.
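
As one tiny example of the kind of exercise intended, an image can be treated as a table of brightness values so that a linear function y = a*x + b becomes visible as a contrast and brightness change; the synthetic grey ramp and the coefficients below are invented for the illustration.

# An image is just a table of brightness values, so a linear function
# y = a*x + b can be "seen": a changes contrast, b changes brightness.
# A synthetic gradient image stands in for a student's own photo.
import numpy as np

image = np.tile(np.linspace(0, 255, 8), (8, 1))     # hypothetical 8x8 grey ramp

def apply_linear(img, a, b):
    """Apply y = a*x + b to every pixel and clip to the displayable range."""
    return np.clip(a * img + b, 0, 255)

brighter      = apply_linear(image, 1.0, 40)     # larger intercept: brighter everywhere
more_contrast = apply_linear(image, 1.5, -60)    # steeper slope: more contrast

for name, img in [("original", image), ("brighter", brighter),
                  ("more contrast", more_contrast)]:
    print(f"{name:13s} mean={img.mean():6.1f} std={img.std():6.1f}")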


Index Assignment for Progressive Transmission of Full Search Vector Quantization


Abstract

The question of progressive image transmission for full search vector quantization is addressed via codeword index assignment. Namely, we develop three new methods of assigning indices to a vector quantization codebook and formulate these assignments as labels of the nodes of a full search progressive transmission tree. This tree defines, from the bottom up, the binary merging of codewords for successively smaller codebooks. The binary representation of the path through the tree represents the progressive transmission code. The methods of designing the tree which we apply are Kohonen's self-organizing neural net, a modification of the common splitting technique for the generalized Lloyd algorithm, and, borrowing from optimization theory, minimum cost perfect matching. Empirical experiments were run on a medical image database to compare the signal-to-noise ratio (SNR) of the techniques on the intermediate as well as the final images. While the neural net technique worked reasonably well, the other two methods performed better and were close in SNR to each other. We also compared our results to tree-structured vector quantizers and confirmed that full search VQ has a slightly higher SNR.
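
The sketch below illustrates the tree-building idea with a simple greedy nearest-pair pairing (a stand-in for the three assignment methods of the paper): at each level the current codewords are merged in pairs to give a codebook of half the size, and the left/right choices on the path from the root to a leaf give that codeword's progressive bit code. The small random codebook is invented for the example.

# Bottom-up merge tree for progressive VQ transmission (greedy pairing variant).
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.random((8, 2))                  # hypothetical 8-codeword, 2-D codebook

nodes = [{"centroid": c, "count": 1, "children": None} for c in codebook]
level = list(range(len(nodes)))                # leaf node ids

while len(level) > 1:
    next_level, remaining = [], list(level)
    while remaining:
        i = remaining.pop(0)
        # greedily pair i with its nearest remaining neighbour
        j = min(remaining, key=lambda k: np.linalg.norm(nodes[i]["centroid"] -
                                                        nodes[k]["centroid"]))
        remaining.remove(j)
        ni, nj = nodes[i], nodes[j]
        nodes.append({"centroid": (ni["count"] * ni["centroid"] +
                                   nj["count"] * nj["centroid"]) /
                                  (ni["count"] + nj["count"]),
                      "count": ni["count"] + nj["count"],
                      "children": (i, j)})
        next_level.append(len(nodes) - 1)
    level = next_level                         # codebook of half the size

def codes(node_id, prefix=""):
    """Progressive bit codes: root-to-leaf paths of 0/1 merge choices."""
    children = nodes[node_id]["children"]
    if children is None:
        return {node_id: prefix}
    left, right = children
    return {**codes(left, prefix + "0"), **codes(right, prefix + "1")}

for leaf, bits in sorted(codes(level[0]).items()):
    print(f"codeword {leaf}: progressive code {bits}")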


Interactive Proof Systems with Polynomially Bounded Strategies


Abstract

Interactive proof systems in which the Prover is restricted to have a polynomial size strategy are investigated. The restriction to a polynomial size computation tree, visible to the Prover, or to a logarithmically bounded number of coin flips by the Verifier, guarantees a polynomial size strategy. The additional restriction of logarithmic space is also investigated. A main result of the paper is that interactive proof systems in which the Prover is restricted to a polynomial size strategy are equivalent to MA, the Merlin-Arthur games defined by Babai and Moran [1]. Polynomial tree size is also equivalent to MA, but when logarithmic space is added as a restriction, the power of polynomial tree size reduces to NP. A logarithmically bounded number of coin flips is equivalent to NP, and when logarithmic space is added as a restriction, the power is not diminished. The proof that NP ⊆ IP(log-space, log-random-bits) illustrates an interesting application of the new "fingerprinting" method of Lipton [14]. Public interactive proof systems which have polynomial size strategies are also investigated.

A Performance Study of Memory Consistency Models

A Performance Study of Memory Consistency Models

Abstract

Recent advances in technology are such that the speed of processors is
increasing faster than memory latency is decreasing. Therefore the
relative cost of a cache miss is becoming more important. However, the
full cost of a cache miss need not be paid every time in a multiprocessor.
The frequency with which the processor must stall on a cache miss can
be reduced by using a relaxed model of memory consistency.

In this paper, we present the results of instruction-level simulation
studies on the relative performance benefits of using different models
of memory consistency. Our vehicle of study is a shared-memory
multiprocessor with processors and associated write-back caches
connected to global memory modules via an Omega network. The benefits
of the relaxed models, and their increasing hardware complexity, are
assessed with varying cache size, line size, and number of processors.
We find that substantial benefits can be accrued by using relaxed
models, but the magnitude of the benefits depends on the architecture
being modeled, the benchmarks, and how the code is scheduled. We did
not find any major difference in levels of improvement among the
various relaxed models.

Adaptive Guided Self-Scheduling

Adaptive Guided Self-Scheduling

Abstract

Loops are an important source of parallelism in application programs.
The iterations of such loops may be scheduled statically
(at compile time) or dynamically (at run-time) onto the processors
of a parallel machine.
Dynamic scheduling methods, although offering increased robustness
and flexibility relative to static approaches,
must be carefully designed if they are to simultaneously
achieve low overhead and good load balance.

In this paper we propose Adaptive Guided Self-Scheduling, an
enhancement of the Guided Self-Scheduling (GSS) approach to the
dynamic scheduling of loops with independent iterations. Adaptive
Guided Self-Scheduling addresses the two principal weaknesses of GSS:
excessive overhead caused by fine grained scheduling of the final few
loop iterations when iteration execution times are small, and load
imbalance resulting from consecutive iteration execution times that
vary widely but in a correlated way. We address the first weakness by
adaptively adjusting the scheduling granularity if, at run time, loop
iterations are determined to be so short that overhead would otherwise
be excessive. We address the second weakness by providing the option
of wrapped assignment of loop iterations, assigning each processor groups
of iterations sampled uniformly throughout the iteration space rather
than blocks of consecutive iterations.

Performance measurements are presented for an implementation of Adaptive
Guided Self-Scheduling on a Sequent Symmetry multiprocessor.
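
As a rough illustration of the two modifications (not the actual Sequent implementation), the sketch below shows the classic GSS chunk-size rule with a hypothetical run-time minimum chunk size, and a wrapped (cyclic) assignment of iteration groups; all parameter values are made up.

```python
import math

def gss_chunks(total_iters, num_procs, min_chunk=1):
    """Guided self-scheduling: each time a processor requests work it takes
    ceil(R / P) of the R remaining iterations.  A minimum chunk size (the
    adaptive part, raised at run time when iterations are found to be very
    short) caps the scheduling overhead on the final few iterations."""
    remaining = total_iters
    while remaining > 0:
        chunk = max(min_chunk, math.ceil(remaining / num_procs))
        chunk = min(chunk, remaining)
        yield chunk
        remaining -= chunk

# With 4 processors and 100 very short iterations, raising min_chunk from
# 1 to 8 removes the long tail of tiny chunks at the end of the loop.
print(list(gss_chunks(100, 4)))               # tail: ..., 2, 1, 1, 1, 1
print(list(gss_chunks(100, 4, min_chunk=8)))  # tail: ..., 8, 8, 8, 7

def wrapped_assignment(total_iters, num_procs, group_size=1):
    """Wrapped (cyclic) assignment: each processor receives groups of
    iterations sampled uniformly across the iteration space, so correlated
    variation in consecutive iteration times is spread over all processors."""
    return {p: [i for i in range(total_iters)
                if (i // group_size) % num_procs == p]
            for p in range(num_procs)}

print(wrapped_assignment(12, 3, group_size=2))
# {0: [0, 1, 6, 7], 1: [2, 3, 8, 9], 2: [4, 5, 10, 11]}
```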


Predicting Program Execution Times by Analyzing Static and Dynamic Program Paths

Predicting Program Execution Times by Analyzing Static and Dynamic Program Paths

Abstract

This paper describes a method to predict guaranteed and tight
deterministic execution time bounds of a program. The basic prediction
technique is a static analysis based on a simple timing schema for
source-level language constructs, which gives accurate predictions in
many cases. Using powerful user-provided information, dynamic path
analysis refines looser predictions by eliminating infeasible paths
and decomposing the possible execution behaviors in a path wise
manner. Overall prediction cost is scalable with respect to desired
precision, controlling the amount of information provided. We
introduce a formal path model for dynamic path analysis, where user
execution information is represented by a set of program paths. With a
well-defined practical high-level interface language, user information
can be used in an easy and efficient way. We also introduce a method
to verify given user information with known program verification
techniques. Initial experiments with a timing tool show that safe and
tight predictions are possible for a wide range of programs. The tool
can also provide predictions for interesting subsets of program
executions.
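
The paper's timing schema itself is not reproduced here; the following Python sketch only illustrates the general idea of a source-level schema, with hypothetical per-construct costs and a user-supplied loop bound.

```python
# A minimal sketch of a source-level timing schema, assuming hypothetical
# per-construct costs; real tools work on compiled code and refine these
# bounds further with user-supplied path information.

def wcet(stmt, env):
    """Return a guaranteed upper bound on execution time for a statement
    represented as a nested tuple (an abstract syntax fragment)."""
    kind = stmt[0]
    if kind == "basic":                      # ("basic", cost)
        return stmt[1]
    if kind == "seq":                        # ("seq", s1, s2, ...)
        return sum(wcet(s, env) for s in stmt[1:])
    if kind == "if":                         # ("if", cond_cost, then, else)
        _, cond_cost, then_s, else_s = stmt
        return cond_cost + max(wcet(then_s, env), wcet(else_s, env))
    if kind == "loop":                       # ("loop", name, body); bound in env
        _, name, body = stmt
        return env[name] * wcet(body, env)   # user-provided iteration bound
    raise ValueError(f"unknown construct {kind!r}")

# Example: a loop of at most 10 iterations around an if-statement.
program = ("seq",
           ("basic", 5),
           ("loop", "L1", ("if", 2, ("basic", 8), ("basic", 3))))
print(wcet(program, {"L1": 10}))   # 5 + 10 * (2 + 8) = 105
```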

Sunday, March 8, 2009

CYBERSPACE GEOGRAPHY VISUALIZATION

CYBERSPACE GEOGRAPHY VISUALIZATION

ABSTRACT

The central goal of this paper is to give information about virtual locations to the actors of cyberspace in order to help them solve orientation issues, i.e. the lost-in-cyberspace syndrome. The approach taken uses low-dimensional digital media to create a visualization that can guide users.

The World-Wide Web can be depicted as a graph. Each resource is a vertex and the links are the edges. The distance between a pair of resources is then defined as the shortest path in the graph between them, leading to the creation of a metric. With the ability provided to measure the distances among resources, it becomes possible to represent each resource as a point in a high dimensional space where their relative distances are preserved.
It is clear that a high dimensional space cannot be visualized and thus its dimensionality has to be reduced. To perform this task, the self-organizing maps algorithm is used because it preserves the topological relationships of the original space, conjointly lowering the dimensionality. This creates the ability to map any resource onto a lower dimensional space, while maintaining the order of proximity.

During this non-linear dimensionality reduction, the distances among resources are lost. Since it is essential that the distances can be evaluated, the unified matrix method is used. By geometrically approximating the vector distribution in the neurons of the self-organizing maps, this method provides a means to analyse the landscape of the mapping of cyberspace.
To permit exploratory analysis of the self-organizing map, the mapping is made onto a two-dimensional visualization medium. Note, however, that reduction is also possible, using the proposed method, to a space having an arbitrary dimension. This approach enables the visual display of virtual locations of resources on a landscape, in a fashion similar to geographical maps.
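
As a minimal sketch of the first step only (the graph metric, not the self-organizing map or the unified matrix method), the Python below computes shortest-path distances over a hypothetical, tiny link graph.

```python
from collections import deque

def shortest_path_lengths(graph, source):
    """Breadth-first search over an unweighted link graph: the number of
    links on the shortest path between two resources is used as their
    distance, which yields a metric on the set of resources."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# A hypothetical, tiny link graph (treated as undirected so that the
# resulting distance is symmetric).
links = {
    "home": ["docs", "blog"],
    "docs": ["home", "api"],
    "blog": ["home", "api"],
    "api":  ["docs", "blog"],
}
print(shortest_path_lengths(links, "home"))
# {'home': 0, 'docs': 1, 'blog': 1, 'api': 2}
```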

Friday, March 6, 2009

Tumour tracking for radiotherapy of lung tumours

Tumour tracking for radiotherapy of lung tumours

Abstract

One of the main causes of death for males in New Zealand is lung cancer. Lung cancer has a poor prognosis. The 5-year survival for lung cancer patients is as low as 10-15%. One of the treatment options is radiotherapy. One of the difficulties in treatment of lung cancers with radiotherapy is that intra-fractional movement of lung tumours necessitates considerable treatment margins around the lesion to ensure sufficient dose coverage. These margins result in treatment volumes that are several times larger than the actual tumour volume. Consequently, a large fraction of healthy lung tissue surrounding the tumour receives an undesired high dose, which may lead to considerable side effects. As a consequence, treatment of lung cancers with radiotherapy is often restricted to non-optimal radiation doses. This is despite the clinical evidence that higher radiation doses to the tumour result in a survival advantage.

One way of reducing the treatment margins and thereby enabling higher radiation doses to the tumour is to track and correct for tumour motion in real-time. In the past few years, I have been involved in developing the Wuerzburg Adaptive Tumour Tracking System (WATTS). The WATTS acquires two independent (non-optimal) data sets, one based on real-time megavoltage imaging and the other on an optical tracking device to record abdominal movement, to infer the tumour position in real-time. This information is processed and passed to the control system of a robotic hexapod table. The idea is to move the hexapod table continuously against the direction of the tumour motion in such a way that the moving tumour remains stationary in space.

Recent Research in Composite Materials: Nanomaterials and Fatigue, Compressive Strength, Articulated LCM

Recent Research in Composite Materials: Nanomaterials and Fatigue, Compressive Strength, Articulated LCM

Abstract

Brief summaries of recent research in mechanical behavior and processing of structural composites will be presented. Topics discussed are: Nanomaterials and fatigue damage, compressive strength models, and liquid composite molding (LCM). Incorporating small quantities of carbon nanotubes is shown to produce increases in the high-cycle fatigue strengths of glass composites, which normally show significant degradation in fatigue strength with cycling. Fatigue data and high-resolution electron microscopy on glass fiber composites modified with well-dispersed carbon nanotubes provide evidence that distributed nanocracking by the carbon nanotubes extends fatigue life. The second topic is a new model for compressive strength in which the incorporation of an interphase is shown to explain the observed dependence of compressive strength on volume fraction in a variety of composite materials. The third topic is a study of the effect of incorporating tool articulation during resin transfer molding, which accelerates molding speeds by a factor of ten or more. Models, experimental data and potential applications will be discussed.

Towards a Generic Simulation of Liquid Composite Moulding Processes

Towards a Generic Simulation of Liquid Composite Moulding Processes

Abstract

The term Liquid Composite Moulding (LCM) encompasses several composite manufacturing processes, including Resin Transfer Moulding (RTM), Injection/Compression Moulding (I/CM), Resin Infusion (a.k.a. Vacuum Assisted RTM), and RTM Light. A significant number of additional process variants have been developed, but in all cases dry fibre reinforcement contained within a mould cavity is infiltrated by a moving front of polymeric thermoset resin. As the application of fibre reinforced plastics has widened, and manufacturers are increasingly required to work under tighter environmental regulations, LCM variants have emerged which best suit a particular manufacturing scenario (i.e. one-off production, through to mass production). Processes such as Resin Infusion and RTM Light, which utilize flexible and semi-rigid tooling, provide the potential for reducing tooling costs but produce significant challenges for modelling. If LCM variants utilizing flexible or semi-rigid tooling are to be addressed, a generic LCM simulation will require a thorough analysis of the forces exerted on tooling. An overview will be given on the LCM simulation under development at the University of Auckland. Several experimental verification studies will be presented to highlight progress, and the challenges that remain. Recent highlights include measurement of stress distributions exerted on rigid mould tools (RTM, I/CM), and a stereophotogrammetry system developed to make full-field laminate thickness measurements during Resin Infusion.

Methodology and Modelling Approach for Strategic Sustainability Analysis of Complex Energy-Environment Systems

Methodology and Modelling Approach for Strategic Sustainability Analysis of Complex Energy-Environment Systems

Abstract

It is likely that in the near future, energy engineering will be required to help society adapt to permanently constrained fuel supplies, constrained greenhouse gas emissions, and electricity supply systems running with minimal capacity margins. The goal of this research is to develop an analytical approach for adaptive energy systems engineering within the context of resource and environmental constraints. This involves assessing available energy resources, environmental and social issues, and economic activities. The approach is applied to a relatively simple case study on Rotuma, an isolated Pacific Island society. The case study is based on new data from field work. A spectrum of development options is identified for Rotuma and a reference energy demand is calculated for each representative level. A spectrum of conceptual reference energy system models is generated for each energy service level with a range of renewable energy penetration. The outcome is a matrix of energy system investment and resource utilization for the range of energy service levels. These models are then used for comparative risk assessment. The result is an easily understood, visually based investment and risk assessment for both development and adaptation to constrained resource availability. The results show a clear development opportunity space for Rotuma where needs and services are in balance with investment, local resource availability and environmental constraints.

Robot Manipulator Operation based on Human Brain-Wave Signal

Robot Manipulator Operation based on Human Brain-Wave Signal

Abstract

Brain waves are representative sensory signals of the human body and exhibit different frequencies depending on a person’s emotional state. In this presentation, the author discusses the operation of a robot manipulator by brain waves and evaluates its motion experimentally. When people look at a comfortable scene or feel relaxed, their brain waves generally exhibit an α-wave signal in the frequency band of approximately 8 to 13 Hz. In such conditions, these α waves in particular exhibit a 1/f fluctuation (in which the corresponding power is inversely proportional to the frequency “f”), which is said to be a comfortable fluctuation for humans.

Taking this background into consideration, we input the 1/f α-wave signals obtained from test subjects listening to comfortable music into a robot manipulator by controlling the angle of each joint. To evaluate this motion, questionnaires were filled in by twenty test subjects while they watched two types of manipulator motion: a 1/f motion and a white-noise-like motion. The tests suggested that 90% of people felt comfortable with the 1/f manipulator motion.
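
The measured α-wave signals are of course not available here; the sketch below, with arbitrary amplitude and length, only illustrates how a synthetic 1/f fluctuation could be generated and mapped onto a joint-angle trajectory.

```python
import numpy as np

def one_over_f_signal(n, rng=None):
    """Generate a zero-mean signal whose power spectrum falls off roughly as
    1/f, by scaling the spectrum of white noise by 1/sqrt(f)."""
    rng = np.random.default_rng(rng)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid dividing by zero at DC
    pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)
    return pink / np.max(np.abs(pink))  # normalise to [-1, 1]

# Map the fluctuation onto a joint-angle trajectory around a neutral pose.
# The 30-degree amplitude and 100-sample length are arbitrary choices here.
signal = one_over_f_signal(100, rng=0)
joint_angles = 90.0 + 30.0 * signal     # degrees, centred on 90
print(joint_angles[:5])
```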

Development of a Rotational Shear Vane for use in Avalanche Safety Work

Development of a Rotational Shear Vane for use in Avalanche Safety Work

Abstract

This Masters Thesis describes the continuation of the Snow Probe development. The focus of this project was to establish the rotational shear vane as a useful tool in avalanche safety work as well as develop a robust method for measuring the applied torque.

A new and novel way of measuring the torque on a rotational shear vane has been developed to illustrate its effectiveness. The new system measures the power supplied to a cordless drill to get an indication of the applied torque. This was done because it was found that the earlier method of using a strain gauge/cantilever system repeatedly failed to work, largely due to complexity.
The snow probe in its present embodiment has been shown to provide a good, clear indication of the snow profile under easily repeated circumstances. Shear strength results are at this stage not sufficiently accurate for reliable quantitative results. However, the probe in its present form is able to give pictorial impressions of the snow pack that compare well with current hand hardness profiles derived from snow pit methods. Even in its current form the snow probe is able to collect useful snow profile data in a matter of minutes, much quicker than conventional snow pit methods.

A loose relationship was found to exist between the approach angle of a shear vane blade and the clarity of the snow profile. These relationships are relatively inaccurate at present due to lack of rotational velocity data and therefore approach angle data. It is believed that the addition of a rotation counter would greatly increase the accuracy of the probe results and enable a shear strength profile to be quantified.

Six Sigma

Six Sigma

Six Sigma (6σ) is a disciplined, data-driven approach to process improvement, reduced costs, and increased profits. The Six Sigma methodology, consisting of the steps "Define - Measure - Analyze - Improve - Control," is the roadmap to achieving this goal. This methodology was originally practiced with great success by Motorola in the early 1980s. Today, the Six Sigma methodology has been adopted by major organizations around the world. The goal of this brief overview session will be to present the basic concepts of the Six Sigma disciplines in plain language. Engineering and quality professionals at all levels within a company are expected to benefit from attending this class. Discussions during the session will address the following areas:

● WHAT is Six Sigma
● WHERE is Six Sigma applicable
● WHY is Six Sigma important
● HOW would Six Sigma be implemented
● WHO would drive and implement Six Sigma
● WHEN should Six Sigma be launched

A Multiscale Characterization and Analysis Methodology for Ductile Fracture in Heterogeneous Metallic Materials

A Multiscale Characterization and Analysis Methodology for Ductile Fracture in Heterogeneous Metallic Materials

Abstract

Heterogeneous metallic materials, e.g. cast aluminum alloys or metal matrix composites, are widely used in automotive, aerospace, nuclear and other engineering systems. The presence of precipitates and particulates in the microstructure often affects their failure properties, like fracture toughness or ductility, in an adverse manner. Important micromechanical damage modes that are responsible for deterring the overall properties include particulate fragmentation, debonding at interfaces and ductile matrix failure due to void initiation, growth and coalescence, culminating in local ductile failure. The complex interaction between competing damage modes in the presence of multiple phases makes failure and ductility prediction for these materials quite challenging. While phenomenological and straightforward micromechanics models have predicted stress-strain behavior and strength of multi-phase materials with reasonable accuracy, their competence in predicting ductility and strain-to-failure, which depend on the extreme values of the distribution, is far from mature. To address the need for a robust methodology for ductility, this work will discuss a comprehensive multi-scale characterization based domain decomposition method followed by a multi-scale model for deformation and ductile failure. Adaptive multi-scale models are developed for quantitative predictions at critical length scales, establishing functional links between microstructure and response, and following the path of failure from initiation to rupture. The work is divided into three modules: (i) multi-scale morphology based domain partitioning to develop a pre-processor for multiscale modeling, (ii) enriched Voronoi Cell FEM for particle and matrix cracking leading to ductile fracture, and (iii) a macroscopic homogenization continuum damage model for ductile fracture. Finally, a robust framework for a two-way multi-scale analysis module is developed by coupling the different modules with inter-scale transfer operators and interfaces.

BEM/FEM Analysis of MEMS With Thin Features

BEM/FEM Analysis of MEMS With Thin Features

Abstract

Numerical analyses of a class of electrically actuated MEMS devices have been carried out for over fifteen years by using the BEM to model the exterior electric field and the FEM to model the deformation of the structure. In many MEMS applications, the structural elements are thin plates or beams. A modified BEM approach has been proposed recently that can handle these thin features efficiently without having to deal with nearly singular matrices. Use of a Lagrangian formulation for both the electrical and mechanical domains makes these calculations very efficient and accurate.
This talk will present an overview of these recent developments. In particular, two problems will be discussed in some detail. The first is concerned with quasi-static deformations of thin MEMS plates subjected to electrostatic forces. The second concerns free and forced vibrations of thin MEMS beams. Stokes damping from the surrounding fluid is also included in the beam problem.

Virtual Testing & Applications (Multiscale Modeling & Simulation)

Virtual Testing & Applications (Multiscale Modeling & Simulation)

Abstract

Fracture toughness and fatigue crack growth rate data are two key parameters which are necessary for conducting the safe life analysis of fracture critical parts used in space and aircraft structures. Currently, these allowables are obtained through the ASTM testing standards which
are costly and time consuming. On many occasions, due to budget limitations and deadlines set forth by the customer, it is not possible to conduct fracture related tests in time. A numerical approach has been developed by B. Farahmand that is based on the extended Griffith theory and can predict fracture allowables for a variety of alloys. The simplicity of the concept is based on the use of basic, and in most cases available, uniaxial full stress-strain data to derive material fracture toughness values. The fracture toughness value is thickness dependent and is used to predict region III of the fatigue crack growth rate curve. Regions I & II of the da/dN versus ∆K curve can be estimated separately and connected to region III to establish the total fatigue crack growth rate data. As a result of this work, two computer codes, fracture toughness determination (FTD) and fatigue crack growth (FCG), were generated under the NASA contract. Results of fracture toughness and fatigue crack growth rate data calculated by this approach were compared with numerous test data in the NASGRO database. Excellent agreement between analyses and test data was found, which validates the FTD and FCG methodology. This novel approach is referred to as the virtual testing technique. It enables engineers to generate fracture allowables analytically by eliminating unnecessary tests. There is yet another innovative approach in the virtual testing arena that relies on the multiscale modeling and simulation technique. This technique is becoming popular in the field of computational materials, where the failure mechanism is described with a bottom-up approach. The methodology is based on the ab initio concept, where it is assumed that the failure of material will initiate at the atomistic level and grow into a visible crack. Under this condition, nanocracks will grow under the applied load (as a result of weak interfaces) and advance toward micro and macro size, which thereafter will cause total structural failure.
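
The FTD and FCG codes are not reproduced here; as a generic illustration of the region II segment of the da/dN versus ∆K curve mentioned above, the sketch below evaluates the standard Paris law with hypothetical constants.

```python
import numpy as np

# Region II of the da/dN versus ΔK curve is commonly described by the Paris
# law, da/dN = C * (ΔK)^m.  The constants below are hypothetical, purely to
# show how a region II segment would be evaluated before being joined to the
# threshold (region I) and fracture-toughness-controlled (region III) parts.
C = 1.0e-11      # m/cycle per (MPa*sqrt(m))^m   (hypothetical)
m = 3.0          # Paris exponent                 (hypothetical)

delta_K = np.linspace(5.0, 30.0, 6)   # stress intensity range, MPa*sqrt(m)
dadN = C * delta_K ** m               # crack growth rate, m/cycle

for dk, rate in zip(delta_K, dadN):
    print(f"ΔK = {dk:5.1f} MPa·√m  ->  da/dN = {rate:.2e} m/cycle")
```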

Computing for the Future of the Planet

Computing for the Future of the Planet

Abstract:

Digital technology is becoming an indispensable and crucial component of our lives, society,
and environment. A framework for computing in the context of problems facing the planet
will be presented. The framework has a number of goals: an optimal digital infrastructure,
sensing and optimising with a global world model, reliably predicting and reacting to our
environment, and digital alternatives to physical activities.

OPTIMAL MODEL SIMPLIFICATION OF QUASIRATIONAL AND RETARDED DISTRIBUTED SYSTEMS AND THEIR USE FOR RAPID DESIGN OF SIMPLE CONTROLLERS

OPTIMAL MODEL SIMPLIFICATION OF QUASIRATIONAL AND RETARDED DISTRIBUTED SYSTEMS AND THEIR USE FOR RAPID DESIGN OF SIMPLE CONTROLLERS

The optimal model simplification technique, which was originally proposed for the simplification of systems having simple delays in the numerator transfer functions, has been extended to systems having several numerator delays, such as quasirational and related systems. This extension also takes care of situations when the model has the additional complexity of possessing integral, double integral, or higher type factors, provided their step responses are bounded. The design of simple feedback controllers for such systems has engaged the attention of researchers for some time. In this work, it is shown that by using the internal model control method, the optimal simplified models can be used to parametrize simple feedback controllers for these systems. This procedure facilitates the deployment of a very effective and powerful controller design technique for quasirational distributed and related systems. It is further demonstrated that, contrary to previous works, proportional plus integral controllers can be used to effectively control these systems.
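
The extension to quasirational models is not reproduced here; as a generic illustration of IMC-parametrized PI design, the sketch below applies the standard IMC-based PI settings to a hypothetical first-order-plus-dead-time simplified model.

```python
def imc_pi_tuning(K, tau, theta, lam):
    """Standard IMC-based PI settings for a first-order-plus-dead-time model
    G(s) = K * exp(-theta*s) / (tau*s + 1), with IMC filter time constant lam:
        Kc = tau / (K * (lam + theta)),   Ti = tau.
    This is shown only as a generic illustration of IMC-parametrized PI
    design; the paper's treatment of quasirational models is not reproduced."""
    Kc = tau / (K * (lam + theta))
    Ti = tau
    return Kc, Ti

# Hypothetical simplified model: gain 2, time constant 10 s, delay 3 s,
# with the filter time constant chosen equal to the delay.
Kc, Ti = imc_pi_tuning(K=2.0, tau=10.0, theta=3.0, lam=3.0)
print(f"Kc = {Kc:.3f}, Ti = {Ti:.1f} s")   # Kc = 0.833, Ti = 10.0 s
```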

Thursday, March 5, 2009

Quantum Algorithms and Basis Changes

Quantum Algorithms and Basis Changes

ABSTRACT:

The state of an n-bit quantum computer is described by a unit vector in a 2^n-dimensional complex vector space. This means that transformations are possible, such as a square root of NOT or a Fourier transform of the amplitudes of a state, that would not even make sense for classical probability distributions. Some of these transformations, like the quantum Fourier transform, allow for exponential speedups over classical computation. In my talk, I'll review what these transformations mean and what can be accomplished with them. Then I'll talk about work I've done on efficiently implementing an operation known as the Schur transform, which is based on a quantum analogue of the type classes used in classical information theory.
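
As a small illustration of such a transform (the quantum Fourier transform, not the Schur transform), the sketch below builds the QFT as a unitary matrix acting on the amplitude vector of a 2-qubit state.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary matrix of the quantum Fourier transform on n qubits: entry
    (j, k) is omega^(j*k) / sqrt(N), with N = 2**n and omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    rows, cols = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * rows * cols / N) / np.sqrt(N)

# A 2-qubit state with all amplitude on |00> is mapped to the uniform
# superposition, illustrating a transform on amplitudes that has no
# counterpart for classical probability distributions.
F = qft_matrix(2)
state = np.zeros(4, dtype=complex)
state[0] = 1.0
print(np.round(F @ state, 3))                   # all four amplitudes become 0.5
print(np.allclose(F.conj().T @ F, np.eye(4)))   # unitarity check: True
```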

Speed-Accuracy Tradeoffs in Collective Decision Making

Speed-Accuracy Tradeoffs in Collective Decision Making

ABSTRACT:

Information flows in ever greater quantities on dynamically changing networks. There is thus an increasing need to understand how to put computing power where it is most needed. Most traditional centrally controlled scheduling algorithms are not designed for such tasks, and they should be replaced by more decentralized systems. Ant colonies provide a prime example of a decentralized decision making system and a potential source of inspiration for distributed computing environments. Through the course of evolution, ants have acquired sophisticated mechanisms that use only local information, which they integrate into collective decisions. In this talk I will combine simple models with experimental data to illustrate how ants achieve tradeoffs between speed and accuracy in ant colony emigrations. In particular, I will focus on the question of which mechanisms are most important for maximizing one or the other.

Breaking the Pixel: Component-Based Rendering Systems

Breaking the Pixel: Component-Based Rendering Systems

ABSTRACT:

Because ray tracing has complexity linear in the number of pixels rendered or rays shot, traditional rendering systems attempt to reduce the number of primary rays shot. [...] By considering that at each intersection point the reflectance function of the medium is calculated by shooting rays to simulate different properties of the material, we can use the individual rays from one intersection point to the next as a finer level of granularity. In this talk we investigate this approach, termed component-based rendering, and highlight the advantages of adopting a finer level of granularity by removing the recursion when required. We present a framework for component-based rendering which allows the user to control the desired transport equation using a regular expression. This technique is further extended to progressive rendering, time-constrained rendering and perceptually-based rendering.
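
As a minimal, hypothetical illustration of controlling the transport equation with a regular expression, the sketch below filters light-transport path strings written in Heckbert's L(S|D)*E notation; the pattern and paths are made-up examples, not the framework's actual interface.

```python
import re

# Heckbert-style path grammar: L = light, S = specular bounce, D = diffuse
# bounce, E = eye.  The pattern below keeps only caustic-like paths
# (light -> one or more specular bounces -> diffuse -> eye).
caustics_only = re.compile(r"^LS+DE$")

for path in ["LDE", "LSDE", "LSSDE", "LSE"]:
    keep = bool(caustics_only.match(path))
    print(f"{path:6s} {'rendered' if keep else 'skipped'}")
# LDE skipped, LSDE rendered, LSSDE rendered, LSE skipped
```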

Automating Aspects of Cryptographic Implementation: A Cryptography-Aware Language and Compiler

Automating Aspects of Cryptographic Implementation: A Cryptography-Aware Language and Compiler

ABSTRACT:

History has shown that programmers do bad mathematics and mathematicians write bad programs. This isn't a good situation if we need to write mathematically oriented cryptographic software. It is even worse if this software runs on your credit card, since it needs to be secure as well as efficient and functionally correct. As a step toward resolving this problem, we present a language and compiler which allow novel cryptography-aware analysis and optimisation phases.

Quantum Walks: Definition and Applications

Quantum Walks: Definition and Applications

ABSTRACT:

Random walks on graphs are an important tool in computer science. A recently-developed quantum mechanical version of random walks has the potential to become equally important in the study of quantum computation. This talk will provide an introduction to the field of quantum walks, and will be divided into two parts. The first part will explain the concepts behind quantum walks, how they differ from classical random walks, and how a quantum walk on a given graph can be produced. Unlike classical random walks, not every directed graph admits the definition of a quantum walk that respects the structure of the graph. The second part of the talk describes several applications of quantum walks. A number of quantum walk algorithms have been developed that outperform their classical counterparts: I will describe quantum algorithms for network routing, unstructured search, and element distinctness.
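
As a small, self-contained illustration (not one of the algorithms mentioned above), the sketch below simulates a discrete-time coined quantum walk on a line with a Hadamard coin and compares its spread with a classical random walk.

```python
import numpy as np

def quantum_walk_distribution(steps):
    """Discrete-time coined quantum walk on a line with a Hadamard coin.
    Amplitudes are stored as a (2, positions) array: coin 0 steps left,
    coin 1 steps right.  Returns the position probability distribution."""
    size = 2 * steps + 1                     # positions -steps .. +steps
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    amp = np.zeros((2, size), dtype=complex)
    amp[:, steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric start
    for _ in range(steps):
        amp = H @ amp                        # apply the coin
        amp[0] = np.roll(amp[0], -1)         # coin 0 moves left
        amp[1] = np.roll(amp[1], +1)         # coin 1 moves right
    return np.sum(np.abs(amp) ** 2, axis=0)

def classical_walk_distribution(steps):
    """Classical symmetric random walk on the same line, for comparison."""
    size = 2 * steps + 1
    p = np.zeros(size)
    p[steps] = 1.0
    for _ in range(steps):
        p = 0.5 * np.roll(p, -1) + 0.5 * np.roll(p, +1)
    return p

steps = 20
positions = np.arange(-steps, steps + 1)
for name, dist in [("quantum", quantum_walk_distribution(steps)),
                   ("classical", classical_walk_distribution(steps))]:
    spread = np.sqrt(np.sum(dist * positions ** 2))   # root-mean-square position
    print(f"{name:9s}: total prob {dist.sum():.3f}, rms spread {spread:.2f}")
# The quantum walk spreads linearly with the number of steps (ballistically),
# while the classical walk spreads only as the square root of the steps.
```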

Pervasive Computing: A SoC Challenge

Pervasive Computing: A SoC Challenge

ABSTRACT:

Although each sensor network may be composed of hundreds or even thousands of identical sensors, these volumes are not sufficient to justify the design of custom integrated circuits. Instead, we need a `pervasive computing platform' - a reconfigurable system that can be quickly adapted to a specific application. In addition to these design challenges, such a system must be capable of self-diagnosis and even limited self-repair in order to maintain the integrity of the network. In this talk, the design challenges for a System on Chip pervasive computing platform will be presented, together with some applications that bring together the various research skills at the University of Southampton.

Higher-Order Bayesian Networks: A Probabilistic Reasoning Framework for Structured Data Representations Based on Higher-Order Logics

Higher-Order Bayesian Networks: A Probabilistic Reasoning Framework for Structured Data Representations Based on Higher-Order Logics

ABSTRACT:

Bayesian Networks (BNs) are a popular formalism for performing probabilistic inference. The propositional nature of BNs restricts their application to data which can be represented as tuples of fixed length, excluding a vast field of problems which deal with multi-relational data. Basic Terms, recently introduced by John W. Lloyd, are a family of terms within a typed higher-order logic framework which are particularly suitable for representing structured individuals such as tuples, lists, trees, graphs, sets etc. I will present a proposed extension of BNs, Higher-Order Bayesian Networks, which define probability distributions over domains of Basic Terms. We can perform sampling from these distributions, and use that to calculate the answer to probabilistic inference queries. We have also developed a method for model learning given a database of observations. Finally, I will show how we have applied learning and inference on real-world classification problems.

Towards Robust Camera Tracking in Cluttered Visual Environments

Towards Robust Camera Tracking in Cluttered Visual Environments

ABSTRACT:

It has long been of interest in computer vision and more recently in the robotics community to estimate camera motion from a sequence of images (structure from motion / simultaneous localisation and mapping). Traditionally, this requires tracking visual features such as corners or lines between at least two frames and possibly over the duration of the video sequence. We present a novel particle-filter based algorithm for estimating camera motion which combines feature tracking and motion estimation within the same framework.
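
The camera-motion formulation itself is not given in the abstract; the sketch below is only a generic, one-dimensional bootstrap particle filter with hypothetical motion and measurement models, showing the predict/update/resample cycle that such a tracker builds on.

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise, meas_noise, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.
    The 1-D constant-velocity motion model and Gaussian measurement model
    below are placeholders; a camera tracker would use a 6-DOF pose state
    and an image-feature likelihood instead."""
    # Predict: propagate each particle through the motion model plus noise.
    particles = particles + control + rng.normal(0.0, motion_noise, len(particles))
    # Update: weight particles by the likelihood of the new measurement.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw a new particle set in proportion to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, 500)          # initial belief about position
weights = np.full(500, 1.0 / 500)
true_pos = 0.0
for _ in range(10):                            # the camera moves +1 unit per frame
    true_pos += 1.0
    measurement = true_pos + rng.normal(0.0, 0.3)
    particles, weights = particle_filter_step(
        particles, weights, control=1.0, measurement=measurement,
        motion_noise=0.2, meas_noise=0.3, rng=rng)
print(f"true position {true_pos:.2f}, estimate {particles.mean():.2f}")
```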