Title: A home-based rehabilitation system for deficient knee patients
Year: 2015
Advisor: Krovi, Venkat N.
Degree: Ph.D.
Abstract: The Smart Health paradigm has opened up immense possibilities for designing cyber-physical systems with integrated sensing and analysis for data-driven healthcare decision-making. Clinical motor rehabilitation has traditionally entailed labor-intensive approaches with limited quantitative methods and numerous deployment-logistics challenges. We believe such labor-intensive rehabilitation procedures offer a fertile application field for robotics and automation technologies that are readily applicable to home-based rehabilitation systems. Our long-term goal is the creation, analysis, and validation of a home-based rehabilitation framework comprising quantitative human-subject measurement technologies and an adjustable smart brace coupled with an integrated PC-based control system, to enhance the rehabilitation process for deficient-knee patients. Human motion-capture and computational analysis tools have played a significant role in a variety of product-design and ergonomics settings for over a quarter-century. However, significant differences exist in the capabilities and ease of use of these tools; we therefore perform a comparative analysis of motion data from two alternate human motion-capture systems (high-resolution Vicon vs. low-resolution Kinect). In addition to a traditional resolution/accuracy study, data from multiple trials of motions were captured and examined to verify motion-capture fidelity and the role of pre- and post-processing (calibration and estimation). In our work, we adapt Principal Component Analysis (PCA) approaches and the K-Nearest Neighbors (K-NN) method for subject classification. Knee bracing has been used to realize a variety of functional outcomes in both sport and rehabilitation applications.
Traditionally, the design of exoskeletons (from choice of configuration to selection of parameters), as well as the process of fitting the exoskeleton to the individual user/patient, has largely depended on the intuition and/or practical experience of a designer/physiotherapist. However, improper exoskeleton design and/or incorrect fitting can cause a buildup of significant residual forces/torques (both at the joint and at the fixation site). Performance can be further compromised by the innate complexity of human motions and the need to accommodate immense individual variability (in terms of patient anthropometrics, motion envelopes, and musculoskeletal strength). In our work, we propose a systematic and quantitative methodology to evaluate alternate exoskeleton designs using kinetostatic design optimization and twist-/wrench-based modeling and analysis. This process is applied in the context of a case study on developing an optimal configuration and fixation of a knee brace/exoskeleton. An optimized knee brace is prototyped using 3D printing and physically tested. Recent research on exoskeletons has examined ways of improving flexibility and wearability as well as reducing overall weight. Very few exoskeletal systems, however, have succeeded in satisfying all these criteria, due to the complexities involved in human joint motions and loading. Compliant mechanisms offer a class of articulated multibody systems that allow relatively stiff but lightweight solutions for exoskeletons/braces. In our study, we introduce and evaluate the Parallel Coupled Compliant Plate (PCCP) mechanism and Pennate Elastic Band (PEB) spring architecture. The PCCP/PEB system provides the user with both flexibility and extreme stiffness, depending on the posture/angle of the knee joint. The performance of the PCCP/PEB system was verified with a 3D-printed physical exoskeleton prototype.
The overall human-subject measurement and adjustable smart-brace controller are integrated within a MATLAB-based acquisition, analysis, and control framework. Motion measured by low-cost devices (Kinect and Wii Balance Board) is used to calculate the load at the knee joint; the smart knee brace then automatically adjusts its parameters to control that load, based on the prescription of a therapist or doctor.
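The subject-classification step named in this abstract (PCA followed by K-NN) can be sketched as follows. This is an illustrative NumPy sketch, not the dissertation's code; the feature layout and neighbor count are assumptions.

```python
import numpy as np

def pca_fit(X, n_components):
    """Principal directions of motion-capture feature vectors (rows of X)."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project samples onto the retained principal components."""
    return (X - mean) @ components.T

def knn_predict(X_train, y_train, x, k=3):
    """Classify one sample by majority vote of its k nearest neighbors."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```

Here each row of X would be a per-trial feature vector extracted from Vicon or Kinect joint trajectories, with subjects as the class labels.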
Full text: PDF at ProQuest
ISBN: 9781321921779
Title: Design, development and applications of a framework for autonomous vehicle operations
Year: 2015
Advisor: Majji, Manoranjan
Degree: Ph.D.
Abstract: The main objective of the current dissertation is to develop a "Plug and Play" autopilot. We present a systematic approach to decouple controller and filter design from hardware selection. This approach also addresses the performance decay due to hardware limitations. The outcome of this work is an integrated environment to develop, validate, and test algorithms to be used on flight controllers. The dynamic model of an off-the-shelf radio-controlled airplane is derived. A controller is developed for the aircraft and implemented within the proposed framework. The framework and the controller are validated using hardware-in-the-loop simulation. A physical experiment with an autonomous ground vehicle is performed to test the autopilot and algorithms before test flights are performed. Finally, a test flight is performed using a Great Planes Funster. A novel microsatellite attitude controller is presented. This controller is also developed and implemented using the approach presented in this dissertation. A satellite-attitude hybrid simulator is presented and its design is discussed. An experimental apparatus to test the controller in a microgravity environment is constructed and discussed. Finally, a set of experiments that demonstrate how the framework can be integrated into commercially available autopilots and vehicles is presented. First, an off-the-shelf quad-rotor with an integrated look-down camera is used to perform visual navigation and landing. Second, a rover and a quad-rotor are used in a cooperative scheme: the rover follows a route and the quad-rotor escorts it. The third experiment deploys a popular autopilot on a surveillance mission. For this mission the airplane is equipped not only with the autopilot and radio link, but also with a video system consisting of a camera and a radio link.
These experimental results demonstrate the utility of the proposed framework in enhancing the capabilities of off-the-shelf autopilots and vehicles while simultaneously simplifying mission preparation and execution.
Full text: PDF at ProQuest
ISBN: 9781321921151
Title: Commanding a robot through action grammar using non-invasive brain computer interface
Year: 2015
Advisor: Kesavadas, Thenkurussi
Degree: Ph.D.
Abstract: The advent of brain computer interface (BCI) technology is leading to methodologies for interfacing robots with the human brain. This work explores the possibility of using a low-cost non-invasive BCI based on electroencephalogram (EEG) to control robots according to a person's intentions. Conventional methods use classifiers to identify intentions from the EEG collected from a person and then discretely control each individual action of the robot, which makes complex tasks tedious and requires both a large set of identifiable intentions and a very accurate classifier. In this work, fundamental actions called "actemes" are instead combined by the robot itself to perform the complex task. The combination is based on a set of rules called a "grammar": a set of allowable and unallowable combinations of motions. Automatic construction of the task also requires that the robot be aware of its workspace to avoid collisions with objects. This involves knowing its own internal states (proprioception) as well as the objects and other features of the working environment (spatial cognition). The ability to construct the task is called task awareness, and the ability to perceive the workspace is called spatial awareness. This mimics the local intelligence of a human hand, allowing the robot to act as an extended arm of the body. The decision to perform an acteme based on the grammar is equivalent to finding a control policy at a state of the system (task) in a Markov decision process (MDP). Thus the action grammar is modelled as a stationary MDP. Proprioception is carried out by estimating the states (joint and workspace kinematics) of the robot using an unscented Kalman filter/particle filter. The objects in the workspace are identified using computer vision techniques.
Since an articulated robot can take amorphous shapes in the workspace, the objects are represented with respect to the robot in a self-map: a normalized map that represents various metrics of the objects with respect to a robot of fixed shape. The self-map converts sensory information about nearby objects into the reward function of a non-stationary MDP. The resulting task- and spatially-aware robot is easy to control using BCI: the user's effort is reduced to the intention of initiating a task and of proceeding at each stage of the task. A neural network classifier is trained using data collected from the subject while watching the robot perform the fundamental actions. The features used are 36 Hjorth and 1440 autoregression parameters. The classifier is used to identify intentions from the EEG. Two experiments were conducted: in the first, the robot is commanded to perform a screw-insertion task; in the second, a door-opening problem, the robot must perform actions that lead to opening a door based on the user's intentions. The subject was able to command the robot and accomplish the tasks more easily than with conventional methods.
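The action-grammar decision process described above is a Markov decision process; a minimal value-iteration sketch over hypothetical actemes ("reach", "grasp", "insert" are invented placeholders, not the dissertation's action set) illustrates how a grammar of allowed (state, acteme) pairs yields a control policy.

```python
import numpy as np

def value_iteration(n_states, actions, P, R, gamma=0.9, tol=1e-8):
    """Optimal policy for a deterministic MDP over actemes.

    P[(s, a)] is the next state and R[(s, a)] the reward; (state, acteme)
    pairs absent from P are the combinations the grammar disallows."""
    V = np.zeros(n_states)
    while True:
        V_new = np.zeros(n_states)
        for s in range(n_states):
            q = [R[(s, a)] + gamma * V[P[(s, a)]] for a in actions if (s, a) in P]
            V_new[s] = max(q) if q else 0.0
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    # greedy policy with respect to the converged value function
    policy = {}
    for s in range(n_states):
        allowed = [a for a in actions if (s, a) in P]
        policy[s] = (max(allowed, key=lambda a: R[(s, a)] + gamma * V[P[(s, a)]])
                     if allowed else None)
    return V, policy
```

The user's BCI intention then only needs to trigger the task and confirm progress; the policy chooses each acteme.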
Full text: PDF at ProQuest
ISBN: 9781321920925
Title: Carbon-related materials for electrochemical and high-temperature structural applications
Year: 2015
Advisor: Chung, Deborah D.L.
Degree: Ph.D.
Abstract: The processing, structure, and properties of carbon-related materials for structural and electrochemical applications have been addressed. In relation to the structural materials, a new material, namely a nanostructured ceramic-carbon hybrid prepared by hot-pressing organobentonite particles, is provided. In addition, carbon-carbon (C/C) composites that have been improved by filler incorporation, with the fillers including organobentonite and fumed alumina, are provided. In relation to the electrochemical electrode materials, which include various types of particulate carbons, a new method of electrical characterization of such materials in the absence of an electric double layer is provided, thereby enabling, for these materials, the first determination of (i) the relative dielectric constant, (ii) the effect of the electrolyte on the relative dielectric constant and the volumetric electrical resistivity, (iii) the specific capacitance and areal electrical resistivity of the interface between the electrode and its electrical contact, and (iv) the specific capacitance and areal resistivity of the interface between the electrode and the electrolyte. The need for densification (and thereby the fabrication cost) has been reduced by the incorporation of a particulate filler (fumed alumina or organoclay) during C/C fabrication. Fumed alumina is in the form of aggregates of nanosize alumina particles. Due to this structure, it is highly deformable (squishable). The squishability enables conformability, which makes the filler attractive for filling the space between the carbon fibers in C/C. Partly due to the presence of the organic component in organoclay, it is possible to use the organoclay both as a binder and as a reinforcing filler in C/C.
Also partly due to the organic component in organoclay, it is possible to consolidate organoclay particles by the application of heat and pressure, thereby forming a monolith in the absence of a binder and providing a new low-cost, high-temperature structural material. All prior studies of electrochemical devices have focused on the characterization of the behavior of the electrochemical cell, without decoupling the contributions to the cell performance of the various components in the cell. In contrast, a method for characterizing the dielectric and conduction behavior of electrode materials is provided in this dissertation. The impacts of this dissertation pertain to the following. A new class of high-temperature structural material in the form of a nanostructured ceramic-carbon hybrid is provided. In addition, improved carbon-carbon composites are provided through filler incorporation, with the improvement pertaining to the oxidation resistance and the mechanical properties, and with the consequence that the need for expensive densification is reduced. The science of electrochemical electrodes has been advanced by providing a method for characterizing the dielectric and conduction behavior of the electrode materials.
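The electrode characterization described above reports the relative dielectric constant, areal resistivity, and specific capacitance; the standard parallel-plate relations these quantities rest on can be sketched as follows. This is an illustrative sketch only; the dissertation's contribution is the measurement method itself, which is not reproduced here.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_dielectric_constant(capacitance, thickness, area):
    """Parallel-plate estimate: eps_r = C * t / (eps0 * A),
    with C in farads, thickness t in meters, electrode area A in m^2."""
    return capacitance * thickness / (EPS0 * area)

def areal_resistivity(resistance, area):
    """Interface areal resistivity (ohm * m^2) from measured R and contact area."""
    return resistance * area

def specific_capacitance(capacitance, area):
    """Interface specific capacitance (F / m^2)."""
    return capacitance / area
```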
Full text: PDF at ProQuest
ISBN: 9781321570496
Title: Minimum-time optimal output transitions using pre- and post-actuated inputs: Impact of zeros on the structure of the optimal control profile
Year: 2015
Advisor: Singh, Tarunraj
Degree: Ph.D.
Abstract: A great deal of focus is now placed on reliable approaches to feedforward control for precise positioning of flexible structures, such as hard disk drives, gantry cranes, wafer scanners, large-scale manipulators, and flexible spacecraft. This includes shaping the reference input for a stable (i.e., feedback-stabilized) system, or an open-loop control such as time-optimal control that provides the nominal trajectories followed by a perturbation feedback controller. Methods are now available for robust control, both to desensitize the reference shaper to model-parameter uncertainties and to minimize excitation of unmodeled modes through the inclusion of control constraints that minimize jerk or smooth out trajectories. Precise positioning is achieved through minimization of residual vibrations with the use of input shapers/time-delay filters. Until recently, the majority of these control problems have been posed in the framework of a state-to-state transition (SST) problem, with application to rest-to-rest maneuvers where a stable or asymptotically stable system transitions from one set of equilibrium points to another. Optimal trajectories have been determined without direct consideration of the system zeros in the frequency-domain derivation of the minimum-time control. In systems whose transfer functions are characterized by zeros, the output is a function of multiple states, and one can pose the control design as an output-to-output transition problem. To address remaining gaps in the derivation of the time-optimal control, a new approach is proposed that allows the system zeros to impact the optimal control profile in the formulation of a minimum-time Optimal Output Transition (OOT) problem using pre- and post-actuated inputs.
The result greatly simplifies the derivation of the control, provides a clearer interpretation of its implementation on the true physical system, and provides a means of deriving solutions robust to inherent system uncertainties. In addition, the results clearly demonstrate the significant decrease in transition time compared to the traditional SST solution. The presented techniques derive a frequency-domain (input-shaper/time-delay filter) approach to time-optimal control using pre- and post-actuated inputs for stable or asymptotically stable systems. Derivation of the control begins in the time domain, where optimality is proven and closed-form solutions for the system inputs are determined, allowing accurate parameterization in the frequency domain. It is shown that the time-optimal output transition for a system with minimum-phase zeros (left-half-plane zeros) is achieved through the use of post-actuation, and that the optimal post-actuated control is equivalent to placing a pole of the input shaper at the location of the minimum-phase zeros. Additionally, for systems with nonminimum-phase zeros (right-half-plane zeros), the time-optimal pre-actuation control is achieved by canceling the nonminimum-phase system zeros with the poles of the input shaper. Robustness can be achieved by placing multiple zeros at the nominal locations of the system poles and, when the support of the uncertain poles is known, a minimax optimization problem can be formulated to minimize the worst-case performance of the controller over the domain of uncertainty. The generalized approach to the minimum-time OOT problem with post-actuation is then derived as a Linear Programming (LP) problem for straightforward solution in a convex framework, and robustness is added through the inclusion of sensitivity states of the system (i.e., placement of multiple zeros at the nominal locations of the system poles).
Finally, a convex minimax problem is derived, which minimizes the maximum L2-norm tracking error during post-actuation and determines the robust minimum-time OOT solution over a domain of uncertain system models.
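The pole-zero placement idea described above is closely related to classical input shaping. As a hedged illustration, a two-impulse zero-vibration (ZV) shaper places the shaper's zeros at an underdamped mode's poles so that the shaped command leaves no residual vibration of the nominal mode; the numeric values in the sketch are illustrative, not drawn from the dissertation.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration shaper for an underdamped mode (wn, zeta).
    The shaper zeros sit on the system's flexible poles, so convolving a
    command with these impulses cancels the nominal residual vibration."""
    wd = wn * np.sqrt(1.0 - zeta**2)          # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)     # impulse amplitudes (sum to 1)
    times = np.array([0.0, np.pi / wd])       # second impulse half a damped period later
    return amps, times

def residual_vibration(amps, times, wn, zeta):
    """Normalized residual-vibration amplitude left by an impulse sequence."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    C = sum(a * np.exp(zeta * wn * t) * np.cos(wd * t) for a, t in zip(amps, times))
    S = sum(a * np.exp(zeta * wn * t) * np.sin(wd * t) for a, t in zip(amps, times))
    return np.exp(-zeta * wn * times[-1]) * np.hypot(C, S)
```

Placing repeated shaper zeros at the nominal pole location (a ZVD shaper) is the robustness device the abstract refers to.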
Full text: PDF at ProQuest
ISBN: 9781321569506
Title: Modeling the electrohydrodynamics of three-dimensional vesicles
Year: 2015
Advisor: Salac, David
Degree:
Abstract: In this work a numerical method is presented to model the electrohydrodynamics of a three-dimensional vesicle. The objective of this study is to develop robust numerical algorithms to solve the physical governing equations of the vesicle in the presence of fluid flow and DC electric fields. Furthermore, the model is able to predict the fast dynamics of a vesicle exposed to strong fields for a wide range of material properties and deformations that cannot be easily captured in experimental settings. The vesicle membrane is modeled as an infinitesimally thin capacitive interface. The electric field calculations explicitly take the capacitive interface into account via an implicit Immersed Interface Method formulation, which computes the electric potential field and the trans-membrane potential simultaneously. The interface is tracked through the use of a semi-implicit, gradient-augmented level set method. The enclosed volume and surface area are conserved both locally and globally by a new Navier-Stokes projection method. The validation of the hydrodynamic model was examined in light of experimental data and observations. The two major modes of vesicle motion in linear shear flow, namely the tank-treading and tumbling regimes, were studied. Simulation results show very good agreement between the present results and the experimental data. The electrohydrodynamic results also match well with previously published experimental, analytic, and two-dimensional computational works, and the model is capable of capturing the type of topological changes previously observed in experiments. A parameter study of the important material properties is carried out for the transition between oblate and prolate ellipsoidal shapes in order to estimate the critical parameter thresholds for this transition to occur.
In addition, investigation of vesicle behavior under the combined effects of shear flow and weak DC electric fields reveals the remarkable influence of the electric field in changing the standard behaviors of tank-treading and tumbling vesicles. If the electric field is strong enough, the induced resistance it causes may alter the behavior of a tumbling vesicle into a tank-treading motion.
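The dissertation tracks the membrane with a semi-implicit, gradient-augmented level set method; that scheme is beyond a short sketch, but the basic level-set idea it builds on (an implicit function advected so its zero contour follows the interface) can be illustrated with a much simpler first-order upwind scheme in one dimension.

```python
import numpy as np

def advect_level_set(phi, vel, dx, dt, steps):
    """First-order upwind advection of a level-set function phi on a 1-D
    periodic grid; the zero contour of phi tracks the transported interface.
    (Illustrative only: the dissertation's scheme is semi-implicit,
    gradient-augmented, and three-dimensional.)"""
    phi = phi.copy()
    for _ in range(steps):
        # one-sided differences, chosen by the local velocity sign (upwinding)
        dphi_minus = (phi - np.roll(phi, 1)) / dx
        dphi_plus = (np.roll(phi, -1) - phi) / dx
        dphi = np.where(vel > 0, dphi_minus, dphi_plus)
        phi -= dt * vel * dphi
    return phi
```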
Full text: PDF at ProQuest
ISBN: 9781321569674
Title: Size-Dependent Fluid Mechanics: Theory and Application
Year: 2015
Advisor: Dargush, Gary F.
Degree: Ph.D.
Abstract: Some physical experiments exhibit size-dependency in fluid flow at small scales. This necessitates the introduction of couple stresses in the corresponding continuum theory, which in turn requires the vorticity to be considered as an additional degree of freedom associated with the angular momentum equation. Subsequently, the formulation accounts not only for stretches of the fluid elements, but also for their bending deformations. The resulting size-dependent couple-stress fluid mechanics can then be used to explore flow behavior at small sizes, such as micro- and nano-scales, and also to bridge between atomistic and classical continuum theories. The work here concentrates on two-dimensional flow and examines the effects of couple stresses by developing and then applying a stream function-vorticity computational fluid dynamics formulation. Details are provided both on the governing equations for size-dependent flow and on the corresponding numerical implementation. Afterwards, the formulation is applied to the much-studied lid-driven cavity problem to investigate the behavior of the flow as a function of the length-scale parameter l. The investigation covers a range of Reynolds numbers, and includes an evaluation of the critical value beyond which a stationary response is no longer possible. Accounting for couple stresses yields unexpected sets of results for different boundary conditions, which in turn might explain different chaotic behaviors of fluid flow. Since problems of thermoviscous flows are of prime importance for many physical processes, the classical Boussinesq equations for the Rayleigh-Bénard convection problem are modified by including couple stresses, which account for size-dependency. Then the stability of natural convection in a square cavity is studied numerically and the onset of convective instability within a range of Rayleigh and Prandtl numbers is investigated.
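In a stream function-vorticity formulation such as the one described above, each step recovers the stream function from the vorticity by solving a Poisson equation. A minimal Jacobi-iteration sketch of that sub-step follows; it is illustrative only, and the dissertation's couple-stress formulation adds further terms not shown here.

```python
import numpy as np

def solve_stream_function(omega, dx, n_iter=5000):
    """Jacobi solution of the Poisson equation  lap(psi) = -omega  on a
    square cavity with psi = 0 on the walls (the stream-function half of
    a stream function-vorticity iteration)."""
    psi = np.zeros_like(omega)
    for _ in range(n_iter):
        # five-point Laplacian stencil, interior nodes only
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                  + psi[1:-1, 2:] + psi[1:-1, :-2]
                                  + dx**2 * omega[1:-1, 1:-1])
    return psi
```

In a full solver this is alternated with a vorticity-transport update; Jacobi is chosen here only for brevity.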
Full text: PDF at ProQuest
ISBN: 9781321569513
Title: Hemodynamics induced intracranial aneurysm initiation: Mechanism and model exploration
Year: 2015
Advisor: Meng, Hui, Kolega, John
Degree: Ph.D.
Abstract: Understanding the molecular mechanisms of intracranial aneurysms (IAs) is paramount to developing treatment strategies for preventing IA initiation and growth, and to understanding why some IAs are asymptomatic while others rupture with devastating consequences. Unique hemodynamics, specifically high wall shear stress (WSS) and positive wall shear stress gradient (WSSG), play a critical role in IA initiation, so the molecular mechanism downstream of hemodynamic mechanotransduction is an important gap in the understanding of IAs. A hemodynamics-induced rabbit IA model was used to determine whether high WSS and positive WSSG affect endothelial nitric oxide synthase (eNOS) expression/activity, and whether eNOS-derived nitric oxide (NO) or superoxide leads to smooth muscle cell (SMC) production of matrix metalloproteinases (MMPs) and IA initiation. Results showed that eNOS production of NO in the intima and superoxide production in the media independently led to IA initiation. Both eNOS and superoxide caused IA damage in part via SMC de-differentiation and MMP2/9 production. To expand the available tools for studying hemodynamics-driven IA mechanisms, the bilateral common carotid artery ligation model, established in the rabbit, was attempted in a novel rat model. While computational fluid dynamics successfully demonstrated increased WSS and WSSG after ligation in the rat, IAs failed to form in the rat BT, in contrast to the rabbit given the same treatment. Allometric scaling can potentially be used to compare biological findings across species, as well as to estimate biological values not otherwise obtainable. The feasibility of allometric scaling was assessed for estimating the WSS and WSSG thresholds for IA initiation in humans based on rabbit model threshold data and other available animal data. Scaling agreed well between rabbit and human, and an estimated threshold predicted that rats would not form IAs in the bilateral common carotid artery ligation model based on WSS and WSSG data.
In sum, this work both contributed to understanding the mechanism of hemodynamics-induced IA initiation and facilitated its study in the future.
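Allometric scaling as used above follows the standard power law Y = a·M^b in body mass M. A minimal sketch of fitting and extrapolating it, with invented numbers rather than the dissertation's data:

```python
import numpy as np

def fit_allometric(masses, values):
    """Least-squares fit of log(Y) = log(a) + b*log(M), returning (a, b)."""
    b, log_a = np.polyfit(np.log(masses), np.log(values), 1)
    return np.exp(log_a), b

def predict(a, b, mass):
    """Extrapolate the fitted power law to a new body mass."""
    return a * mass**b
```

In the dissertation's setting, the fitted law would be used to carry hemodynamic thresholds measured in rabbits toward human (or rat) body mass.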
Full text: PDF at ProQuest
ISBN: 9781321922097
Title: Towards cooperative manipulation using cable robots
Year: 2015
Advisor: Krovi, Venkat N.
Degree: Ph.D.
Abstract: Cable robots form a class of parallel-architecture robots with significant benefits, including simplicity of construction, low cost, large workspace, significant payload-to-weight ratio, and end-effector stiffness. In this work, we seek to extend the current capability of cable robots towards cooperative manipulation. Specifically, we first explore inclusion of mobility in the bases (in the form of gantries and/or vehicle bases), which can significantly further enhance the capabilities of cable robots. However, this also introduces redundancy and complexity into the system, which need to be carefully analyzed and resolved. We propose a generalized modeling framework for systematic design and analysis of cooperative mobile cable robots, building upon the knowledge base of multi-fingered grasping, and illustrate it with a case study of four cooperating gantry-mounted cable robots transporting a planar payload. We then explore modifications of the payload attachment as alternate means to simplify the design and enable practical deployment. We examine analysis of the system and develop a virtual cable-subsystem formulation (which also facilitates subsumption into the previously developed mobile cable robot analysis framework). We also seek improvement of the tension distribution by utilizing configuration-space redundancy to shape the tension nullspace. For true load sharing among multiple robots, we also explicitly investigate compliance in cable robots and the modulation of task-space stiffness of mobile cable robots. First, compliance is introduced via linear springs connected in series with non-extensible cables. The benefits of such series-elastic cables include tension control without force sensors and tension redistribution. We exploit the configuration redundancy in mobile cable robots to optimize a desired task-space stiffness criterion. Then we move on to variable stiffness modules instead of dealing with constant-stiffness springs.
Traditionally, most existing variable stiffness modules tend to be bulky by virtue of their use of solid components, making them less suitable for mobile applications. In recent times, pretensioned cable-based modules have been proposed to reduce weight. While passive, these modules depend on significant internal tension to provide the desired stiffness, and their stiffness-modulation capability tends to be limited. We present the design, analysis, and testing of a cable-based active variable stiffness module that can achieve a large stiffness range with decoupled tension. A one degree-of-freedom (DOF) rotational joint is set up using two of these modules to evaluate the capability. We then present a planar 2-DOF cable robot formed by three active variable stiffness modules. By controlling each module's stiffness, the overall Cartesian stiffness of the robot can be modulated. We show that this approach is more effective than increasing internal tension alone, and more efficient than varying configuration to achieve variable stiffness. Further, it can vary stiffness and internal tension independently, achieving the same Cartesian stiffness with much lower internal tension.
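The tension-nullspace shaping mentioned above can be illustrated with a standard cable-robot computation: solve the wrench balance A t = w for a minimum-norm tension vector, then shift it along the structure matrix's nullspace so every cable stays taut. This sketch assumes the nullspace direction can be oriented with all-positive components (true for the symmetric example below, but not in general).

```python
import numpy as np

def tension_distribution(A, wrench, t_min=1.0):
    """Cable tensions satisfying A @ t = wrench with every t_i >= t_min.

    A is the structure matrix (columns = unit cable directions). Cables
    can only pull, so the minimum-norm solution is shifted along a
    nullspace direction until all tensions clear t_min."""
    t_p = np.linalg.pinv(A) @ wrench               # particular (min-norm) solution
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    if rank == A.shape[1]:
        return t_p                                 # no redundancy to exploit
    n = Vt[rank]                                   # one nullspace direction
    if n.sum() < 0:
        n = -n                                     # orient toward increasing tension
    alpha = np.max((t_min - t_p) / n)              # smallest shift with t >= t_min
    return t_p + max(alpha, 0.0) * n
```

Because the shift lies in null(A), the payload wrench is unchanged while the tension profile is reshaped.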
Full text: PDF at ProQuest
ISBN: 9781321570656
Title: Fluid-Structure Interaction Simulations of Flows with Symmetry: From Fish Schooling to Rheology of Suspensions
Year: 2015
Advisor: Borazjani, Iman
Degree:
Abstract: A numerical framework is developed to model large symmetric fluid-structure interaction (FSI) systems. The computational framework is based on the curvilinear immersed boundary (CURVIB) method for FSI simulations. The fluid solver is extended by implementing periodic boundary conditions using parallel programming. For the structure solver, 3-D rotation of rigid objects is added using a quaternion-angular velocity formulation, which, unlike the conventional rotation matrix, does not accumulate drift. We also developed a special treatment for immersed bodies passing through the periodic boundaries. After demonstrating the accuracy and validity of the implemented modifications, the solvers were applied to simulate two important FSI problems: 1) schooling of fish-like swimmers; and 2) suspensions of rigid particles. For fish-like swimming, we have discovered, for the first time, leading-edge vortex reattachment as a physical mechanism that enhances the locomotor force on fish tails. Furthermore, to the best of our knowledge, our large-eddy simulations of fish schooling are the first 3-D numerical simulations of a school of fish at realistic Reynolds number. We found up to 20% higher swimming speed for fish schools relative to a single swimmer, while using similar energy. For suspension modeling, we have simulated suspensions of arbitrary complex-shaped particles and, for the first time, calculated all the components of particle stress considering inertia. We found that inertia increases the contribution of the other components of the particle stress, e.g., acceleration and Reynolds stresses, which are typically ignored relative to the stresslet. The contribution of these components is small at low Re, but they are on the order of 10% of the total particle stress for Re ~ O(10) for ellipsoids.
We also found that the complexity of particles, especially when particles are more slender, can remarkably increase the relative viscosity of a suspension compared to suspensions of simple particles. Moreover, normal stress differences are found to be significantly higher compared to Stokesian simulations of suspensions of axisymmetric particles.
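The drift-free quaternion-angular velocity rotation update mentioned for the structure solver can be sketched as follows; this is an illustrative sketch, not the dissertation's integrator.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def integrate_rotation(q, omega_body, dt):
    """Advance orientation one step via the exponential map of a constant
    body-frame angular velocity. Renormalizing keeps |q| = 1 exactly, so the
    orientation does not drift the way a repeatedly multiplied rotation
    matrix loses orthogonality."""
    theta = np.linalg.norm(omega_body) * dt        # rotation angle this step
    if theta < 1e-12:
        return q
    axis = omega_body / np.linalg.norm(omega_body)
    dq = np.concatenate([[np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis])
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)
```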
Full text: PDF at ProQuest
ISBN: 9781321921076
Title: Haptic Modeling of Trocar Insertion Procedure
Year: 2015
Advisor: Kesavadas, Thenkurussi
Degree: Ph.D.
Abstract: Laparoscopic surgery has become one of the most commonly performed forms of Minimally Invasive Surgery (MIS). Trocar insertion is the first step in any laparoscopic procedure. A majority of injuries during MIS are attributed to excessive use of force by the surgeon during trocar insertion [7]. It is a difficult procedure to learn and practice because it is carried out almost entirely without visual feedback of the organs underlying the tissues at risk of damage. Therefore, a training system with haptic feedback would be very beneficial. The challenges in developing such a system are many; characterizing accurate biomechanical properties of tissues from experimental data, integrating a proper tissue-deformation mechanism into the haptic feedback, and advanced visualization techniques are just a few of them. In this research, to extract a reliable relationship between force/torque and deformation of the abdominal wall in the thorax area, trocar insertion data (force/torque, time, displacement, etc.) were collected by inserting a specially instrumented trocar into a few specimens of pig tissue. The instrumentation included a force/torque sensor, Aurora sensors, and a 6-DOF haptic device. Using this experimental data set and accurate simulation of the experimental trocar insertion process, an optimization scheme is proposed to stochastically characterize the biomechanical properties of tissues based on non-linear, large-strain models. Commercially available high-level programming software, MATLAB, and finite element (FE) software, ABAQUS, are used for this purpose. The predicted material properties and deformation mechanisms have been cross-validated with a different set of experimental results. This provides sufficient confidence in reliably using the estimated material properties and deformation mechanism to further develop a virtual-reality system for the trocar insertion procedure with haptic feedback.
A new graphic deformation system based on Artificial Neural Networks (ANN) is proposed. Our ANN framework consists of two separate neural networks. The first ANN models the force (haptic) feedback of the trocar insertion procedure and synthesizes the appropriate reaction force, based on clinical data, through a haptic device. The second ANN models the mechanism of tissue deformation; we train it using FE-computed deformation data for real-time rendering of appropriate tissue deformations. The virtual training system is finally built on these two ANN models for tissue deformation and force feedback in real time. This novel method allows precise trocar insertion simulation based on prior offline FE analysis.
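The two-ANN idea above (one network per mapping, trained offline and evaluated in real time) can be illustrated with a minimal one-hidden-layer regression sketch. The architecture, sizes, and training data here are invented placeholders, not the dissertation's models.

```python
import numpy as np

def train_force_model(x, f, hidden=16, lr=0.05, epochs=2000, seed=0):
    """Fit a one-hidden-layer tanh network f_hat(x) to (displacement, force)
    samples by plain gradient descent on mean-squared error; returns a fast
    predict() closure suitable for real-time evaluation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (hidden, 1)); b1 = np.zeros((hidden, 1))
    W2 = rng.normal(0.0, 0.5, (1, hidden)); b2 = np.zeros((1, 1))
    X = x.reshape(1, -1); F = f.reshape(1, -1)
    n = X.shape[1]
    for _ in range(epochs):
        H = np.tanh(W1 @ X + b1)                 # hidden activations
        err = (W2 @ H + b2) - F                  # prediction error
        # backpropagation of the MSE gradient
        gW2 = err @ H.T / n; gb2 = err.mean(axis=1, keepdims=True)
        dH = (W2.T @ err) * (1.0 - H**2)
        gW1 = dH @ X.T / n; gb1 = dH.mean(axis=1, keepdims=True)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    def predict(xq):
        return (W2 @ np.tanh(W1 @ xq.reshape(1, -1) + b1) + b2).ravel()
    return predict
```

In the dissertation's framework, one such model would be trained on clinical force data and the other on FE-computed deformation fields; only the cheap forward passes run during simulation.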
Full text: PDF at ProQuest
ISBN: 9781321570274
Title: An Affordance-based Approach to Evaluating Consumer Variation
Year: 2014
Advisor: Lewis, Kemper E.
Degree: Ph.D.
Abstract: A design process exists to help designers transform an identified need into a solution. However, the solution required to satisfy the need will vary from consumer to consumer, as each consumer has a unique set of human factors, preferences, personal knowledge, and solution constraints. A design firm must therefore decide what variation will be addressed and how to address it. Currently, there is little work supporting the decision of how to address variety, resulting in ad hoc or a priori decisions, and little work that specifically investigates how to guide designers through the selection process. This research explores the information and tools needed to help designers capture, quantify, and leverage consumer commonality in order to address consumer variation. This facilitates the creation of a system or set of systems that meets the demands of both the consumer(s) and the organization. An affordance-based approach is leveraged because it maintains the relational field of view between the user, artifact, and resulting artifacts. Because the use of affordances is inconsistent in the literature, a formal affordance form was created to capture all required information, and a working affordance basis was created. Numerical Taxonomic Research from the biological classification domain is adapted to help designers understand consumer variation. Once consumer variation is understood, the design firm enters the conceptual design phase. When addressing consumer variation, designers are tasked with developing concepts for the different subproblems, but must also develop concepts that address the consumer variation itself. Traditional design approaches can be leveraged for both; however, to facilitate addressing consumer variation, a set of design heuristics was developed. This research is packaged within the early stages of a design process, both leveraging and complementing existing design research.
Full text: PDF at ProQuest
ISBN: 9781321262452
Title: Experimental studies on physical deterioration and electrical fatigue behavior in ferroelectric polymers
Year: 2014
Advisor: Fu, John Y.
Degree: Ph.D.
Abstract: Ferroelectric materials are widely used in various electronic applications because of their excellent electrical bi-stability and dielectric performance in response to an applied electric field. They have been utilized to make nonvolatile electronic memories, by exploiting their hysteretic behavior, and high-energy-density capacitors, owing to their high capability for electrical energy storage. One critical issue is that ferroelectrics are required to endure a large number of electrical cycles. A large body of scientific effort has been devoted to achieving high fatigue-failure resistance in ferroelectric-based electronic devices, yet fatigue failure of ferroelectric materials remains an unsolved problem. It is the objective of this work to explore the intrinsic origin of fatigue failure mechanisms. In this study, it was found that electric-field-induced stress relaxation in α-phase poly(vinylidene fluoride) (PVDF) films can be well described by the Kohlrausch function, also known as the stretched exponential relaxation function. The electric strength of the dielectric is strongly dependent on its elastic properties due to the electromechanical coupling effect. Our fitting result for the stretched exponent is in accordance with a Weibull cumulative distribution function. This indicates that the elastic properties of insulating polymers are crucial to the capability for electrical energy storage. In ferroelectric materials, the electromechanical coupling may be indicative of the microscopic origin of polarization fatigue. Further experiments focused on polarization fatigue in semi-crystalline poly(vinylidene fluoride trifluoroethylene) [P(VDF-TrFE)] copolymer films, whose ferroelectric response is superior to that of PVDF homopolymer films. The fatigue resistance of normal virgin P(VDF-TrFE) films was compared to that of P(VDF-TrFE) films modulated by a magnetic field. It was shown that normal P(VDF-TrFE) films exhibit a higher fatigue resistance. 
The artificially introduced lattice reorientation in magnetic-field-modulated P(VDF-TrFE) films appears to be closely related to the fatigue resistance. Under an ac electric field, the corresponding microstructures may also influence the electrically induced lattice defects. Polarization fatigue data in P(VDF-TrFE) films were also analyzed by a dynamic Coffin-Manson law, wherein the corresponding coefficients and the exponent of the function can be estimated via different Weibull distribution functions. The smallest scale found to be significant in electrical fatigue is that of irreversible atomic movements. Studies on electrical failure behavior were also performed on P(VDF-TrFE) copolymer films. Experimental results consistently show that the measured electric polarization near the breakdown limit, with respect to the failure life cycles, obeys the Coffin-Manson law, which is most widely used to describe mechanical fatigue failure behavior. The corresponding Coffin-Manson exponents remain constant. Our experimental evidence indicates that accumulation of disordered structure at the atomic level is closely related to the physical origin of fatigue in dielectric materials. This intrinsic atomic movement constitutes the major finding of this work.
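The Coffin-Manson law mentioned above is a power law, amplitude = C * (2N)^c, so its coefficient and exponent can be recovered by a linear fit in log-log space. The data below are synthetic and chosen only to illustrate the fitting step, not the measured P(VDF-TrFE) values.

```python
import numpy as np

# Synthetic (amplitude, cycles-to-failure) pairs generated from an assumed
# Coffin-Manson law; c_true and ef_true are made-up illustration values.
c_true, ef_true = -0.55, 0.30
N = np.array([1e2, 1e3, 1e4, 1e5, 1e6])               # cycles to failure
amp = ef_true * (2 * N) ** c_true                     # Coffin-Manson amplitudes

# Linear regression in log-log space: log(amp) = log(ef) + c * log(2N).
c_fit, log_ef = np.polyfit(np.log(2 * N), np.log(amp), 1)
print(f"exponent c = {c_fit:.3f}, coefficient = {np.exp(log_ef):.3f}")
```

The same log-log regression is the standard way a near-constant Coffin-Manson exponent, as reported in the abstract, would be estimated from cycling data.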
Full text: PDF at ProQuest
ISBN: 9781321263138
Title: Viscous Behavior of Exfoliated Graphite and its Cement-Matrix Composites
Year: 2014
Advisor: Chung, Deborah D.L.
Degree: Ph.D.
Abstract: This dissertation addresses the viscoelastic behavior of graphite and graphite cement-matrix composites. The graphite is exfoliated graphite, with the cell wall in its cellular structure consisting of ~60 graphite layers and exhibiting extraordinarily viscous behavior, in addition to elastomeric behavior, due to interfacial sliding in the nanoscale multilayer. The viscous character of the cell wall decreases with increasing solid (graphite) content, due to the increasing difficulty of shear, which becomes limited at solid contents above 4 vol.%. At the lowest solid content of 1.0 vol.%, the loss-tangent/solid-content is 35 and 25 under flexure and compression respectively, the storage-modulus/solid-content is 125 MPa and 46 kPa under flexure and compression respectively, and the loss-modulus/solid-content is 45 MPa and 13 kPa under flexure and compression respectively. The loss-tangent/solid-content decreases with increasing solid content, leveling off at 0.9 at 15 vol.% solid. Exfoliated graphite compacts exhibit elastomeric behavior, as enabled by the high-amplitude reversible and easy sliding of the graphite layers relative to one another in the cell wall of exfoliated graphite. The total shear strain and reversible shear strain of the cell wall are up to 40 and 35 respectively during nanoindentation (in the compaction direction) without fracture of the graphite layers, compared to corresponding values of 12 and 8 for flexible graphite with 38 vol.% solid. The fraction of displacement that is irreversible is as low as 12%, compared to 29% for flexible graphite. The modulus is as low as 83 kPa, compared to 790 kPa for flexible graphite. The load per unit displacement is as low as 0.32 μN/nm, compared to 1.1 μN/nm for flexible graphite. Microscale constrained-layer damping has been achieved by incorporating the cell wall of exfoliated graphite in a cement matrix. The incorporation involves compaction of exfoliated graphite prior to the cement curing. 
Without the compression, the exfoliated graphite remains fluffy and the cell walls are not adequately constrained by the cement matrix. Without the compression, the cement-matrix composite is isotropic; with the compression, the composite is anisotropic. With the compression, the composite exhibits high values of both the loss tangent (up to 0.8) and the loss modulus (up to 7 GPa), as previously reported, in contrast to conventional damping materials which do not have high values for both of these quantities. Silica fume and the abovementioned composite made with the compression (used prior to curing) are effective as admixtures for enhancing the mechanical energy dissipation of cement-based materials, as shown under small-strain dynamic flexure at 0.2 Hz. The fraction of energy dissipated reaches 0.26, 0.58 and 0.22 for cement paste, mortar and concrete respectively, as provided by silane-treated silica fume and the cementitious admixture, which cause concrete to increase the dissipation, loss modulus, loss tangent and storage modulus by 11000%, 260000%, 9000% and 190% respectively. The highest loss tangent and loss modulus obtained are 0.14 and 3.5 GPa respectively. Silane-treated silica fume alone causes concrete to increase the dissipation by 11000%; untreated silica fume alone gives a 9100% increase. In exfoliated graphite, the mechanism of energy dissipation involves interfacial friction. This notion is supported by an analytical model.
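The loss tangent, storage modulus, and loss modulus quoted above come from dynamic mechanical analysis: a sinusoidal strain is applied and the phase-lagged stress is decomposed into in-phase and quadrature components. The sketch below uses synthetic signal values (not the measured composite data) to show that decomposition at the 0.2 Hz test frequency mentioned in the abstract.

```python
import numpy as np

# Synthetic DMA signals: strain and a phase-lagged stress at 0.2 Hz.
f = 0.2                                                # test frequency, Hz
t = np.linspace(0.0, 3 / f, 3000, endpoint=False)      # three full cycles
eps0, sig0, delta = 1e-4, 50.0, 0.60                   # amplitudes, phase lag
strain = eps0 * np.sin(2 * np.pi * f * t)
stress = sig0 * np.sin(2 * np.pi * f * t + delta)

# Lock-in style projection onto in-phase and quadrature references.
w = 2 * np.pi * f
a = 2 * np.mean(stress * np.sin(w * t))                # in-phase amplitude
b = 2 * np.mean(stress * np.cos(w * t))                # quadrature amplitude
E_store = a / eps0                                     # storage modulus E'
E_loss = b / eps0                                      # loss modulus E''
tan_delta = E_loss / E_store                           # loss tangent
print(f"tan(delta) = {tan_delta:.3f}")
```

A material with both high loss tangent and high loss modulus, as claimed for the compacted composite, must combine a large phase lag with a large stress amplitude.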
Full text: PDF at ProQuest
ISBN: 9781303750205
Title: Quantitative Evaluation of User Performance in Minimally Invasive Surgical Procedures
Year: 2014
Advisor: Krovi, Venkat
Degree: Ph.D.
Abstract: Skilled human interactions are efficient, repetitive and easily noticeable. Identification, analysis and verification of skilled activities (and their performers) are critical to unlocking the potential of human-machine interfaces (HMI) used in innumerable robot-teleoperation settings, from remote-controlled vehicles to robotic surgery. Yet the underlying processes (inter-coupled perceptual, sensory, and cognitive aspects) of such interactions remain elusive and difficult to characterize (let alone quantify), which has motivated our efforts. We seek to evaluate and study human motor behaviors, using system-identification principles with controlled experimentation, to model and capture manipulation and interaction performance in quantifiable skill levels. In lieu of an abstract treatment, we concretize our efforts in the context of surgical procedural assessment and training. Though developing an abstract assessment framework for use in clinical curricula is part of our long-term plan, here we present quantitative analyses specific to minimally invasive procedures: (a) robotic laparoscopic or minimally invasive surgeries (MIS) and (b) percutaneous needle biopsies (PNBs). These procedures demand expertise not only in exhibiting efficient motor actions but, more importantly, in making reliable decisions based on continuous sensory and cognitive feedback. Therefore, by formulating quantitative methods to evaluate surgeons and physicians for these two cases, we believe we also advance the objective of a unified, generalizable and scalable assessment framework applicable to other procedures as well. A combination of virtual and physical experiments involving phantoms and cadavers (approved by the SUNY HSIRB) was subsequently administered to validate our methods and performance metrics. 
Surgeons and trainees with varied levels of expertise were recruited from the respective specialties. For the MIS case studies, Intuitive Surgical's da Vinci surgical robot with its SKILLs simulator and a custom-built laparoscopic box trainer with instrumented tools were used as testbeds to generate the desired data corpus. The PNB experiments were conducted using our simulator-trainer framework, the Augmented Reality SIMulator for Biopsies (AR-SIMBiopsies), which replicates the 'feel' and 'look' of typical tissue phantoms and enables seamless recording of surgical force and motion signatures under different (in-vitro and ex-vivo) scenarios. A discrete finite-state segmentation approach relying on fundamental surgical motion blocks, called Therbligs, was proposed. The raw experimental data were manually annotated with Therblig information for each active tool (the ground-truth dataset), and the resulting time series were post-processed (filtering, interpolation, differentiation and normalization) to train and validate our Therblig classifiers (T-class). A comparative analysis of the predicted Therbligs between different users revealed discriminative signatures of experts and trainees, which were used to quantify surgical efficacy (dexterity, motion and force economy). The task segmentation for both case studies yielded additional performance measures to identify skill deficiencies and ineffective motions, confirming the concurrent validity of our assessment metrics. For the specific case of PNB, the significance of force modulation and force-based measures was also shown in quantitative terms. The final part of this research outlines our ongoing work in developing a purely video-based surgical performance evaluation and feedback framework. Different modules of this framework include tool detection, tracking, semantic identification and pose estimation. 
The preliminary results obtained from this framework using real surgical data are also presented, prior to discussing the limitations of our current efforts as well as future work.
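Motion-economy measures of the kind used above to separate experts from trainees can be computed directly from a sampled tool-tip trajectory. The sketch below uses a synthetic trajectory (a smooth path versus the same path with added tremor) and two common measures, path length and mean jerk magnitude; the specific measures in the dissertation may differ.

```python
import numpy as np

# Synthetic tool-tip trajectories sampled at 100 Hz: a smooth "expert-like"
# path, and the same path with a small 40 Hz tremor superimposed.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
path = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
noisy = path + 0.002 * np.sin(40 * 2 * np.pi * t)[:, None]

def economy(p, dt):
    """Return (total path length, mean jerk magnitude) for a trajectory."""
    vel = np.gradient(p, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    length = float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))
    return length, float(np.mean(np.linalg.norm(jerk, axis=1)))

len_s, jerk_s = economy(path, dt)
len_n, jerk_n = economy(noisy, dt)
print(f"smooth: L={len_s:.2f} J={jerk_s:.1f}  tremor: L={len_n:.2f} J={jerk_n:.1f}")
```

Even millimeter-scale tremor inflates the jerk measure by orders of magnitude, which is why smoothness-based metrics discriminate skill levels so strongly.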
Full text: PDF at ProQuest
ISBN: 9781321072273
Title: Geometric-based nonlinear filtering with applications to attitude estimation
Year: 2014
Advisor: Crassidis, John L.
Degree: Ph.D.
Abstract: The attitude determination problem is one of estimating the orientation of a given body with respect to some reference. This problem is frequently encountered throughout many fields, especially in aerospace engineering, where determining or controlling the orientation of aircraft and spacecraft is a prevalent subject. Since this problem was posed by Grace Wahba, it has received a great amount of attention. A fundamental obstacle in attitude determination is the nature of its parameterizations: attitude parameters either contain singularities or must satisfy certain constraints. The structure of the problem requires the use of multiple coordinate systems and, in addition, other state quantities involved often depend on the unknown attitude itself. This poses natural difficulties, since classical real-time estimation techniques are constructed upon a single, unconstrained coordinate system. Since the seminal work of Lefferts, Markley and Shuster, much research has been conducted on the development of attitude estimation techniques that are both global and effective. This work builds on an existing approach to addressing the attitude parameter constraint problem. It is generalized to account for the representation of state quantities with respect to a random coordinate basis, a structure imposed by the unknown orientation of the frame in which the quantities are specified. This objective requires the adaptation of existing filtering algorithms, namely the extended Kalman Filter and the Unscented Kalman Filter, to a modified error metric. Theoretical developments corroborate existing work that addresses the attitude parameter constraint issue while contributing a correct treatment of state quantities represented with respect to the coordinate basis attached to the unknown attitude. The proposed algorithms are validated using both a planar inertial navigation example and a realistic spacecraft attitude determination problem.
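The constraint problem described above is why attitude filters in the Lefferts-Markley-Shuster tradition use a multiplicative rather than additive error: the discrepancy between an estimated and a true quaternion is itself a small error quaternion. A minimal sketch of that construction (not the dissertation's full filter) follows.

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    vec = qw * pv + pw * qv + np.cross(qv, pv)
    return np.concatenate([vec, [qw * pw - qv @ pv]])

def quat_conj(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def axis_angle(axis, ang):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([np.sin(ang / 2) * axis, [np.cos(ang / 2)]])

# True attitude: 30 deg about z. Estimate: 31 deg about z.
q_true = axis_angle([0, 0, 1], np.radians(30))
q_est = axis_angle([0, 0, 1], np.radians(31))

# Multiplicative error quaternion: dq = q_est * q_true^{-1}.
dq = quat_mul(q_est, quat_conj(q_true))
err_deg = np.degrees(2 * np.arcsin(np.linalg.norm(dq[:3])))
print(f"attitude error: {err_deg:.3f} deg")
```

Subtracting the quaternions component-wise would neither respect the unit-norm constraint nor yield a physically meaningful angle, which is the motivation for the modified error metric.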
Full text: PDF at ProQuest
ISBN: 9781303748677
Title: Simulation and modeling of compressible turbulent mixing layer
Year: 2014
Advisor: Madnia, Cyrus K.
Degree: Ph.D.
Abstract: Direct numerical simulations (DNS) of a compressible turbulent mixing layer are performed for subsonic to supersonic Mach numbers. Each simulation achieves the self-similar state, and it is shown that the turbulent statistics during this state agree well with previous numerical and experimental work. The DNS data are used to extract the physics of compressible turbulence and to perform a priori analyses of subgrid-scale (SGS) closures. The flow dynamics in the proximity of the turbulent/non-turbulent interface (TNTI), separating the turbulent and irrotational regions, is analyzed using the DNS data. This interface is detected by applying a threshold to the vorticity norm. The conditional flow statistics, based on the normal distance from the TNTI, are compared for different convective Mach numbers. It is shown that the thickness of the interface layer is approximately one Taylor length scale for both incompressible and compressible mixing layers, and that the flow dynamics in this layer differs from that deep inside the turbulent region. Various terms in the transport equations for total kinetic energy, turbulent kinetic energy, and vorticity are examined in order to better understand the transport mechanisms across the TNTI in compressible flows. The DNS data are also employed to analyze the local flow topology in compressible mixing layers using the invariants of the velocity gradient tensor. The topological and dissipating behaviors of the flow are analyzed in two different regions: near the TNTI, and inside the turbulent region. It is found that the distribution of flow topologies in regions close to the TNTI differs from that inside the turbulent region, and that in these regions the most probable topologies are non-focal. 
The occurrence probability of different flow topologies conditioned on the dilatation level is presented, and it is shown that structures in locally compressed regions tend to have stable topologies, while in locally expanded regions unstable topologies are prevalent. To better understand the behavior of different flow topologies, the probability distributions of vorticity norm, dissipation, and rate of stretching are analyzed in incompressible, compressed and expanded regions. The DNS data are also used to perform a priori analyses of SGS viscous and scalar closures. Several models for each closure are tested, and the effects of filter width, compressibility level, and Schmidt number on their performance are studied. A new model for SGS viscous dissipation is proposed based on the scaling of SGS kinetic energy. The proposed model yields the best prediction of SGS viscous dissipation among the considered models for filter widths corresponding to the inertial range. For the range of Mach numbers and Schmidt numbers studied in this work, the SGS scalar dissipation model based on the proportionality of the turbulent time scale and the scalar mixing time scale produces the best results for filter widths corresponding to the inertial subrange. For both viscous and scalar SGS dissipation models, two dynamic approaches are used to compute the model coefficient. It is shown that the dynamic procedure based on the global equilibrium of dissipation and production generates more accurate results than the conventional dynamic method based on test-filtering.
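The TNTI detection step named in the abstract (thresholding the vorticity norm) can be illustrated on a synthetic 2D shear-layer field; the velocity field, grid, and threshold fraction below are illustrative assumptions, not the DNS parameters.

```python
import numpy as np

# Synthetic 2D "mixing layer": tanh mean shear plus turbulence-like
# fluctuations confined to the layer core by a Gaussian envelope.
ny, nx = 128, 128
y = np.linspace(-4.0, 4.0, ny)
x = np.linspace(0.0, 8.0, nx)
Y, X = np.meshgrid(y, x, indexing="ij")
rng = np.random.default_rng(1)
u = np.tanh(Y) + 0.3 * np.exp(-Y**2) * rng.standard_normal((ny, nx))
v = 0.3 * np.exp(-Y**2) * rng.standard_normal((ny, nx))

# Spanwise vorticity omega = dv/dx - du/dy via central differences.
dy, dx = y[1] - y[0], x[1] - x[0]
omega = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

# Detect the turbulent region with a vorticity-norm threshold; the TNTI is
# the boundary of this mask. The 2% threshold fraction is an assumption.
threshold = 0.02 * np.abs(omega).max()
turbulent = np.abs(omega) > threshold
print(f"turbulent-region fraction: {turbulent.mean():.2f}")
```

Conditional statistics of the kind reported in the abstract would then be gathered as a function of normal distance from the boundary of this mask.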
Full text: PDF at ProQuest
ISBN: 9781321264685
Title: Optimal Force Generation with Fluid-Structure Interactions
Year: 2014
Advisor: Milano, Michele
Degree: Ph.D.
Abstract: Typical computational and experimental methods are unsuitable for studying large-scale optimization problems involving complex fluid-structure interactions, primarily due to their time-consuming nature. A novel experimental approach is proposed here that provides a high-fidelity and efficient alternative for discovering optimal parameters arising from the passive interaction between structural elasticity and fluid dynamic forces. This approach utilizes motors, force transducers, and active controllers to emulate the effects of elasticity, eliminating the physical need to replace structural components in the experiment. A clustering genetic algorithm is then used to tune the structural parameters to achieve desired optimality conditions, resulting in approximated globally optimal regions within the search bound. A prototype fluid-structure interaction experiment inspired by the lift generation of flapping-wing insects is presented to highlight the capabilities of this approach. The experiment aims to maximize the average lift on a sinusoidally translating plate by optimizing the damping ratio and natural frequency of the plate's elastic pitching dynamics. Reynolds number, chord length, and stroke length are varied between optimizations to explore their relationships to the optimal structural parameters. The results reveal that only limited ranges of stroke lengths are conducive to lift generation; there also exist consistent trends among optimal stroke length, natural frequency, and damping ratio. The measured lift, pitching angle, and torque on the plate for optimal scenarios exhibit the same frequency as the translation frequency, and the phase angles of the optimal structural parameters at this frequency are found to be independent of the stroke length. This critical phase can then be characterized by a linear function of the chord length and Reynolds number. 
Particle image velocimetry measurements are acquired for the kinematics generated with optimal and suboptimal structural parameters. By examining the vorticity field and the measured lift, leading-edge vortices and added mass are identified as the primary lift-generation mechanisms under optimality. This is similar to the unsteady lift-generation mechanisms employed by flapping-wing insects. Further analysis reveals that longer stroke lengths rely mainly on vortex formation to maximize average lift, whereas added-mass effects and wing-wake interactions become more prominent at shorter stroke lengths.
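The optimization loop described above, a genetic algorithm tuning the damping ratio and natural frequency toward maximum average lift, can be sketched with a made-up smooth surrogate standing in for the physical experiment. Everything here (the surrogate, population size, mutation scale, and the optimum at zeta = 0.4, wn = 2.5) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def lift(zeta, wn):
    # Hypothetical smooth surrogate for average lift, peaked at (0.4, 2.5).
    return -((zeta - 0.4) ** 2 + 0.1 * (wn - 2.5) ** 2)

# Population of candidate (damping ratio, natural frequency) pairs.
pop = rng.uniform([0.0, 0.5], [1.0, 5.0], size=(40, 2))
for _ in range(60):
    fit = lift(pop[:, 0], pop[:, 1])
    elite = pop[np.argsort(fit)[-10:]]                  # keep the best 10
    # Offspring: average two random elite parents, then Gaussian mutation.
    parents = elite[rng.integers(0, 10, size=(30, 2))]
    children = parents.mean(axis=1) + rng.normal(0, 0.05, (30, 2))
    pop = np.vstack([elite, children])

best = pop[np.argmax(lift(pop[:, 0], pop[:, 1]))]
print(f"best (zeta, wn) ~ ({best[0]:.2f}, {best[1]:.2f})")
```

In the actual experiment each fitness evaluation is a physical trial, which is why a clustering variant that spends evaluations efficiently matters.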
Full text: PDF at ProQuest
ISBN: 9781321071856
Title: Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents
Year: 2014
Advisor: Singla, Puneet
Degree: Ph.D.
Abstract: All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate releases of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components that are combined to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized to quantify uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach, based on a combination of polynomial chaos theory and the method of quadrature points, is utilized to approximate probabilistic hazard maps. Besides precise quantification of uncertainty, useful measurement data are also essential to warrant accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of observation sensors to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that the measurement of "better" data is guaranteed. 
This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinematic constraints on the mobile sensors. A dynamic programming method has been utilized to solve the resulting optimal control problem. To close the loop of the source characterization process, two different estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise associated with measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently arises in the assimilation of atmospheric data obtained from satellite imagery, since satellite imagery can be polluted with noise depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data, so using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real-world problems with greater accuracy and robustness. A simple example representing the real-world scenario is presented to validate the developed methodology.
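For a linear-Gaussian observation model y = Hx + v, the mutual information being maximized above has the closed form I = 0.5[log det(H P H' + R) - log det(R)], which makes greedy sensor selection straightforward. The forward model H below is a made-up distance-decay plume response, not the dissertation's dispersion model, and the greedy search is a simplification of the constrained optimal control problem.

```python
import numpy as np

rng = np.random.default_rng(3)
src = np.array([5.0, 5.0])                             # assumed source location
P = np.diag([4.0, 4.0])                                # prior source covariance
sites = rng.uniform(0, 10, size=(25, 2))               # candidate sensor sites

# Toy sensitivity of each site's reading to the two source parameters.
d = sites - src
H_all = -d / (1 + np.sum(d ** 2, axis=1))[:, None] ** 2
sigma2 = 1e-4                                          # measurement noise variance

def info(idx):
    """Mutual information between source parameters and readings at idx."""
    H = H_all[list(idx)]
    R = sigma2 * np.eye(len(idx))
    _, ld = np.linalg.slogdet(H @ P @ H.T + R)
    return 0.5 * (ld - len(idx) * np.log(sigma2))

chosen = []
for _ in range(3):                                     # place 3 sensors greedily
    gains = [(info(chosen + [j]), j) for j in range(25) if j not in chosen]
    chosen.append(max(gains)[1])
print("chosen sensor indices:", chosen)
```

Greedy selection is a common surrogate for the full combinatorial problem because mutual information under this model is monotone in the sensor set.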
Full text: PDF at ProQuest
ISBN: 9781321263756
Title: Multilevel-multiscale ensembles for uncertainty quantification with application to geophysical models
Year: 2014
Advisor: Patra, Abani K.
Degree: Ph.D.
Abstract: Disaster response managers routinely use numerical modeling to assist in hazard response and mitigation, making the reliability and accuracy of these models crucial to the decision-making process. Dealing with uncertainties and multiple scenarios is an important challenge for decision making and is usually handled by using ensembles representative of the uncertainty. In geophysical mass flow problems (e.g. landslides, volcanic debris flows), many flow characteristics, such as material properties, the size or location of the failing mass, or the terrain over which the flow occurs, are difficult, if not impossible, to characterize at every point in the domain because of their large dimensionality. The goal of the present work is to characterize the uncertainties introduced by large-dimensional inputs such as terrain and windfields and to construct hazard maps in a computationally efficient manner. To do so, this work first explores existing approaches based on standard statistical methods and then introduces novel methodology based on multilevel and multiscale approximations. The novel methodology for constructing digital terrain ensembles, and thus characterizing the uncertainty in Digital Elevation Models (DEMs), is illustrated by propagating the ensembles through a numerical model of dry block-and-ash flows over natural terrain (TITAN2D). The basic approach is also used with ensembles of a model of volcanic ash transport (PUFF) in the construction of a hazard map. There are large uncertainties associated with the construction of DEMs. For surface elevation, data at any given pixel in the DEM tend to be similar to data from nearby pixels. If more than one DEM of the same location, obtained through different techniques, is available, then error maps can be constructed. Most error maps are spatially autocorrelated, and random fields can be used to represent spatially autocorrelated error. 
We show that by using graph-based algorithms and low-rank approximations of the adjacency matrix we obtain representative data points in the space of interest. This approach can be used successfully to characterize the uncertainty in DEMs and to create accurate and efficient hazard maps, with minimal changes to the algorithm or implementation. Connecting data points according to local similarities, in order to obtain reliable scale-dependent global properties arising from these local similarities, implies using algorithms that find coherent regions displaying similar features. On the coarse version of the original graph, multilevel hierarchies can be formed, which allows rapid calculation of low-rank approximations. It is further shown that the created hierarchies can be used to accelerate the DEM ensemble creation process. The benefits of using randomized projection algorithms for computing low-rank matrix approximations are their simple implementation, their applicability to large-scale problems, and the existence of theoretical bounds for the approximation errors. The process required to construct a simulation-based probabilistic hazard map for volcanoes often leads to large amounts of data and intensive computational cost. Here, we present a novel approach, Multilevel Approximation (MLA), for creating a fast surrogate of the simulator that improves the speed of hazard map creation. Multilevel-multiscale methods are successfully applied in developing a complete probabilistic forecast for the ash concentration at a given time and location. Randomized low-rank approximation methods are used to efficiently find a sparse representation in the space of interest. We represent both the parameter space (sample points at which the numerical model is evaluated) and the physical space (ash concentration covering a parcel) by a weighted graph. We then generate a sequence of approximations of the given function (e.g. 
ash concentration) on the data, as well as their extensions to any newly arrived data point. The subsampling is done by interpolative decomposition of the associated Gaussian kernel matrix at each scale of the hierarchical procedure. The results obtained show significant computational advantages over standard Monte Carlo sampling while preserving the output quality. Compared to other weighted sampling methods, the computational cost is similar, but this approach does not suffer the disadvantage of having to compute weights, nor does it fail if any sample is lost.
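The randomized projection step referred to above can be sketched in a few lines: project the matrix onto a small random subspace, orthonormalize, and form a rank-limited approximation. The matrix here is synthetic with a rapidly decaying spectrum, standing in for a kernel or adjacency matrix; the rank and oversampling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 300, 10

# Synthetic symmetric matrix with fast singular-value decay (2^-i), as
# Gaussian kernel matrices typically exhibit.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U * s) @ U.T

# Randomized range finder: Y = A @ Omega, Q = orth(Y), A ~ Q (Q^T A).
Omega = rng.standard_normal((n, k + 5))               # small oversampling
Q, _ = np.linalg.qr(A @ Omega)
A_k = Q @ (Q.T @ A)

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"rank-{k + 5} relative error: {rel_err:.1e}")
```

The cost is dominated by the two matrix products, which is what makes the approach attractive for the large graphs arising from DEM pixels and simulation samples.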
Full text: PDF at ProQuest
ISBN: 9781321265071
Title: New variations of multiple model adaptive estimation for improved tracking and identification
Year: 2013
Advisor: Crassidis, John
Degree: Ph.D.
Abstract: Multiple model adaptive estimation (MMAE) is a recursive algorithm that uses a bank of estimators, each purposefully dependent on a particular hypothesis, to determine an estimate of the uncertain system under consideration while simultaneously tracking the system state. The first generation of MMAE, introduced by Magill in 1965, considered the estimators to act independently and in parallel, determining state estimates conditioned on each hypothesis. Through computation of a normalized mode-conditioned likelihood, the conditional probability that each hypothesis correctly models the system is computed. Since Magill's seminal work, many offshoots of MMAE have been developed. Modifications have been reported, but typically on an application-specific basis, which limits their versatility. In this dissertation, two variations of MMAE are considered. The first variation is based on an observed flaw that leads to degenerate tracking performance. The second variation is motivated by previous research showing improved convergence performance when a generalized mode-conditioned likelihood function is used to determine the hypothesis conditional probabilities. Each estimator, specifically a Kalman filter, is designed around a particular system hypothesis. If the hypothesis is not sufficiently close to the true system, the resulting filter will generally produce erroneous estimates that do not track the system, because each filter assumes that its hypothesized system is correct. Further, the state error covariances resulting from such a suboptimal filter will be inconsistent because they carry no knowledge of the incorrect hypothesized model. By explicitly accounting for the deviation of the hypothesis, recursions are developed which, when combined with MMAE, are shown to provide superior tracking performance over the standard MMAE. 
Additionally, the proposed variation, called model-error MMAE, is shown to provide acceptable tracking performance for dynamically switching systems at a fraction of the computational expense of other algorithms specifically developed for that application. The second variation, referred to as generalized multiple model adaptive estimation (GMMAE), uses an augmented vector of current and past residuals to drive the recursion for the hypothesis conditional probabilities. Necessary for that recursion is the evaluation of the time-domain autocovariance matrix of the residual sequence. When filtering linear (and linearized) systems, the autocovariance can be analytically expressed as a function of the system matrices, covariances and filter gain. When filtering nonlinear systems using the Unscented filter, analytic expressions for the autocovariance are not possible. Motivated to include Unscented filters within the GMMAE framework, a method for calculating the time-domain autocovariance of the residual sequence from an Unscented filter is presented. The proposed method is validated analytically on a simplified system, and simulation results are presented using the algorithm for process noise estimation in a planar tracking problem.
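The Magill-style probability recursion at the heart of MMAE can be illustrated in scalar form: each hypothesized model assigns a Gaussian likelihood to the observed residual, and the normalized products track which hypothesis best explains the data. The residual stream and the three variance hypotheses below are synthetic illustration values.

```python
import numpy as np

rng = np.random.default_rng(5)
true_var = 1.0
hyp_vars = np.array([0.1, 1.0, 10.0])                 # three model hypotheses
p = np.full(3, 1 / 3)                                 # uniform prior probabilities

for _ in range(200):
    r = rng.normal(0.0, np.sqrt(true_var))            # observed residual
    # Scalar Gaussian likelihood of r under each hypothesized variance.
    like = np.exp(-0.5 * r ** 2 / hyp_vars) / np.sqrt(2 * np.pi * hyp_vars)
    p = p * like                                      # mode-conditioned update
    p /= p.sum()                                      # normalize

print("posterior model probabilities:", np.round(p, 3))
```

The probability mass concentrates on the hypothesis whose residual statistics match the data, which is exactly the behavior the model-error and GMMAE variants aim to sharpen or make more robust.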
Full text: PDF at ProQuest
ISBN: 9781303751516
Title: Computer Modeling of Neurovascular Flow Diverter
Year: 2014
Advisor: Meng, Hui
Degree: Ph.D.
Abstract: Intracranial aneurysm rupture is one of the main contributors (15%) to subarachnoid stroke, which is identified as the third deadliest disease in North America. Among current widely used treatment strategies, flow diversion represents the most recent treatment paradigm shift. The use of the flow diverter (FD), a densely braided stent-mesh device, has achieved a superior occlusion/cure rate in long-term (6-12 month) clinical follow-ups for traditionally difficult-to-treat aneurysms (wide-necked, large or giant, and fusiform/dissecting aneurysms). However, current FD application is compromised by a 6-8% complication rate, including in-stent thrombosis, posttreatment aneurysm rupture, and parenchymal vessel hemorrhage. The highly flexible FD construct also causes technical issues during deployment. Furthermore, the delayed aneurysm occlusion induced by flow diversion makes it difficult for clinicians to predict the treatment outcome. Recently, neurointerventionalists have been using the dynamic "push-pull" technique during FD implantation to manipulate the FD mesh density for enhanced flow diversion. However, the clinical deployment results of this technique could not be evaluated due to the limited resolution of the angiogram, hindering its future application. These difficulties and challenges underscore the need for a comprehensive understanding of the FD deployment procedure and the resulting 3D hemodynamics in patient-specific aneurysms. To this end, we developed a finite element analysis (FEA) based workflow, the high fidelity virtual stenting (HiFiVS) technique, to simulate the complete clinical process of deploying the FD and provide an accurate account of the final FD geometry. The developed HiFiVS was preliminarily validated using mechanical testing data of a braided stent and the X-ray recording of the FD unsheathing procedure. 
Further in vitro validation was implemented, where FD samples were deployed into patient-specific aneurysm phantoms using the push-pull technique to generate FD configurations with various mesh densities (dense vs. loose). The experimental deployments were then recapitulated by the HiFiVS. The experiment and simulation results were compared qualitatively and quantitatively on the FD's positioning and mesh configuration. Good agreement showed that the simulation accurately captured key operations of the push-pull technique observed in vitro. Image-based computational fluid dynamics (CFD) was performed based on the 3D geometries of the virtually deployed FDs. The CFD results showed that FDs with higher mesh densities across the aneurysm orifice achieved greater aneurysmal inflow reduction than FDs with low mesh density. The dynamic push-pull technique was demonstrated to be effective in maximizing the flow diversion performance of the FD, and therefore was favorable for more immediate aneurysm occlusion. The HiFiVS has shown its unique capability to analyze different deployment strategies of the FD and accurately predict its final geometry. Combined with the CFD analysis, this modeling workflow presents a promising analytical tool for further optimization of the flow diversion treatment.
Full text: PDF at ProQuest
ISBN: 9781303160387
Title: Carbon fiber polymer-matrix structural composites for electrical-resistance-based sensing
Year: 2013
Advisor: Chung, Deborah
Degree: Ph.D.
Abstract: This dissertation has advanced the science and technology of electrical-resistance-based sensing of strain/stress and damage using continuous carbon fiber epoxy-matrix composites, which are widely used for aircraft structures. In particular, it has extended the technology of self-sensing of carbon fiber polymer-matrix composites from uniaxial longitudinal loading and flexural loading to uniaxial through-thickness loading, and has extended the technology from structural composite self-sensing to the use of the composite (specifically a one-lamina composite) as an attached sensor. Through-thickness compression is encountered in the joining of composite components by fastening. Uniaxial through-thickness compression results in strain-induced reversible decreases in the through-thickness and longitudinal volume resistivities, due to an increase in fiber-fiber contact in the through-thickness direction, and minor-damage-induced irreversible changes in these resistivities. The Poisson effect plays a minor role. The effects on the longitudinal resistivity are small compared to those in the through-thickness direction, but longitudinal resistance measurement is more amenable to practical implementation in structures than through-thickness resistance measurement. The irreversible effects are associated with an increase in the through-thickness resistivity and a decrease in the longitudinal resistivity. The through-thickness gage factor is up to 5.1 and decreases with increasing compressive strain above 0.2%. The reversible fractional change in through-thickness resistivity per unit through-thickness strain is up to 4.0 and decreases with increasing compressive strain. The irreversible fractional change in through-thickness resistivity per unit through-thickness strain is around -1.1 and is independent of the strain. The sensing is feasible by measuring the resistance away from the stressed region, though the effectiveness is less than that at the stressed region. 
A one-lamina carbon fiber epoxy-matrix composite is an effective attached flexural sensor, with effectiveness comparable to a commercially manufactured self-sensing 24-lamina quasi-isotropic carbon fiber epoxy-matrix composite. In the one-lamina sensor, the arrangement of the fibers is such that adjacent fibers make contact with one another at points along their length, as shown by the substantial conductivity in the transverse direction. The surface resistance of the sensor attached to the tension surface of the beam increases upon flexure, due to a decrease in the degree of current penetration within the thickness of the sensor. The surface resistance of the sensor attached to the compression surface of the beam decreases upon flexure, due to an increase in the degree of current penetration. The sensing effectiveness is superior for the tension surface compared to the compression surface. Minor/major/catastrophic damage and damage evolution during flexure are indicated by characteristic increases in the surface resistance of the one-lamina sensor; the characteristics are simpler and easier to interpret than those of previously reported 24-lamina quasi-isotropic carbon fiber composites without glass fiber.
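As a point of reference for the numbers above, a gage factor relates the fractional resistance change to the applied strain, dR/R ≈ GF × strain. A rough illustrative check using the reported upper through-thickness gage factor together with the 0.2% strain level mentioned in the abstract (pairing these two particular values is an assumption made here for illustration):

```python
# dR/R = GF * strain (gage-factor definition)
gage_factor = 5.1      # reported upper through-thickness gage factor
strain = 0.002         # 0.2% compressive strain
frac_change = gage_factor * strain   # fractional resistance change, ~1%
```

A change on the order of 1% of the baseline resistance is comfortably measurable, which is what makes resistance-based self-sensing practical at these strain levels.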
Full text: PDF at ProQuest
ISBN: 9781267946973
Title: Expansion tunnel characterization and development of non-intrusive microwave plasma diagnostics
Year: 2013
Advisor: Ringuette, Matthew J.
Degree: Ph.D.
Abstract: The focus of this research is the development of non-intrusive microwave diagnostics for the characterization of expansion tunnels. The main objectives of this research are to accurately characterize the LENS XX expansion tunnel facility, develop non-intrusive RF diagnostics that will work in short-duration expansion tunnel testing, and determine plasma properties and other information that might otherwise be unknown, less accurate, intrusive, or more difficult to determine through conventional methods. Testing was completed in LENS XX, a new large-scale expansion tunnel facility at CUBRC, Inc. This facility is the largest known expansion tunnel in the world, with an inner diameter of 24 inches, a 96-inch test section, and an end-to-end length of more than 240 ft. Expansion tunnels are currently the only facilities capable of generating high-enthalpy test conditions with minimal or no freestream dissociation or ionization. However, short test times and freestream noise at some conditions have limited development of these facilities. To characterize the LENS XX facility, the first step is to evaluate the facility's pressure, vacuum, temperature, and other mechanical restrictions to derive a theoretical testing parameter space. Test condition maps are presented for a variety of parameters and gases based on 1D perfect gas dynamics. Test conditions well beyond 10 km/s or 50 MJ/kg are identified, with minimum test times of 200 μs. Additionally, a four-chamber expansion tube configuration is considered for extending the stagnation enthalpy range of the facility even further. A microwave shock speed diagnostic measures primary and secondary shock speeds accurately every 30 in. down the entire length of the facility, resulting in a more accurate determination of the freestream conditions required for computational comparisons. The high resolution of this measurement is used to assess shock speed attenuation as well as secondary diaphragm performance. 
Negligible shock attenuation is reported over a large range of test conditions and gases, and this is attributed to the large diameter of the LENS XX driven and expansion tubes. Shock tube boundary layer growth solutions based on Mirels's theory confirm that LENS XX test conditions should not be adversely affected by viscous effects. Mirels's theory is applied to both large- and small-scale expansion tube facilities to determine displacement thicknesses, and quasi-one-dimensional solutions show how viscous effects become significant in long, smaller-diameter facilities. In collaboration with ElectroDynamic Applications, Inc. (EDA), plasma frequency measurements are made in two different configurations using a swept microwave frequency power reflection measurement. Electric field characteristics of EDA's probe are presented and show that the current probe design is ideal for measuring properties of shock layers that are 1-2 cm thick. Electron density and radio frequency communication characteristics through a shock layer on the lee side of a capsule up to 8.9 km/s and in a stagnation configuration up to 5.4 km/s in air are reported.
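The quoted pairing of velocities beyond 10 km/s with enthalpies near 50 MJ/kg is consistent with the freestream kinetic energy per unit mass, h ≈ u²/2 (a rough estimate that neglects the static enthalpy contribution):

```python
# Specific kinetic energy of the freestream at the quoted test velocity.
u = 10e3            # freestream velocity, m/s (10 km/s)
h = 0.5 * u ** 2    # J/kg; 5.0e7 J/kg = 50 MJ/kg
```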
Full text: PDF at ProQuest
ISBN: 9781267945938
Title: Probabilistic identification and discrimination of deep space objects via astrometric and photometric data fusion
Year: 2013
Advisor: Crassidis, John
Degree: Ph.D.
Abstract: In this dissertation, two problems are studied in detail: shape estimation and space object classification. The progress made towards addressing these space situational awareness problems is discussed, simulation results are shown, and the feasibility of the proposed approaches is demonstrated. Additional areas of research are discussed, including generalized shape-parameter estimation, inertia estimation, and mass estimation. The main focus of this dissertation is a new method, based on a multiple-model adaptive estimation approach, to determine the most probable attribute, such as shape, of a resident space object in orbit among a number of candidate attribute models while simultaneously recovering the observed resident space object's inertial orientation and trajectory. Multiple-model adaptive estimation uses a parallel bank of filters to provide multiple resident space object state estimates, where each filter is purposefully dependent on a mutually unique resident space object model. Estimates of the conditional probability of each model given the available measurements are provided by the multiple-model adaptive estimation approach. The multiple-model adaptive estimation state estimates are determined using a weighted sum of the individual models, weighted by model probabilities, whereas the shape estimates are determined from the model with the highest probability. Each filter employs the unscented estimation approach, reducing passively collected electro-optical data to infer the unknown state vector comprised of the resident space object's inertial-to-body orientation, position, and respective temporal rates. Each hypothesized model results in a different observed optical cross-sectional area. The effect of solar radiation pressure may be recovered from accurate angles-data alone if the collected measurements span a sufficiently long period of time so as to make the non-conservative mismodeling effects noticeable. 
However, for relatively short data arcs, this effect is weak and thus the temporal brightness of the resident space object can be used in conjunction with the angles data to exploit the fused sensitivity to both resident space object attributes and associated trajectory, the very same ones which drive the non-conservative dynamic effects. Recovering these attributes and trajectories with sufficient accuracy is shown in this dissertation, where the attributes are inherent in unique resident space object models. The performance of this strategy is demonstrated via simulated scenarios.
Full text: PDF at ProQuest
ISBN: 9781303751271
Title: On the flow generated by rotating flat plates of low aspect ratio
Year: 2014
Advisor: Ringuette, Matthew
Degree: Ph.D.
Abstract: Low-aspect-ratio propulsors typically allow for high maneuverability at low-to-moderate speeds. This has made them the subject of much recent research aimed at employing such appendages on autonomous vehicles which are required to navigate tumultuous environments. This experimental investigation focuses on the fluid dynamic aspects associated with overly-simplified versions of such biologically-inspired propulsors. In doing so, fundamental contributions are made to the research area. The unsteady, three-dimensional flow of a low-aspect-ratio, trapezoidal flat plate undergoing rotation from rest at a 90° angle of attack and Reynolds numbers of O(10³) is investigated experimentally. The objectives are to develop a straightforward protocol for vortex saturation, and to understand the effects of the root-to-tip flow for different velocity programs. The experiments are conducted in a glass-walled tank, and digital particle image velocimetry is used to obtain planar velocity measurements. A formation-parameter definition is investigated and is found to reasonably predict the state corresponding to the pinch-off of the initial tip vortex across the velocity programs tested. The flow in the region near the tip is relatively insensitive to Reynolds number over the range studied. The component normal to the plate is unaffected by total rotational amplitude while the tangential component has dependence on this angle. Also, an estimate of the first tip-vortex pinch-off time is obtained from the near-tip velocity data and agrees very well with values estimated using circulation. The angle of incidence of the bulk root-to-tip flow relative to the plate normal becomes more oblique with increasing rotational amplitude. Accordingly, the peak magnitude of the tangential velocity is also increased and as a result advects fluid momentum away from the plate at a higher rate. 
The more oblique impingement of the root-to-tip flow for increasing rotational amplitude is shown to have a distinct effect on the associated fluid dynamic force normal to the plate. For impulsive plate deceleration, the time that a non-negligible force exists decreases, while for non-impulsive plate deceleration both this time and the relative force magnitude decrease for larger rotational amplitudes. In a separate set of experiments, force measurements are conducted on a similar plate that performs an advancing stroke from rest followed by a returning stroke. The parameters varied are the rotational amplitude of the motion and the rest time between the advancing and returning strokes. The unsteady normal forces track with the angular acceleration of the plate, with the added mass force peak in the returning stroke being larger than that in the advancing stroke. However, as the rest time is increased, the normal forces generated in each stroke become dynamically similar. The maximum total impulse is calculated from the force measurements; it rapidly decays from its largest value at zero rest time and asymptotes to a constant with increased rest time. The direction of this impulse is also calculated and quickly approaches the direction about which the plate motion is symmetric. The largest additional impulse contribution obtained from executing a returning stroke within a finite time is approximately 18%. Increases in rotational amplitude initially increase the maximum total impulse, but it then plateaus at an amplitude of around 90 degrees. For non-zero rest times, any maxima of the impulse in a fixed direction are weak and necessarily reduced from the maximum possible impulse. For a nearly 100-degree range of directions, the impulse is largest for rotational amplitudes between 75 and 90 degrees. The results are also applied to three types of propulsive configurations.
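The total impulse referred to above is the time integral of the measured force, J = ∫F dt. A minimal sketch of how such a value can be extracted from a sampled force trace (the sinusoidal pulse here is purely hypothetical, standing in for the measured data):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)   # sample times, s
F = np.sin(np.pi * t)            # hypothetical single-stroke force pulse, N

# Trapezoidal-rule integration of the force trace: J = integral of F dt.
J = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(t))   # N*s; analytically 2/pi
```

The impulse direction reported in the abstract follows the same idea applied componentwise to a vector force trace, then normalizing the resulting impulse vector.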
Full text: PDF at ProQuest
ISBN: 9781303159473
Title: Contribution of cytoskeletal stresses to cell volume regulation
Year: 2013
Advisor: Hua, Susan Z.
Degree: Ph.D.
Abstract: Cells in the kidney collecting duct experience changes in extracellular osmolarity as urine flows out of the nephron. These changes impart osmotic forces on collecting duct epithelia, causing variations in cell volume. Proper function of the kidney requires control of cell volume in order to maintain homeostasis. One aspect that has in the past been dismissed is the role of mechanical force in maintaining equilibrium across the membrane. In the first part of this thesis, the volume regulatory behavior of cells is evaluated in the adherent and suspended states using impedance-based measurement techniques. Using actin-modifying drugs, both swelling and volume recovery were determined to depend on the integrity of the cytoskeleton in adherent cells. However, volume recovery in detached cells was insensitive to actin content, showing that actin content alone was not the important factor. The next section demonstrates the sensitivity of the cytoskeleton to fluid shear stress. By utilizing a stress-sensitive probe in an actin-linking host, we visualized the spatial distribution of stress in adherent cells cultured in a parallel-plate microchannel, in real time. Flow pulses were repeatedly applied, allowing for observation of time-dependent adaptive reorganization at the whole-cell and subcellular level. Furthermore, changes in stress were observed with the introduction of actin modifiers as well as in cell detachment, allowing for visualization of tensile changes which preceded our volume measurements. Finally, the stress response to hypoosmotic solution was investigated using the stress sensor and microchannel. Varying the osmotic gradient revealed osmotically sensitive tension in the recovery phase, which was borne by filaments that persisted through the swelling phase. Additionally, the initial response to swelling was observed to depend on an intact actin cytoskeleton, but not on the magnitude of the osmotic gradient. 
The results show actin filaments are tensed in volume recovery, and may add to the balance of forces which resist volume changes.
Full text: PDF at ProQuest
ISBN: 9781267946638
Title: Finite-span rotating flat-plate wings at low reynolds number and the effects of aspect ratio
Year: 2013
Advisor: Ringuette, Matthew J.
Degree: Ph.D.
Abstract: In the complex and dangerous environments of the modern warrior and emergency professional, the small size, maneuverability, and stealth of flapping-wing micro air vehicles (MAVs), scaled to the size of large insects or hummingbirds, has the potential to provide previously inaccessible levels of situational awareness, reconnaissance capability, and flexibility directly to the front lines. Although development of such an efficient, autonomous, and capable MAV is years away, there are immediate contributions that can be made to the fundamental science of the flapping-wing-type propulsion that makes MAVs so attractive. This investigation contributes to those fundamentals by considering the unsteady vortex dynamics problem of a rigid, rectangular flat plate at a fixed angle of attack rotating from rest: a simplified hovering half-stroke. Parameters are chosen to be biologically relevant and relevant to MAVs operating at Reynolds numbers of O(10³), and experiments are performed in a 50% by mass glycerin-water mixture. These experiments use a novel application of methodologies verified by rigorous uncertainty analysis. The overall objective is to understand the vortex formation and forces as well as aspect ratio (AR) effects. Of interest is the overall, time-varying, three-dimensional vortex structure obtained qualitatively from dye visualization and quantitatively from volumes reconstructed using planar stereoscopic digital particle image velocimetry (S-DPIV) measurements. The velocity information from S-DPIV also allows statements to be made on leading-edge vortex (LEV) stability, spanwise flow, LEV and tip-vortex (TV) circulation, and numerous circulation scalings. Force measurements are made and the lift coefficient is discussed in the context of the flow structure, the dimensional lift, and the ability to relate velocity and force measurements going forward. 
AR effects are a topic of continued interest to those performing MAV-related research and are also a primary objective here, so from both the S-DPIV and the force measurements the role of the TV is continually addressed via observed differences with AR.
Full text: PDF at ProQuest
ISBN: 9781267945785
Title: Numerically stable covariance intersection for spacecraft formation flying
Year: 2013
Advisor: Crassidis, John L.
Degree: Ph.D.
Abstract: The goal of this dissertation is to achieve more numerically stable attitude estimates for distributed systems of spacecraft. The method used is the Covariance Intersection (CI) algorithm, and this research continues and expands upon the foundation laid in the seminal Julier & Uhlmann paper entitled A Non-divergent Estimation Algorithm in the Presence of Unknown Correlations and the body of related work that is now scarcely 15 years old. An optimization problem is developed using the quaternion to parameterize the attitude of each spacecraft. The norm constraint on the quaternion makes it necessary to augment the cost function using the method of Lagrange multipliers. Closed-form solutions for two or more quaternions have yet to be found (as far as the author or his collaborators are aware), and numerical solutions have been unstable and difficult, or they have not been demonstrated at all. Several numerically stable solutions are presented here. For two quaternions, a solution is found using a multivariate Newton-Raphson algorithm and using built-in Matlab functions (employing only standard double precision variables), but these are not guaranteed to find the optimal solution. For two and three quaternions, homotopy continuation methods are used, which guarantee optimal solutions. For three quaternions, a square-root algorithm is developed for use with a homotopy continuation method. In the process, the design space of the Lagrange multipliers is explored, simplified forms of the quaternion-constraint cost function are presented, an analogous optimization problem in two dimensions is developed and analyzed, and some insight into the fundamental nature of the problem is gained.
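For context, the unconstrained form of Covariance Intersection that this work extends can be sketched as follows. The dissertation's quaternion-constrained problem is considerably harder; this sketch shows only the standard Julier-Uhlmann fusion of two vector estimates, with the weight ω chosen here by a coarse trace-minimizing scan, and all numbers are hypothetical.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega):
    """Standard CI fusion of two estimates with unknown cross-correlation:
    Pc^-1 = w*Pa^-1 + (1-w)*Pb^-1, consistent for any w in [0, 1]."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    Pc = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
    xc = Pc @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return xc, Pc

# Two hypothetical estimates of the same 2-state quantity.
xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([0.8, 0.2]), np.diag([4.0, 1.0])

# Choose omega by a coarse scan minimizing the fused covariance trace.
omegas = np.linspace(0.01, 0.99, 99)
best = min(omegas,
           key=lambda w: np.trace(covariance_intersection(xa, Pa, xb, Pb, w)[1]))
xc, Pc = covariance_intersection(xa, Pa, xb, Pb, best)
```

The key appeal of CI is that the fused covariance remains consistent without knowledge of the cross-correlation between the two sources, which is exactly the situation in a distributed spacecraft network.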
Full text: PDF at ProQuest
ISBN: 9781267945709
Title: Bridge Health Monitoring for a Beam Bridge using Damage Model and Slope Sensors
Year: 2013
Advisor: Dargush, Gary F.
Degree: Ph.D.
Abstract: The use of Bridge Health Monitoring systems has the potential to provide greater confidence in the integrity of a bridge. As a result, the life of a bridge can be maximized by reducing whole-life costs and providing additional safety measures. Structural damage detection and integrity assessment is one of the fundamental objectives of Bridge Health Monitoring. In such an application, early-stage damage detection is desirable, achieved by examining changes in the bridge's measured responses, especially bridge deflection. It is believed that this parameter is the most reliable indicator for overcoming difficulties in bridge measurement with a workable signal-to-noise ratio. Structural vibrational responses are the most commonly used measurements, in part due to the hypothesis that damage changes the physical properties of a structure, which in turn will cause changes in the vibrational characteristics of the structure. Vibration-based damage detection is a rapidly developing technology, and a number of methods have been proposed. A new factor called the Modal Flexibility Participating Factor (MFPF) is introduced, which is a good measure of changes in the structural characteristics. First, it is sensitive to bridge damage and can be directly used to indicate damage. Secondly, the MFPF itself is robust and insensitive to measurement noise. One of the major improvements is the use of mode shapes in the bridge damage model. Mode shapes were used in the study because they are orthogonal, i.e., each mode shape of a system is linearly independent of all other mode shapes of the system. Hence, mode shapes can move independently, in the sense that excitation of one mode will never cause motion of a different mode. Also, the mode shapes of a system change if and only if there is a change in the physical characteristics and/or boundary conditions of the system. 
The modal parameters (including natural frequencies, mode shapes, phase angles, and coherence function) of the I-beam are collected using Modal Analysis (in the laboratory, using impact hammer modal testing). Also, the finite element method is used to obtain mode shapes of the I-beam using ANSYS. The mode shapes are then used to represent I-beam deflection because deflection is time-varying, and robust bases are vital to stably represent deflection. These same shapes tend to dominate the motion during an earthquake, vehicle motion, windstorm, etc. The proposed damage model has been tested for different types of loadings and beams using numerical simulations. From these tests it is evident that the damage model can significantly increase the signal-to-noise ratio and is a good indicator of damage. The change in MFPF when damage occurs, together with the change in natural frequency and the change at specific locations of the mode shape, will provide better indications of damage. However, if these modal parameters are used individually, the values of change are almost negligible. Hence the individual modal parameters cannot be used to indicate bridge damage. Very often, the change in individual modal parameters will be buried by measurement noise. In other words, a poor signal-to-noise ratio is the main reason that individual modal parameters cannot be used to indicate bridge damage. A real-life bridge (Bridge 263) was also tested for deflection values under dynamic loading. The deflection calculation system (i.e., slope sensors) was installed on the bridge and data were collected without interrupting traffic. The data were analyzed using deflection calculation software, and a dynamic deflection testing report was generated. The dynamic testing report further corroborates the robustness of the deflection calculation system. (Abstract shortened by UMI.)
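The orthogonality property of mode shapes invoked above can be checked on a toy structure. The sketch below uses a uniform 3-DOF spring-mass chain with unit masses and unit stiffnesses (illustrative values, not the I-beam model of the dissertation); with a unit mass matrix, mass-orthogonality reduces to plain orthogonality of the eigenvectors of the stiffness matrix:

```python
import numpy as np

# Stiffness matrix of a fixed-fixed 3-DOF spring-mass chain with unit
# masses and unit springs; with M = I the eigenvectors of K are the
# mode shapes, and they are mutually orthogonal.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
eigvals, modes = np.linalg.eigh(K)   # columns of `modes` are mode shapes
gram = modes.T @ modes               # Gram matrix; should be the identity
```

Because the Gram matrix is the identity, any deflection can be stably expanded in the mode-shape basis, which is what makes mode shapes a robust basis for representing measured beam deflection.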
Full text: PDF at ProQuest
ISBN: 9781303159701
Title: Numerical Modeling and Simulation of Flame Spread Over Charring Materials
Year: 2013
Advisor: DesJardin, Paul E.
Degree: Ph.D.
Abstract: The overall objective of this dissertation is the development of a modeling and simulation approach for upward flame spread. This objective is broken into two primary tasks: development of a porous media charring model for carbon-epoxy composites and an algorithm to couple flow and structural solvers. The charring model incorporates pyrolysis decomposition, heat and mass transport, individual species tracking, and volumetric swelling using a novel finite element algorithm. Favorable comparisons to experimental data for the heat release rate (HRR) and time-to-ignition as well as the final products (mass fractions, volume percentages, porosity, etc.) are shown. The charring model and flow solvers are coupled using a newly developed conjugate heat and mass transfer algorithm designed for complex geometries in fire environments. Highlights of the coupling algorithm include: a level set description of complex moving geometry, perfect conservation of energy and mass transfer across the interface, a no-slip and no-penetration ghost-fluid interface description, and a patch level set update system that balances accuracy and computational efficiency by reducing the resolution of the Lagrangian model away from the interface. A systematic study of grid convergence order and comparison to analytical benchmark problems is conducted to show the soundness of the approach. The interface methodology is combined with the carbon-epoxy charring model and is used to study burning composites. Comparison of simulations to experimental data shows good agreement in composite material response and flame spread (critical heat flux).
Full text: PDF at ProQuest
ISBN: 9781267946423
Title: A Jacobian singularity based robust controller design for structured uncertainty
Year: 2013
Advisor: Singh, Tarunraj
Degree: Ph.D.
Abstract: Any real system will have differences between the mathematical model response and the response of the true system it represents. These differences can come from external disturbances, incomplete modeling of the dynamics (unstructured uncertainty), or simply incorrect or changing parameter values (structured uncertainty) in the model. Sources of unstructured uncertainty are unavoidable in real systems, so a controller design must always consider robustness to these effects. In many cases, when the sources of structured uncertainty are addressed as another source of unstructured uncertainty, the resulting controller design is conservative. By accurately addressing and designing a controller for the structured uncertainty, large benefits in controller performance can be generated, since the conservative bound is reduced. The classical approach to output shaping of a system involves a feedback loop, since this architecture is more robust to differences between the mathematical model and the true system. This dissertation presents an approach to design a feedback controller which is robust to structured uncertainties in a plant in an accurate and minimal way. The approach begins by identifying a critical set of system parameters which is proven to represent the full set of system parameters in the Nyquist plane. This critical set is populated by all parameter vectors which satisfy a developed deficiency condition and is the minimal set which contributes to the Nyquist plane portraits. The invariance of this critical set to control structure is shown explicitly. An improvement over previous work is the addition of a numerical solution technique which guarantees that all critical points are found. The presented approach allows the designer to set minimum relative stability margins, such as gain and phase margins, which previous work could not compute accurately or with confidence in the results. 
A robust controller is designed with respect to this critical set. The presented technique will yield a set of controller gains which will meet the desired performance criteria, allowing the designer to select gains from this set to account for qualitative analysis results. The advantages of the presented technique are highlighted by a direct comparison to previous work. These advantages are also shown through a series of examples. The final example, which is a helicopter in hover, shows that a system with nonlinear coefficient dependencies can be accurately addressed by the current technique considering only a minimal set of system parameters, which cannot be done accurately by the previous work.
Full text: PDF at ProQuest
ISBN: 9781267945761
MAE researchers have developed advanced computational techniques for fire simulation and multi-phase reacting turbulent flows.
UB MAE researchers in computational mechanics have developed a high fidelity volcanic landslide simulator to aid geologists in mapping the hazard areas at locations such as the island of Montserrat.
A Level Set Embedded Interface Method has been developed at the Computational Fluid Dynamics Laboratory to simulate conjugate heat transfer for irregular geometries.
MAE's Laser Flow Diagnostic Laboratory is a leader in holographic particle image velocimetry, a three-dimensional, next-generation flow diagnostics tool.
MAE's Automation, Robotics, and Mechatronics Laboratory is conducting research both on the theoretical formulation and experimental validation of such novel mechatronic systems as multi-robot collaboration.
The nonlinear estimation group is developing techniques for propagating uncertainties through nonlinear dynamical systems for better forecasting and output uncertainty characterization.
A study of non-premixed flame-wall interaction using a vortex ring configuration was conducted for the first time at the Computational Fluid Dynamics Laboratory.