An analysis of the effectiveness of gradient algorithms for two-dimensional steady-state heat transfer problems is performed. Three gradient algorithms - the BCG (biconjugate gradient) algorithm, the BiCGSTAB (biconjugate gradient stabilized) algorithm, and the CGS (conjugate gradient squared) algorithm - are implemented in a computer code. Because boundary conditions of the first type are imposed, it is possible to compare the results with the analytical solution. Computations are carried out for different numerical grid densities, so it is possible to investigate how the grid density influences the efficiency of the gradient algorithms. The total computational time, the residual drop and the iteration time of the gradient algorithms are additionally compared with the performance of the SOR (successive over-relaxation) method.
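The SOR baseline mentioned above can be illustrated with a minimal sketch (an illustration by the editor, not the paper's code) for a first-type (Dirichlet) problem whose exact solution is known, so the error can be checked:

```python
import numpy as np

def sor_laplace(u, omega=1.8, tol=1e-10, max_iter=10000):
    """Solve the 2D Laplace equation on a uniform grid by SOR.
    The boundary values of `u` act as first-type (Dirichlet) conditions."""
    for it in range(max_iter):
        max_resid = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                resid = gs - u[i, j]
                u[i, j] += omega * resid          # over-relaxed update
                max_resid = max(max_resid, abs(resid))
        if max_resid < tol:                       # required residual drop reached
            return u, it + 1
    return u, max_iter

# Dirichlet data taken from the harmonic field T(x, y) = x, so the exact
# interior solution is known and the numerical error can be measured.
n = 17
x = np.linspace(0.0, 1.0, n)
u = np.zeros((n, n))
u[0, :], u[-1, :] = x[0], x[-1]   # row index i corresponds to coordinate x[i]
u[:, 0] = u[:, -1] = x
u, iters = sor_laplace(u)
err = np.abs(u - x[:, None]).max()
```

The gradient algorithms compared in the paper replace this stationary sweep with Krylov-subspace iterations, which is where the differences in residual drop and iteration time arise.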
Raman spectrometers are devices which enable fast and non-contact identification of examined chemicals. These devices utilize the Raman phenomenon to identify unknown and often illicit chemicals (e.g. drugs, explosives) without the necessity of preparing them. Raman devices can now be portable and can therefore be more widely used to improve security in public places. Unfortunately, measuring Raman spectra is a challenge due to the noise and interference present outside the laboratory. The design of a portable Raman spectrometer developed at the Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology is presented. The paper outlines the sources of interference present in Raman spectra measurements and the signal processing techniques required to reduce their influence (e.g. background removal, spectra smoothing). Finally, selected algorithms for automated classification of chemicals are presented. The algorithms compare the measured Raman spectrum with a reference spectra library to identify the sample. The detection efficiency of these algorithms is discussed and directions of further research are outlined.
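The library-matching step can be sketched in a toy form (names and data are hypothetical, not the authors' implementation), scoring candidates by normalized correlation:

```python
import numpy as np

def identify(spectrum, library):
    """Return the library entry whose reference spectrum best matches the
    measured one, scored by normalized correlation (cosine similarity)."""
    def score(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    best = max(library, key=lambda name: score(spectrum, library[name]))
    return best, score(spectrum, library[best])

# Hypothetical 4-channel "spectra" - real spectra would first undergo
# background removal and smoothing, as described above.
library = {
    "substance_A": np.array([1.0, 0.0, 2.0, 0.0]),
    "substance_B": np.array([0.0, 2.0, 0.0, 1.0]),
}
measured = np.array([0.9, 0.1, 2.1, 0.0])   # noisy copy of substance_A
name, similarity = identify(measured, library)
```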
The article presents an example of the use of functional series for the analysis of nonlinear systems for discrete-time signals. The homogeneous operator is defined and decomposed into three component operators: the multiplying operator, the convolution operator and the alignment operator. A case important from a practical point of view is considered – a cascade connection of two polynomial systems. A new, binary algorithm for determining the sequence of complex kernels of the cascade from the two sequences of kernels of the component systems is presented. Due to its simplicity, it can be used during iterative processes in the analysis of nonlinear systems (e.g. feedback systems).
The primary contribution of the paper is the application of an efficient formulation to the simulation of an open-loop lightweight robotic manipulator. The framework employed in the paper makes use of the spatial operator algebra, and the associated equations are expressed in joint space. This compact representation of the manipulator dynamics makes it possible to solve the robot forward and inverse dynamics problems in a recursive and fast manner. In its current form, the presented algorithm can be applied to the dynamics simulation of an open-loop chain system possessing any number of joints. Specifically, the formulation has been successfully applied to the analysis of the 7-DOF KUKA LWR robot. Results from a number of test cases verify the calculations.
The presented paper concerns CFD optimization of a straight-through labyrinth seal with a smooth land. The aim of the process was to reduce the leakage flow through a labyrinth seal with two fins. Due to the complexity of the problem and to limit the computation time, a decision was made to modify the standard evolutionary optimization algorithm by adding an approach based on a metamodel. Five basic geometrical parameters of the labyrinth seal were taken into account: the angles of the seal’s two fins, and the fin width, height and pitch. Other parameters were constrained, including the clearance over the fins. The CFD calculations were carried out using the ANSYS-CFX commercial code. The in-house optimization algorithm was prepared in the Matlab environment. The presented metamodel was built using a Multi-Layer Perceptron Neural Network trained with the Levenberg-Marquardt algorithm. The Neural Network training and validation were carried out based on data from CFD analyses performed for different geometrical configurations of the labyrinth seal. The initial response surface was built based on the design of experiment (DOE). The novelty of the proposed methodology is the steady improvement in the goodness of fit of the response surface. The accuracy of the response surface is increased by CFD calculations of additional geometrical configurations of the labyrinth seal. These configurations are created by evolutionary algorithm operators such as selection, crossover and mutation. The created metamodel makes it possible to run a fast optimization process using the previously prepared response surface. The metamodel solution is validated against CFD calculations and then complements the next generation of the evolutionary algorithm.
The transfer function (TF) method is presently a well-known method used to detect various types of winding damage in power transformers. Although abundant research has been done on this subject using laboratory windings as test objects, it is hard to find a study whose test objects are actual large-power transformer windings. Hence, a 400 kV disc winding consisting of 86 discs is used in this paper to study turn-to-turn short circuits with the help of the TF method. To evaluate the effects of this type of fault on the TF curves, some mathematical comparison algorithms are used in this research.
Electromagnetic arrangements which create a magnetic field of a required distribution and magnitude are widely used in electrical engineering. The development of new, accurate design methods is still a valid topic of technical investigation. From the theoretical point of view, the problem belongs to magnetic field synthesis theory. This paper discusses the problem of designing the shape of a solenoid which produces a uniform magnetic field on its axis. The method of finding an optimal shape is based on a genetic algorithm (GA) coupled with Bézier curves.
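In such an approach the solenoid profile is parameterized by a Bézier curve whose control points the GA evolves. A generic sketch of Bézier evaluation via de Casteljau's algorithm (illustrative only, not tied to the paper's implementation; the profile values are made up):

```python
def de_casteljau(control_points, t):
    """Evaluate a 2D Bezier curve at parameter t (0 <= t <= 1) by
    repeated linear interpolation of the control polygon."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A GA individual would encode control points, e.g. a hypothetical solenoid
# radius profile along the axis (axial position, radius):
profile = [(0.0, 1.0), (0.5, 1.4), (1.0, 1.0)]
mid = de_casteljau(profile, 0.5)
```

The GA fitness would then penalize the deviation of the on-axis field, computed from this profile, from uniformity.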
The fundamental concepts of nano and quantum systems of informatics are presented. The nanotechnological processes taking place in biological systems of informatics are discussed in terms of informatics. The presented analysis shows that the application of nanotechnologies in technical informatics systems enables the realization of processes for the formation of products and objects with a self-replication feature, similar to the processes existing in biological informatics systems. It also seems that quantum technologies enable further miniaturization of technical systems of informatics and shorten the execution time of some computing processes, e.g. Shor's and Grover's algorithms.
Evolutionary computing and algorithms are well-known optimisation tools utilized in various areas of analogue electronic circuit design and diagnosis. This paper presents the possibility of using two evolutionary algorithms - the genetic algorithm and evolutionary strategies - for the purpose of analogue circuit yield and cost optimisation. The terms technological yield and parametric yield are defined. Procedures of parametric yield optimisation, such as design centring, design tolerancing, and design centring with tolerancing, are introduced. The basics of the genetic algorithm and evolutionary strategies are presented, the differences between these two algorithms are highlighted, and certain aspects of implementation are discussed. The effectiveness of both algorithms in parametric yield optimisation has been tested on several examples and the results are presented. The share of the evolutionary algorithms' computation cost in the total optimisation cost is analysed.
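Parametric yield, the fraction of circuits whose randomly perturbed parameters still meet the specification, is commonly estimated by Monte Carlo sampling; a minimal sketch with a hypothetical RC low-pass spec (all component values, tolerances and the spec window are illustrative):

```python
import numpy as np

def parametric_yield(nominal, tol, spec_ok, n=20000, seed=0):
    """Monte Carlo estimate of parametric yield: draw parameter sets
    uniformly within the tolerance box and count those meeting the spec."""
    rng = np.random.default_rng(seed)
    samples = nominal * (1.0 + tol * rng.uniform(-1.0, 1.0, size=(n, len(nominal))))
    return sum(spec_ok(p) for p in samples) / n

# Hypothetical RC low-pass filter with a spec on the -3 dB cutoff.
nominal = np.array([1.0e3, 100e-9])     # R = 1 kOhm, C = 100 nF
tol = np.array([0.05, 0.10])            # 5 % and 10 % tolerances

def spec_ok(p):
    fc = 1.0 / (2.0 * np.pi * p[0] * p[1])   # cutoff frequency in Hz
    return 1400.0 <= fc <= 1800.0

y = parametric_yield(nominal, tol, spec_ok)
```

Design centring then shifts `nominal` (and tolerancing adjusts `tol`) to maximize such a yield estimate, which is the objective the evolutionary algorithms optimise.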
The article presents the problem of scheduling a multi-stage project with limited availability of resources under the discounted cash flow maximization criterion from the perspective of a contractor. The contractor's cash outflows are associated with the execution of activities. The client's payments (cash inflows for the contractor) are made after the completion of the agreed project stages. The proposed solution to this problem is the use of insertion algorithms. Schedules are generated using forward and backward schedule generation schemes and modified justification techniques. The effectiveness of the proposed procedures is examined with the use of standard test instances extended with additionally defined financial settlements of a project.
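The discounted cash flow criterion values each outflow (activity cost) and inflow (stage payment) at time zero; a minimal sketch (the schedule and discount rate are illustrative, not a test instance from the paper):

```python
def discounted_cash_flow(cash_flows, rate):
    """Net present value of (period, amount) cash flows, discounted at
    `rate` per period; negative amounts are contractor outflows."""
    return sum(amount / (1.0 + rate) ** period for period, amount in cash_flows)

# Illustrative two-stage project: outflows while activities execute,
# client payments after each agreed stage is completed.
schedule = [(1, -40.0), (2, 60.0), (3, -40.0), (5, 70.0)]
npv = discounted_cash_flow(schedule, rate=0.05)
```

The criterion explains why schedule shape matters: delaying an inflow lowers its present value, so insertion moves that pull stage completions earlier (or push outflows later) improve the objective.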
Ultrasound imaging is one of the least expensive and safest diagnostic modalities in routine use. An attractive development in this field is a two-dimensional (2D) matrix probe for three-dimensional (3D) imaging. The main problem in implementing such a probe is the large number of elements it needs to use. When the number of elements is reduced, the side lobes arising from the transducer change, along with the grating lobes linked to the periodic disposition of the elements. The grating lobes are reduced by placing the elements aperiodically, without regard to a regular grid. In this study, the Binary Bat Algorithm (BBA) is used to optimize the number of active elements in order to lower the side lobe level. The results are compared to other optimization methods to validate the proposed algorithm.
The spectrum defragmentation problem in elastic optical networks has been considered under the assumption that all connections can be realized in the switching nodes. However, this assumption holds only when the switching fabric has appropriate combinatorial properties. In this paper, we consider the defragmentation problem in one architecture of wavelength-space-wavelength switching fabrics. First, we discuss the requirements for this switching fabric below which defragmentation does not always end in success. Then, we propose defragmentation algorithms and evaluate them by simulation. The results show that the proposed algorithms can increase the number of connections realized in the switching fabric and reduce the loss probability.
In areas of acoustic research or applications that deal with not precisely known or variable conditions, a method of adaptation to the uncertainty or changes is usually necessary. When searching for an adaptation algorithm, it is hard to overlook the least mean squares (LMS) algorithm. Its simplicity, speed of computation, and robustness have won it a wide range of applications: from telecommunications, through acoustics and vibration, to seismology. The algorithm, however, still lacks a full theoretical analysis. This is probably the cause of its main drawback: the need for a careful choice of the step size - which is the reason why so many variable step size flavors of the LMS algorithm have been developed.
This paper contributes to both of the above-mentioned characteristics of the LMS algorithm. First, it shows the derivation of a new necessary condition for LMS algorithm convergence. The condition, although weak, proved useful in developing a new variable step size LMS algorithm, which turned out to be quite different from the algorithms known from the literature. Moreover, the algorithm proved effective in both simulations and laboratory experiments, covering two possible applications: adaptive line enhancement and active noise control.
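For reference, the fixed-step LMS update that variable step size variants modify, shown in a system-identification sketch (the filter, signals and step size are illustrative, not the paper's experiment):

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """Fixed-step LMS: adapt FIR weights w so that y[n] = w . x_vec
    tracks the desired signal d[n]; returns final weights and error."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e[n] = d[n] - w @ x_vec                 # a priori error
        w += 2.0 * mu * e[n] * x_vec            # mu must satisfy the convergence bound
    return w, e

# Identify a known 4-tap FIR system from its input/output signals.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])             # "unknown" system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]                  # noise-free system output
w, e = lms(x, d, n_taps=4, mu=0.01)
```

The fixed `mu` trades convergence speed against stability and misadjustment, which is exactly the choice that variable step size schemes automate.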
This work deals with the inverse problem associated with 3D crack identification inside a conductive material using eddy current measurements. In order to accelerate the time-consuming direct optimization, the reconstruction is provided by the minimization of a least-squares functional of the data-model misfit using the space mapping (SM) methodology. This technique makes it possible to shift the optimization burden from a time-consuming, accurate model to a less precise but faster coarse surrogate model. In this work, the finite element method (FEM) is used as the fine model, while a model based on the volume integral method (VIM) serves as the coarse model. The application of the proposed method to shape reconstruction shortens the evaluation time required to provide proper parameter estimation of surface defects.
An automated procedure based on evolutionary computation and Finite Element Analysis (FEA) is proposed to synthesize the optimal distribution of nanoparticles (NPs) in multi-site injection for Magnetic Fluid Hyperthermia (MFH) therapy. An Evolution Strategy and the Non-dominated Sorting Genetic Algorithm (NSGA) are used as optimization procedures coupled with a Finite Element computation tool.
The article describes the problem of selecting heat treatment parameters to obtain the required mechanical properties in heat-treated bronzes. A methodology for the construction of a classification model based on rough set theory is presented. A model of this type allows the construction of inference rules also in the case when our knowledge of the existing phenomena is incomplete, and this is a situation commonly encountered when new materials enter the market. In the case of new test materials, such as the grade of bronze described in this article, we still lack full knowledge, and the choice of heat treatment parameters is based on fragmentary knowledge resulting from experimental studies. The measurement results can be useful in building a model; this model, however, cannot be deterministic and can only approximate the stochastic nature of the phenomena. The use of rough set theory allows for efficient inference also in areas that are not yet fully explored.
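The core of such a rough-set model is the indiscernibility relation and the lower/upper approximation of a decision class; a minimal sketch with hypothetical heat-treatment data (not the paper's data set):

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Rough-set lower and upper approximations of the set `target`
    under the indiscernibility relation induced by `attributes`."""
    classes = defaultdict(set)
    for name, attrs in objects.items():
        classes[tuple(attrs[a] for a in attributes)].add(name)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # class entirely inside target: certainly in
            lower |= cls
        if cls & target:         # class overlaps target: possibly in
            upper |= cls
    return lower, upper

# Hypothetical bronze samples described by tempering temperature and time;
# `target` holds the samples that reached the required hardness.
samples = {
    "b1": {"temp": "high", "time": "short"},
    "b2": {"temp": "high", "time": "short"},
    "b3": {"temp": "low",  "time": "long"},
    "b4": {"temp": "high", "time": "long"},
}
target = {"b1", "b4"}
lower, upper = approximations(samples, ["temp", "time"], target)
```

Here b1 and b2 are indiscernible yet only b1 met the hardness requirement, so they fall in the boundary region between the two approximations; that region is precisely where the model expresses incomplete knowledge instead of forcing a deterministic rule.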