In this paper we show how formal computer-science concepts, such as encoding, algorithm, or computability, can be interpreted philosophically, including ontologically and epistemologically. Such interpretations lead to questions and problems whose working solutions constitute some form of pre-philosophical worldview. In this work we focus on questions inspired by the IT distinction between digitality and analogicity, which originates in the mathematical distinction between discreteness and continuity. These include the following questions: (1) Is the deep structure of physical reality digital or analog? (2) Does the human mind more closely resemble a digital or an analog computational system? (3) Does the answer to the second question give us cognitively fruitful insight into the cognitive limitations of the mind? As a particularly important basis for these questions, we consider the fact that the computational power (i.e., the range of solvable problems) of some types of analog computation is greater than that of digital computation.
We address one of the weaknesses of RSA cryptosystems, namely the existence of private keys that are relatively easy for an attacker to compromise. The problem can be mitigated by Internet service providers, but this requires some computational effort. We propose a proof of concept of a GPGPU-accelerated system that can help detect and eliminate users' weak keys. We have designed the algorithms and developed GPU-optimised program code that is now publicly available and substantially outperforms the tested CPU. The source code of the OpenSSL library was adapted for GPGPU, and the resulting code can run on both GPU and CPU processors. Additionally, we present a solution for mapping a triangular grid onto the rectangular GPU grid, a basic dilemma in many problems that involve pair-wise analysis of a set of elements. A comparison of two data-caching methods on the GPGPU also leads to interesting general conclusions. We present results of performance experiments with the selected algorithms for various RSA key lengths, GPU grid configurations, and sizes of the tested key set.
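The triangular-to-rectangular grid mapping mentioned in the abstract can be illustrated with a closed-form index formula. The sketch below is plain Python, not the paper's GPU code; as the per-pair work it uses a pairwise-GCD weak-key check on toy moduli, which is one standard way such shared-factor keys are detected (the abstract does not specify the paper's exact algorithm).

```python
import math
from math import gcd

def pair_from_index(k, n):
    """Map a flat index k in [0, n*(n-1)/2) to the pair (i, j) with i < j.

    The closed form lets a rectangular (flat) GPU thread grid cover exactly
    the upper triangle of an n-by-n pair matrix, with no idle threads.
    """
    i = n - 2 - int(math.floor(math.sqrt(-8 * k + 4 * n * (n - 1) - 7) / 2.0 - 0.5))
    j = k + i + 1 - n * (n - 1) // 2 + (n - i) * (n - i - 1) // 2
    return i, j

def weak_pairs(moduli):
    """Indices of RSA moduli sharing a prime factor (pairwise-GCD check)."""
    n = len(moduli)
    weak = set()
    for k in range(n * (n - 1) // 2):      # on a GPU, one thread per k
        i, j = pair_from_index(k, n)
        if gcd(moduli[i], moduli[j]) > 1:  # shared factor => both keys break
            weak.update((i, j))
    return weak

# Toy moduli: 35 = 5*7 and 77 = 7*11 share the factor 7; 221 = 13*17 is clean.
print(sorted(weak_pairs([35, 77, 221])))  # -> [0, 1]
```

On a GPU, the loop over `k` would be replaced by one thread per flat index, so a one-dimensional rectangular grid launch covers the triangular pair space directly.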
Mobile devices have become an integral part of our lives and provide dozens of useful services to their users. However, the usability of mobile devices is hindered by limited battery lifetime. Energy conservation can extend battery lifetime; however, any energy-management policy requires accurate prediction of energy consumption, which is impossible without reliable energy measurement and estimation methods and tools. We present an analysis of energy measurement methodologies and describe implementations of internal (profiling) software-based methodologies (proprietary, custom) and external software-based methodologies (Java API, Sensor API, GSM AT). The methods are applied to measure energy consumption on a variety of mobile devices (laptop PC, PDA, smartphone). A case study of measuring energy consumption on a mobile computer using the 3DMark06 benchmarking software is presented.
This paper concerns measurement procedures at an emotion-monitoring stand designed for tracking human emotions from physiological characteristics in human-computer interaction. The paper addresses the key problem of physiological measurements being disturbed by motion typical of human-computer interaction, such as keyboard typing or mouse movement. An original experiment is described that aimed at a practical evaluation of the measurement procedures performed at the emotion-monitoring stand constructed at GUT. Different sensor locations were considered and evaluated for suitability and measurement precision in human-computer interaction monitoring. Alternative locations (ear lobes and forearms) for skin-conductance, blood-volume-pulse and temperature sensors were proposed and verified. The alternative locations showed correlation with the traditional ones as well as lower sensitivity to movements such as typing or mouse moving, and may therefore be a better solution for monitoring human-computer interaction.
Some materials-related microstructural problems calculated using the phase-field method are presented. It is well known that the phase-field method requires mesh resolution of a diffuse interface. This makes the use of mesh adaptivity essential, especially for fast-evolving interfaces and other transient problems. Complex problems in 3D are also computationally challenging, so parallel computation is considered necessary. In this paper, a parallel adaptive finite element scheme is proposed. To facilitate derefinement, the scheme stores the refinement level of each node and edge in 2D, and of each node and face in 3D, instead of the complete history of refinements. This information is local, so the exchange of information between processes is minimized and less memory is used. The parallel adaptive algorithms, which run on distributed-memory machines, are applied in numerical simulations of dendritic growth and capillary-driven flows.
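The idea of storing only refinement levels, rather than the full refinement history, can be sketched in one dimension. The code below is a hypothetical minimal illustration, not the paper's finite-element implementation: each element carries only its level, and siblings eligible for merging are recognized purely from that stored level and their position.

```python
# Hypothetical 1D sketch: each element is (left, right, level); only the
# level is stored, not the refinement history.

def refine(mesh, flag):
    """Split every element for which flag(element) is True."""
    out = []
    for (a, b, lvl) in mesh:
        if flag((a, b, lvl)):
            m = 0.5 * (a + b)
            out += [(a, m, lvl + 1), (m, b, lvl + 1)]
        else:
            out.append((a, b, lvl))
    return out

def derefine(mesh, flag):
    """Merge sibling pairs one level up when both are flagged for coarsening.

    Siblings are recognized from stored level and position alone: two
    neighbours of equal level l > 0 that share an endpoint and whose pair
    aligns with the level-(l-1) grid of the unit interval.
    """
    out, i = [], 0
    while i < len(mesh):
        if i + 1 < len(mesh):
            (a, m, l1), (m2, b, l2) = mesh[i], mesh[i + 1]
            siblings = (l1 == l2 and l1 > 0 and m == m2
                        and round(a * 2 ** l1) % 2 == 0)
            if siblings and flag(mesh[i]) and flag(mesh[i + 1]):
                out.append((a, b, l1 - 1))
                i += 2
                continue
        out.append(mesh[i])
        i += 1
    return out

mesh = [(0.0, 1.0, 0)]
mesh = refine(mesh, lambda e: True)         # two level-1 elements
mesh = refine(mesh, lambda e: e[0] < 0.5)   # refine only the left half
print(mesh)  # -> [(0.0, 0.25, 2), (0.25, 0.5, 2), (0.5, 1.0, 1)]
```

Because the sibling test needs no record of how the mesh was built, a parallel implementation can derefine using purely local data, which is the memory and communication advantage the scheme exploits.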
Computational intelligence tools make a major contribution to analysing the properties of materials without extensive experimentation. B4C particles are used to improve the strength of materials: the percentage of these particles in micro- and nano-composites determines the mechanical properties. Different combinations of input parameters characterize the raw materials. The load and the B4C particle content (0%, 2%, 4%, 6%, 8% and 10%) determine wear behaviour such as the coefficient of friction (CoF) and the wear rate. Material properties such as stress, strain, percentage of elongation and impact energy are studied. Temperature-dependent CoF and wear rate are analysed for temperatures of 30°C, 100°C and 200°C. In addition, the CoF and wear rate of the materials are predicted with respect to load, weight % of B4C and weight % of nano hexagonal boron nitride (hBN). Intelligent tools, including neural networks (BPNN, RBNN), fuzzy logic (FL) and decision trees, are applied to analyse these characteristics of micro/nano composites with the inclusion of B4C particles and nano hBN without physically conducting the experiments in the lab. The material properties are classified with respect to the range of input parameters using the computational model.
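As an illustration of the BPNN approach mentioned above, the sketch below trains a minimal one-hidden-layer network by backpropagation on synthetic, normalized (load, B4C wt%, temperature) to wear-rate data. The target function and all numeric values are invented stand-ins, not the paper's measurements.

```python
import math
import random

random.seed(1)

def forward(net, x):
    """Return hidden activations and the scalar network output."""
    w1, b1, w2, b2 = net
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

def train(samples, hidden=4, lr=0.05, epochs=500):
    """Stochastic-gradient backpropagation for one tanh hidden layer."""
    nin = len(samples[0][0])
    net = [
        [[random.uniform(-0.5, 0.5) for _ in range(nin)] for _ in range(hidden)],  # w1
        [0.0] * hidden,                                                            # b1
        [random.uniform(-0.5, 0.5) for _ in range(hidden)],                        # w2
        0.0,                                                                       # b2
    ]
    for _ in range(epochs):
        for x, y in samples:
            w1, b1, w2, _ = net
            h, out = forward(net, x)
            err = out - y                            # gradient of (out - y)^2 / 2
            for j in range(hidden):
                dh = err * w2[j] * (1.0 - h[j] ** 2)  # chain rule through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * dh
                for i in range(nin):
                    w1[j][i] -= lr * dh * x[i]
            net[3] -= lr * err                       # update output bias b2
    return net

# Synthetic stand-in data: wear rate as a simple function of the inputs.
data = [((l, b, t), 0.5 * l + 0.3 * b - 0.2 * t)
        for l in (0.0, 0.5, 1.0)
        for b in (0.0, 0.5, 1.0)
        for t in (0.0, 0.5, 1.0)]
net = train(data)
errors = [abs(forward(net, x)[1] - y) for x, y in data]
print("max abs error:", max(errors))
```

In practice one would use a library implementation and measured data; the point here is only the shape of the technique: fit the network on experimental samples, then query it for input combinations that were never physically tested.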
Parallel computers are becoming more widely available. Parallel processing seems to be the natural way to improve the computational efficiency of multibody simulations. Within this work we estimate the efficiency of parallel computations performed using one of the commercial multibody solvers. First, a short theoretical outline is presented to give an overview of modeling issues in multibody dynamics. Next, the experimental part is presented. A series of dynamics analyses is carried out, using test mechanisms with a variable number of bodies to gather performance results for the solver. The obtained data allow us to estimate the number of bodies sufficient to gain benefits from parallel computing, as well as the level of those benefits. The profits of parallel processing are also examined in the presence of contact forces: performance benefits are shown for a simulated multilink belt chain whose model includes contact forces.
Green spaces are an integral element of urban structures. They are not only a place of rest for their users but also positively affect their well-being and health. The effect of these spaces is the greater, the more smoothly they combine into larger urban layouts, i.e., strings of greenery. The introduction of urban greenery can and should be one of the basic elements of revitalization. Often, however, greenery is designed without the multi-aspect analysis that enables an understanding of local conditions and the use of a given place's existing potential. The use of computational design in conjunction with generally available databases, such as numerical SRTM terrain models, the publicly available OSM map database and EPW meteorological data, allows space to be designed in a more comprehensive way. These design methods allow the greenery design in a given area to be better matched to specific architectural, urban and environmental conditions.
Computational modeling plays an important role in the methodology of contemporary science. The epistemological role of modeling and simulation leads to questions about a possible use of this method in philosophy. Attempts to use mathematical tools to formulate philosophical concepts trace back to Spinoza and Newton. Newtonian natural philosophy became an example of the successful use of mathematical thinking to describe the fundamental level of nature. Newton's approach initiated a new scientific field of research in physics, and at the same time his system became a source of new philosophical considerations about physical reality. According to Michael Heller, some physical theories may be treated as formalizations of philosophical conceptions. Computational modeling may be an extension of this idea; this is what I would like to present in the article. I also consider computational modeling in philosophy as a source of new philosophical metaphors, an idea proposed in David J. Bolter's conception of the defining technology. The considerations lead to the following conclusion: significant changes have been taking place in the methodology of philosophy; the new approach does not make traditional methods obsolete, but is rather a new analytical tool for philosophy and a source of inspiring metaphors.
Disk motors are characterized by the axial direction of the main magnetic flux and a variable length of the magnetic flux path along varying stator/rotor radii. This is why it is generally accepted that reliable electromagnetic calculations for such machines should be carried out using the FEM on 3D models. The 3D approach makes it possible to take into account an entire spectrum of different effects, but such computational analysis is very time-consuming; this is particularly true for machines with only one magnetic axis. An alternative computational method based on a 2D FEM model of a cylindrical motor is proposed in the paper. The obtained calculation results have been verified against lab-test results for a physical model. The proposed method leads to a significant decrease in computational time, i.e., a faster iterative search for the most advantageous design.