We describe the spatial variability of snow accumulation on three selected glaciers in Spitsbergen (Hansbreen, Werenskioldbreen and Aavatsmarkbreen) in the winter seasons of 1988/89, 1998/99 and 2001/02, respectively. The distribution of snow cover is determined by the interplay between the orientation of the glacier axes and the dominant easterly winds. The snow distribution is regular on glaciers oriented E–W, but more complicated on glaciers oriented meridionally. The western parts of the glaciers are more predisposed to snow accumulation than the eastern ones, owing to the intensity of snowdrift. Statistical relationships were determined between snow accumulation, the deviation of accumulation from the mean values, and accumulation variability on the one hand, and topographic parameters such as altitude, slope inclination, aspect, slope curvature and distance from the glacier edge on the other. The only significant relationships occurred between snow accumulation and altitude (r = 0.64–0.91).
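The altitude relationship reported above rests on a plain Pearson correlation coefficient. A minimal sketch of that computation follows; the stake measurements are invented purely for illustration and are not the paper's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical snow-stake data: altitude (m a.s.l.) vs. accumulation (m w.e.)
altitude = [50, 120, 200, 280, 350, 430, 510]
accum = [0.6, 0.8, 0.9, 1.2, 1.3, 1.5, 1.7]
r = pearson_r(altitude, accum)
```

With data this close to linear, r lands near the upper end of the 0.64–0.91 range the abstract cites for real glaciers.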
The paper highlights the importance of supervising and assessing the stability of foundry process parameters. It identifies the parameters that can be effectively tracked and analysed using dedicated computer systems for data acquisition and exploration (Acquisition and Data Mining, A&DM, systems). The state of research and the methods of solving production problems with the help of computational intelligence (CI) systems are characterised. The research part demonstrates the capabilities of an original A&DM system in selected analyses of recorded data for forecasting casting defects (the effect), using a chosen iron foundry as an example. Implementation tests and analyses were performed on selected products made of grey and nodular cast iron grades (castings with a maximum weight of 50 kg, cast on automatic moulding lines in disposable green sand moulds). The results of the validation tests and the applied methods and algorithms (the original system operating in real production conditions) confirmed the effectiveness of the assumptions and of the methods described. The usability and benefits of A&DM systems in foundries are measurable: they stabilise production conditions in the sections covered by these systems and, as a result, improve casting quality and reduce the number of defects.
This paper describes a forecasting exercise for close-to-open returns on major global stock indices, based on high-frequency price patterns that become available in foreign markets overnight. Generally speaking, out-of-sample forecast performance depends on the forecast method as well as on the information that the forecasts are based on. In this paper both aspects are considered. The fact that the close-to-open gap is a scalar response to a functional predictor, namely an overnight foreign price pattern, places the prediction exercise in the realm of functional data analysis. Both parametric and nonparametric functional data analysis are considered and compared with a simple linear benchmark model. The information set is varied by dividing global markets into three clusters (Asia-Pacific, Europe and North America) and including or excluding price patterns on a per-cluster basis. The overall best-performing forecast is nonparametric and uses all available information, suggesting the presence of nonlinear relations between the overnight price patterns and the opening gaps.
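A scalar-on-function nonparametric forecast of the kind described above can be sketched, in heavily simplified form, as Nadaraya-Watson kernel regression on discretised overnight curves. The curves, gaps and bandwidth below are invented for illustration; the paper's actual estimator and tuning may differ:

```python
from math import exp, sqrt

def curve_distance(f, g):
    """L2-type distance between two discretised price curves."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(f, g)))

def nw_predict(train_curves, train_gaps, new_curve, h):
    """Nadaraya-Watson estimate of the scalar response (opening gap) for a
    new overnight price pattern: a Gaussian kernel is applied to the
    curve-to-curve distance with bandwidth h, and the training gaps are
    averaged with the resulting weights."""
    weights = [exp(-(curve_distance(c, new_curve) / h) ** 2)
               for c in train_curves]
    s = sum(weights)
    return sum(w * y for w, y in zip(weights, train_gaps)) / s

# Hypothetical training set: overnight patterns and next-day opening gaps
curves = [[0.0, 0.1, 0.2], [0.0, -0.1, -0.2], [0.0, 0.05, 0.1]]
gaps = [0.3, -0.25, 0.12]
pred = nw_predict(curves, gaps, [0.0, 0.09, 0.19], h=0.1)
```

Because the new pattern nearly coincides with the first training curve, the estimate is pulled toward that curve's gap of 0.3, which is the locality property that lets this estimator pick up nonlinear relations.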
In economics we often face systems that intrinsically impose a hierarchical structure on their components, e.g., when modeling trade accounts related to foreign exchange or when optimizing a regional air protection policy. The problem of reconciling forecasts obtained at different levels of a hierarchy has been addressed many times in the statistical and econometric literature, and concerns bringing together forecasts obtained independently at those levels. This paper deals with this issue for hierarchical functional time series. We present and critically discuss the state of the art and indicate opportunities for applying these methods to a certain environmental protection problem. We critically compare the best predictor known from the literature with our own original proposal. Within the paper we study a macromodel describing day and night air pollution in the Silesia region, divided into five subregions.
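The reconciliation problem can be illustrated with the two simplest textbook schemes, bottom-up and top-down; these are generic devices, not the predictors compared in the paper, and the five subregional pollution forecasts below are invented numbers:

```python
def bottom_up(subregion_forecasts):
    """Bottom-up reconciliation: the top-level forecast is replaced by the
    sum of the lowest-level forecasts, so the hierarchy is coherent by
    construction."""
    return sum(subregion_forecasts)

def top_down(total_forecast, subregion_forecasts):
    """Top-down reconciliation: the top-level forecast is distributed
    across subregions in proportion to their base forecasts."""
    s = sum(subregion_forecasts)
    return [total_forecast * f / s for f in subregion_forecasts]

# Hypothetical base forecasts of daily pollution load in five subregions,
# plus an independently obtained regional total that disagrees with them
subregions = [42.0, 38.5, 51.0, 29.5, 45.0]
total_base = 210.0
coherent_total = bottom_up(subregions)            # forces coherence upward
coherent_parts = top_down(total_base, subregions)  # forces coherence downward
```

Either scheme removes the incoherence (here, the 4.0 gap between 210.0 and the 206.0 sum); the literature the paper surveys is about doing this in a way that is also statistically optimal.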
Multidimensional exploratory techniques, such as Principal Component Analysis (PCA), have been used to analyze long-term changes in the flow regime and water quality of the lowland dam reservoir Turawa (south-west Poland) in the catchment of the Mała Panew river (a tributary of the Odra). The paper shows that during the period 1998–2016 the Turawa reservoir equalized the river's water flow. Moreover, various physicochemical water quality indicators were analyzed at three measurement points (at the tributary's mouth into the reservoir, in the reservoir itself, and at the outflow from the reservoir). The water quality assessment was performed by analyzing physicochemical indicators such as water temperature, TSS, pH, dissolved oxygen, BOD5, NH4+, NO3-, NO2-, N, PO43-, P, electrolytic conductivity, DS, SO42- and Cl-. Furthermore, the correlations between all these water quality indicators were analyzed statistically at each measurement point, at the statistical significance level of p ≤ 0.05. PCA was used to determine the structure among these water quality variables at each measurement point. As a result, a theoretical model was obtained that describes the regularities in the relationships between the indicators. PCA showed that biogenic indicators have the strongest influence on the water quality in the Mała Panew. Lastly, the differences between the averages of the water quality indicators of the inflowing and outflowing water were considered and their significance was analyzed. PCA unveiled the structure and complexity of the interconnections between river flow and water quality. The paper shows that such statistical methods can be valuable tools for developing suitable water management strategies for the catchment and the reservoir itself.
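The core PCA step, finding the direction of greatest joint variation, can be sketched for the two-indicator case, where the leading eigenvector of the 2x2 covariance matrix has a closed form. The paired measurements below are hypothetical stand-ins for two biogenic indicators, not the Turawa data:

```python
from math import atan2, cos, sin

def first_pc_2d(xs, ys):
    """Leading principal component (unit eigenvector belonging to the
    larger eigenvalue) of the 2x2 sample covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Jacobi rotation angle that diagonalises [[sxx, sxy], [sxy, syy]];
    # (cos t, sin t) is the eigenvector of the larger eigenvalue
    t = 0.5 * atan2(2 * sxy, sxx - syy)
    return cos(t), sin(t)

# Hypothetical paired measurements of two biogenic indicators (mg/dm3)
nh4 = [0.12, 0.30, 0.45, 0.61, 0.78]
po4 = [0.05, 0.14, 0.22, 0.31, 0.40]
pc1 = first_pc_2d(nh4, po4)
```

Indicators that load strongly on the same component, as these two do, are exactly the clusters PCA exposes when it reveals the structure among the water quality variables.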
The aim of the article is to assess the functioning of the NewConnect market over its first 10 years from the organizer's and participants' perspectives. This helps to diagnose the most important organizational advantages and problems of the Polish MTF, determine its further development prospects and propose potential changes to neutralize the negative factors. To illustrate the problem, a comprehensive analysis is made of aggregated statistical data from 2007–2017, which show the changes and trends on this market; it additionally includes data comparing the current state of the NewConnect market with other alternative markets organized by European stock exchanges.
The research conducted does not allow the NewConnect market to be viewed as an organizational success. The analysis identified a number of problems in the functioning of the Polish MTF, ranging from the inappropriate organization of the primary market, which resulted in the admission of too many issuers of dubious credibility, to the consequences appearing on the secondary share market. It gives no unambiguous grounds to expect positive prospects for the market's development in the future. In order to stop the unfavorable trends and improve issuer quality, a discussion should be initiated on the regulations governing issuers' admission, i.e. the minimum equity, the IPO, and the capitalization and issue price of the debuting company.
The paper examines the use of a Convolutional Bidirectional Recurrent Neural Network (CBRNN) for the problem of quality measurement in music content. The key contribution of this approach, compared to the existing research, is that the examined model is evaluated on detecting acoustic anomalies without requiring a reference (clean) signal. Since real music content may include various kinds of instrumental sounds, speech and singing voice, or different audio effects, it is more complex to analyze than clean speech or artificial signals, especially without a comparison to known reference content. The presented results should be treated as a proof of concept, since only some specific types of artefacts are covered in this paper (examples of quantization defects, missing sound, distortion of the gain characteristics, and extra noise). However, the described model can easily be expanded to detect other impairments or used as a pre-trained model in other transfer learning processes. To examine the model's efficiency, several experiments were performed and are reported in the paper. The raw audio samples were transformed into Mel-scaled spectrograms and fed as input to the model, first on their own and then along with additional features (Zero Crossing Rate, Spectral Contrast). According to the obtained results, there is a significant increase in overall accuracy (by 10.1%) when Spectral Contrast information is provided together with the Mel-scaled spectrograms. The paper also examines the influence of the recurrent layers on the effectiveness of the artefact classification task.
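Of the auxiliary features mentioned above, the Zero Crossing Rate has the simplest definition and can be sketched per frame directly; the two frames below are invented toy signals, not audio from the paper's dataset:

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs in a frame whose signs differ
    (zero is treated as non-negative)."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

# Invented frames: an alternating (noise-like) frame and a smooth ramp
noisy = [1, -1, 1, -1, 1, -1, 1, -1]
smooth = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

A noise-like frame crosses zero at every step (ZCR of 1.0) while a smooth ramp never does (ZCR of 0.0), which is why ZCR can complement a spectrogram as a cheap indicator of extra noise artefacts.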
Together with the dynamic development of modern computer systems, the possibilities of applying refined methods of nonparametric estimation to control engineering tasks have grown just as fast. This broad and complex theme is presented in this paper for the case of estimating the density of a random variable's distribution. Nonparametric methods allow a useful characterization of probability distributions without arbitrary assumptions about their membership of a fixed class. Following an illustrative description of the fundamental procedures used to this end, the results of research on the application of kernel estimators, dominant here, are generalized and synthetically presented for problems of Bayes parameter estimation with an asymmetrical polynomial loss function, as well as for fault detection in dynamical systems treated as objects of automatic control, covering the detection, diagnosis and prognosis of malfunctions. To this end, the basic data analysis and exploration tasks - recognition of outliers, clustering and classification - solved using a uniform mathematical apparatus based on the kernel estimator methodology, were also investigated.
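The kernel estimator at the heart of this methodology can be sketched in its basic univariate Gaussian form; the sample below stands in for hypothetical residuals of a monitored control loop and is not data from the paper:

```python
from math import exp, pi, sqrt

def kde(sample, x, h):
    """Gaussian kernel density estimate at x with bandwidth h:
    f(x) = (1 / (n * h)) * sum_i K((x - x_i) / h),
    where K is the standard normal density."""
    n = len(sample)
    return sum(
        exp(-0.5 * ((x - xi) / h) ** 2) / sqrt(2 * pi)
        for xi in sample
    ) / (n * h)

# Hypothetical residuals of a diagnosed control loop; the outlying value
# 5.0 is the kind of point an outlier-recognition step would flag as
# lying in a low-density region
sample = [1.8, 2.0, 2.1, 2.3, 5.0]
```

The estimate is high near the cluster of typical residuals and negligible far from all observations, which is exactly the property exploited for outlier recognition, clustering and fault detection.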