Search results

Number of results: 16

Abstract

The method proposed in the present paper is a special case of squared Msplit estimation. It concerns the direct estimation of the shift between the parameters of the functional models of geodetic observations. The shift in question may result, for example, from deformation of a geodetic network or from other non-random disturbances that influence the coordinates of the network points. The paper also presents an example in which such a shift is identified with the phase displacement of a wave. The shift is estimated on the basis of wave observations, without any knowledge of where the displacement took place. The shift estimates proposed in the paper are named Shift-Msplit estimators.
Go to article
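
As a rough illustration of the underlying idea, the sketch below implements plain squared Msplit estimation for a simple linear model: two competing parameter vectors are estimated by alternating weighted least squares, and their difference approximates the shift. This is a minimal sketch on invented data, not the paper's Shift-Msplit formulation, which estimates the shift directly.

```python
import numpy as np

# Squared Msplit estimation minimizes sum_i v1_i^2 * v2_i^2, where v1, v2
# are the residuals of observation i under two competing versions of the
# functional model y = A x.  A simple alternating scheme: when solving for
# x1, each observation is weighted by its squared residual under model 2
# (so observations explained by model 2 lose influence), and vice versa.
def msplit_squared(A, y, iters=50):
    x1 = np.linalg.lstsq(A, y, rcond=None)[0]   # common starting point
    x2 = x1 + 0.1                               # perturbed start: the split
    for _ in range(iters):                      # needs asymmetric starts
        w1 = (y - A @ x2)**2                    # weights for model 1
        x1 = np.linalg.solve((A.T * w1) @ A, (A.T * w1) @ y)
        w2 = (y - A @ x1)**2                    # weights for model 2
        x2 = np.linalg.solve((A.T * w2) @ A, (A.T * w2) @ y)
    return x1, x2

# Invented example: a level shift halfway through a noisy constant signal;
# the estimated shift is x2 - x1.
rng = np.random.default_rng(1)
t = np.arange(100.0)
y = np.where(t < 50, 1.0, 1.5) + 0.01 * rng.normal(size=t.size)
A = np.ones((t.size, 1))
x1, x2 = msplit_squared(A, y)
print("estimated shift:", float(x2[0] - x1[0]))
```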

Abstract

Because of the time value of money, investors are interested in obtaining economic benefits early and at the highest possible return. Some investment opportunities, e.g. mineral projects, however, require an investor to freeze capital for several years. In exchange, investors expect adequate remuneration for waiting, for uncertainty and for opportunities possibly lost; this compensation is reflected in the level of the interest rate they demand. The commonly used approach to project evaluation, discounted cash flow analysis, uses this interest rate to determine the present value of future cash flows. Mining investors should therefore scrutinize a project's cash flows with particular care, especially those arising in the first years of the project lifetime. With regard to the mining industry, this technique views a mineral deposit as a complete production project whose basic sources of uncertainty are the future levels of economic, financial and technical parameters. Some of them are riskier than others; this paper tries to split them apart and weigh their importance using the example of Polish hard coal projects at the feasibility-study stage. The work was performed with a sensitivity analysis of the internal rate of return. Calculations were made under the 'bare bones' assumption (all-equity basis, constant money, after tax, flat price and constant operating costs), which creates a good reference and starting point for comparing other investment alternatives and for future investigations. The first part introduces the discounting issue; the following sections present the data and methods used for spinning off risk components from the feasibility-stage discount rate; finally, some recommendations are presented.
Go to article
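
For readers unfamiliar with the mechanics, the sketch below shows what a one-at-a-time sensitivity analysis of the internal rate of return looks like in practice. All cash-flow numbers and parameter names are invented for illustration; they are not figures from the analysed Polish hard coal projects.

```python
import numpy as np
from scipy.optimize import brentq

# IRR is the discount rate r at which the net present value of the
# project's cash flows is zero; here it is found by root bracketing.
def irr(cf):
    npv = lambda r: sum(c / (1.0 + r)**t for t, c in enumerate(cf))
    return brentq(npv, -0.9, 10.0)

# Toy 'bare bones' cash-flow model: flat price, constant operating costs,
# all-equity, constant money (all values are hypothetical).
def project_cf(price, cost=40.0, output=1.0e6, capex=150e6, years=10):
    annual = (price - cost) * output
    return [-capex] + [annual] * years

base = irr(project_cf(price=70.0))
for dp in (-0.10, 0.0, 0.10):                  # vary the price +/- 10%
    r = irr(project_cf(price=70.0 * (1 + dp)))
    print(f"price {dp:+.0%}: IRR = {r:.2%} (vs base {base:.2%})")
```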

Abstract

This study attempted to examine the impacts of academic locus of control and metacognitive awareness on the academic adjustment of the student participants. Convenience sampling was applied to select a sample of 368 participants comprising 246 internals aged 17 to 28 years (M = 20.52, SD = 2.10) and 122 externals aged 17 to 28 years (M = 20.57, SD = 2.08). The findings indicated significant differences between the internals and externals in the various dimensions of metacognition, academic lifestyle and academic achievement, except for academic motivation and overall academic adjustment. There were significant gender differences in declarative knowledge, procedural knowledge, conditional knowledge, planning, information management, monitoring, evaluation and overall metacognitive awareness. Likewise, the internals and externals differed significantly in their mean scores for declarative knowledge, procedural knowledge, conditional knowledge, planning, information management, monitoring, debugging, evaluation and overall metacognitive awareness, as well as academic lifestyle and academic achievement. Significant positive correlations existed between the scores of metacognitive awareness and academic adjustment. It was evident that internal academic locus of control and metacognitive awareness were significant predictors of the students' academic adjustment. The findings are discussed in the light of recent findings in the field. They have significant implications for understanding the academic success and adjustment of students and are thus relevant for teachers, educationists, policy makers and parents. Future directions for researchers and limitations of the study are also discussed.
Go to article

Abstract

The process of railway track adjustment is a task which consists in bringing, in geometrical terms, the actual track axis to a position ensuring safe and efficient traffic of rail vehicles. The initial calculation stage of this process is to determine approximately the limits of sections of different geometry, i.e. straight lines, arcs and transition curves. This makes it possible to draw up a draft alignment design, whose position is then checked against the current state. In practice, this type of design rarely meets the requirements associated with the values of corrective alignments. It therefore becomes necessary to apply iterative correction of the solution in order to determine the final design, allowing minor corrections to be introduced while maintaining the assumed parameters of the route. The degree of complexity of this process depends on the quality of the preliminary draft alignment design. Delimitation of the sections for creating such a design is usually done using the curvature diagram (InRail v8.7 Reference Guide [1], Jamka et al. [2], Strach [3]), which is, however, sensitive to track misalignment and measurement errors. In their paper, Lenda and Strach [4] proposed a new method for creating the curvature diagram, based on an approximating spline function, which theoretically allows, inter alia, a reduced vulnerability to interfering factors. In this study, the method was used to determine a preliminary draft alignment design for a severely worn track, i.e. under conditions adversely affecting the accuracy of the readings. The results were compared to those obtained using the classical curvature diagram. They indicate that the method increases the readability of the curvature graph, which, for a considerably misaligned track, takes an irregular shape that is difficult to interpret. The method also favourably affects the accuracy of determining the initial parameters of the design, shortening the entire calculation process.
Go to article
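
The sketch below illustrates the general idea of a spline-based curvature diagram on synthetic data: noisy track-axis coordinates are smoothed with approximating splines and the curvature is evaluated from the spline derivatives, so straights plot as k ≈ 0, circular arcs as k ≈ const and transition curves as ramps. It is only a schematic stand-in for the method of Lenda and Strach [4], whose details are not reproduced here.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic track axis: a straight followed by a parabolic (curved)
# section, with simulated survey noise added to the offsets.
ch = np.linspace(0.0, 300.0, 301)                 # chainage [m]
x = ch.copy()
y = np.where(ch < 150, 0.0, (ch - 150)**2 / 2000.0)
y = y + 0.005 * np.random.default_rng(0).normal(size=ch.size)

# Approximating (smoothing) splines; the smoothing factor is set from the
# assumed noise level and controls sensitivity to misalignment.
smooth = ch.size * 0.005**2
sx = UnivariateSpline(ch, x, k=4, s=smooth)
sy = UnivariateSpline(ch, y, k=4, s=smooth)

# Curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) from derivatives.
dx, dy = sx.derivative(1)(ch), sy.derivative(1)(ch)
ddx, ddy = sx.derivative(2)(ch), sy.derivative(2)(ch)
kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5
print("max curvature [1/m]:", kappa.max())        # ~0.001 for this arc
```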

Abstract

A geodetic survey of an existing route requires one to determine the approximation curve by means of optimization using the total least squares method (TLSM). The objective function of the TLSM was found to be the square of the Mahalanobis distance in the adjustment field ν. In approximation tasks, the Mahalanobis distance is the distance from a survey point to the desired curve. In the case of linear regression, this distance is codirectional with a coordinate axis; in orthogonal regression, it is codirectional with the normal line to the curve. Accepting the Mahalanobis distance from the survey point as a quasi-observation allows us to conduct the adjustment using a numerically exact parametric procedure. Analysis of the potential application of splines under the NURBS (non-uniform rational B-spline) industrial standard to route approximation has identified two issues: the lack of a value of the localizing parameter for a given survey point, and the use of vector parameters that define the shape of the curve. The value of the localizing parameter was determined by projecting the survey point onto the curve. This projection, together with the aforementioned Mahalanobis distance, splits the position vector of the curve into two orthogonal constituents within the local coordinate system of the curve. A similar system corresponds to the points that form the control polygonal chain and allows us to find their position with the help of a scalar variable that determines the shape of the curve by moving a knot toward the normal line.
Go to article
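
A minimal sketch of the foot-point projection step described above: for a survey point p, the localizing parameter t* is the curve parameter minimizing the distance |C(t) - p|, and the residual then splits into tangential and normal components in the curve's local system. The B-spline and the point are arbitrary stand-ins, and the plain Euclidean distance is used in place of the Mahalanobis distance (the two coincide for an identity covariance matrix).

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize_scalar

# A cubic B-spline curve C(t) in the plane (clamped knot vector); the
# control points below are arbitrary.
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
ctrl = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0], [5, 1]], dtype=float)
curve = BSpline(knots, ctrl, k=3)

p = np.array([2.2, 1.0])                          # survey point

# Foot-point projection: minimize |C(t) - p|^2 over the parameter range.
res = minimize_scalar(lambda t: float(np.sum((curve(t) - p)**2)),
                      bounds=(knots[3], knots[-4]), method="bounded")
t_star = res.x                                    # localizing parameter
d = np.linalg.norm(curve(t_star) - p)             # orthogonal distance
print(f"t* = {t_star:.4f}, distance = {d:.4f}")
```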

Abstract

The work presents the results of studies on the dependence of the effectiveness of selected robust estimation methods on the internal reliability level of a geodetic network. The studies use computer-simulated observation systems, so it was possible to analyse many variants differing from each other in a planned way. Four methods of robust estimation were chosen for the studies, differing substantially in their approach to weight modification. For comparison, the effectiveness studies were also conducted for a method very popular in surveying practice, gross error detection based on LS estimation results, the so-called iterative data snooping. The studies show that there is a relation between the level of network internal reliability and the effectiveness of robust estimation methods. In most cases in which the observation contaminated by a gross error was characterized by a low index of internal reliability, robust estimation led to results far from expectations.
Go to article
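
For reference, the sketch below shows the basic loop of iterative data snooping on a linear Gauss-Markov model with equal weights: after each LS adjustment, the largest standardized residual w_i = |v_i| / (sigma0 * sqrt(r_ii)) is tested and, if rejected, the corresponding observation is removed and the adjustment repeated. The redundancy numbers r_ii are the local internal-reliability measures the abstract refers to; the test data are invented.

```python
import numpy as np

def data_snooping(A, y, sigma0, crit=3.29):
    """Iteratively remove the observation with the largest standardized
    residual until none exceeds the critical value (equal weights)."""
    idx = np.arange(len(y))
    while len(y) > A.shape[1] + 1:
        H = A @ np.linalg.inv(A.T @ A) @ A.T      # hat matrix
        v = y - H @ y                             # LS residuals
        r = 1.0 - np.diag(H)                      # redundancy numbers
        w = np.abs(v) / (sigma0 * np.sqrt(r))     # standardized residuals
        i = int(np.argmax(w))
        if w[i] <= crit:
            break
        A, y, idx = np.delete(A, i, 0), np.delete(y, i), np.delete(idx, i)
    return idx, np.linalg.lstsq(A, y, rcond=None)[0]

# Invented example: a small leveling-style system with one gross error.
rng = np.random.default_rng(7)
A = np.column_stack([np.ones(12), np.arange(12.0)])
y = A @ np.array([10.0, 0.5]) + 0.01 * rng.normal(size=12)
y[4] += 0.2                                       # simulated gross error
kept, x_hat = data_snooping(A, y, sigma0=0.01)
print("rejected observations:", sorted(set(range(12)) - set(kept)))
```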

Abstract

The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view, the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? To obtain a reliable answer, two kinds of tests were carried out for three lens models. First, a multi-variant camera calibration was conducted using software providing a full accuracy analysis. Second, an accuracy analysis using check points took place. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images simply acquired in the automatic distortion removal mode. Extensive conclusions regarding the practical application of each calibration approach are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.
Go to article
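
As background, the sketch below shows the kind of correction at stake, point undistortion under Brown's radial model: the model maps undistorted to distorted coordinates, x_d = x_u(1 + k1 r^2 + k2 r^4), so recovering x_u takes a few fixed-point iterations. The coefficients are illustrative, not values for any of the tested lenses.

```python
import numpy as np

k1, k2 = -0.12, 0.03                     # illustrative radial coefficients

def undistort(xd, iters=10):
    """Invert x_d = x_u * (1 + k1 r^2 + k2 r^4) by fixed-point iteration
    (normalized image coordinates, radial terms only)."""
    xu = xd.copy()
    for _ in range(iters):
        r2 = np.sum(xu**2, axis=1, keepdims=True)
        xu = xd / (1.0 + k1 * r2 + k2 * r2**2)
    return xu

pts = np.array([[0.31, -0.24], [0.05, 0.40]])     # distorted points
print(undistort(pts))
```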

Abstract

Generally, gross errors exist in observations, and they affect the accuracy of results. We review methods of detecting gross errors by robust estimation based on L1-estimation theory, and their validity in the adjustment of geodetic networks of various kinds. In order to detect the gross errors, we transform the weights of the stochastic model into equivalent ones using not the standardized residuals but the raw observation residuals, and apply this method to the adjustment computation of a triangulation network, a traverse network, a satellite geodetic network and so on. In the triangulation network, we transform into equivalent weights by residuals and detect gross errors in the parameter adjustment both without and with conditions. The result of the proposed method is compared with the one obtained using standardized residuals as equivalent weights. In the traverse network, we determine the weights by Helmert variance component estimation, then detect gross errors and compare in the same way as for the triangulation network. In the satellite geodetic network, in which observations are correlated, we detect gross errors by transforming into an equivalent correlation matrix using residuals and a variance inflation factor, and the result is again compared with the one from using standardized residuals. The detection results show that, in geodetic network adjustment of various forms, it is more convenient and effective to detect gross errors by residuals than by standardized residuals.
Go to article
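
A compact sketch of the equivalent-weight idea for L1-type robust adjustment: each iteration re-weights observation i by w_i = 1/max(|v_i|, eps), so observations with large residuals (gross error candidates) lose influence. This is a generic IRLS scheme for the L1 objective on invented data; the network-specific elements discussed above (Helmert variance components, equivalent correlation matrices, variance inflation factors) are omitted.

```python
import numpy as np

def l1_adjust(A, y, iters=30, eps=1e-8):
    """IRLS for min sum |v_i|: equivalent weights w_i = 1 / |v_i|."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # LS start
    for _ in range(iters):
        v = y - A @ x
        w = 1.0 / np.maximum(np.abs(v), eps)          # equivalent weights
        x = np.linalg.solve((A.T * w) @ A, (A.T * w) @ y)
    return x, y - A @ x

rng = np.random.default_rng(2)
A = np.column_stack([np.ones(20), np.arange(20.0)])
y = A @ np.array([5.0, 0.3]) + 0.01 * rng.normal(size=20)
y[7] += 1.0                                           # simulated gross error
x, v = l1_adjust(A, y)
print("largest residual at observation", int(np.argmax(np.abs(v))))
```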

Abstract

The paper considers a private ownership economy in which economic agents can realize their aims at given prices and Walras' Law is satisfied, but the agents' optimal plans of action do not lead to an equilibrium in the economy. This means that the market-clearing condition is not satisfied for the agents' optimal plans of action. In this context, the paper puts forward three specific adjustment processes resulting in equilibrium in a transformation of the initial economy. Specifically, it is shown by strict mathematical reasoning that if there is no equilibrium in a private ownership economy at given prices, then, under some natural economic assumptions, equilibrium at unchanged prices can be achieved after a mild evolution of the production sector.
Go to article

Abstract

The adjustment problem of the so-called combined (hybrid, integrated) network created from GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often used. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates and geodesic parameters, is presented. The analysis shows that, for the adjustment of a combined network on the ellipsoid, the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system is considered for the preferred functional model of the GNSS observations.
Go to article
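
The core of the preferred functional model is that an observed GNSS Cartesian vector between stations i and j is written directly as a function of their geodetic coordinates, dX = X(phi_j, lam_j, h_j) - X(phi_i, lam_i, h_i), via the standard geodetic-to-ECEF conversion. The sketch below shows this nonlinear observation equation on GRS80; the paper's linearized equations with explicit coefficients are not reproduced.

```python
import numpy as np

a, f = 6378137.0, 1.0 / 298.257222101    # GRS80 semi-major axis, flattening
e2 = f * (2.0 - f)                       # first eccentricity squared

def ecef(phi, lam, h):
    """Geodetic coordinates (radians, metres) to Cartesian ECEF."""
    N = a / np.sqrt(1.0 - e2 * np.sin(phi)**2)   # prime-vertical radius
    return np.array([(N + h) * np.cos(phi) * np.cos(lam),
                     (N + h) * np.cos(phi) * np.sin(lam),
                     (N * (1.0 - e2) + h) * np.sin(phi)])

def vector_residual(dX_obs, geo_i, geo_j):
    """Observation-equation residual for one GNSS vector: observed minus
    computed Cartesian vector between stations i and j."""
    return dX_obs - (ecef(*geo_j) - ecef(*geo_i))
```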

Abstract

A method of improving the 3D adjustment of total station observations by using a precise geoid model is presented. The novel concept is to use the plumb line direction obtained from the precise geoid model in a combined GPS/total station data adjustment. It is concluded that the results of the adjustment can be improved if data on the plumb line direction are used. The theoretical background given in the paper was verified with an experiment based on total station and GPS measurements referred to the GRS80 geocentric reference system, using the GUGIK2001 geoid model for Poland.
Go to article
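
One plausible way to obtain the plumb line direction from a geoid model, sketched below on a synthetic grid, is to estimate the vertical deflection components from the horizontal gradients of the geoid height, xi ≈ -dN/ds (north) and eta ≈ -dN/ds (east). This is a generic illustration, not the procedure actually used with the GUGIK2001 model.

```python
import numpy as np

spacing = 1000.0                                  # grid spacing [m]
# Synthetic grid of geoid heights N [m]; axis 0 is assumed to run north.
N = 30.0 + 0.02 * np.random.default_rng(3).normal(size=(5, 5))

dN_north, dN_east = np.gradient(N, spacing)       # central differences
xi = -dN_north * 206264.8                         # deflection [arcsec]
eta = -dN_east * 206264.8
print(f"xi = {xi[2, 2]:.2f} arcsec, eta = {eta[2, 2]:.2f} arcsec")
```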

Abstract

The paper presents an empirical comparison of the performance of three well-known M-estimators (i.e. the Huber, Tukey and Hampel M-estimators) and also of some new ones. The new M-estimators were motivated by weighting functions applied in orthogonal polynomial theory and kernel density estimation, as well as one derived from the Wigner semicircle probability distribution. The M-estimators were used to detect outlying observations in contaminated datasets. Calculations were performed using iteratively reweighted least squares (IRLS). Since the residual variance (used in constructing the covariance matrices) is not a robust measure of scale, the tests also employed robust measures, i.e. the interquartile range and the normalized median absolute deviation. The methods were tested on a simple leveling network in a large number of variants, showing the good and bad sides of M-estimation. The new M-estimators were equipped with theoretical tuning constants to obtain 95% efficiency with respect to the standard normal distribution. The need for data-dependent tuning constants, rather than those established theoretically, is also pointed out.
Go to article
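
For concreteness, here is a minimal IRLS loop with the Huber weight function, one of the classical estimators compared above: w(u) = 1 for |u| <= c and c/|u| otherwise, with the usual tuning constant c = 1.345 giving 95% efficiency under normality, and the normalized median absolute deviation as the robust scale. The leveling-network setup of the paper is replaced by a generic linear model with invented data.

```python
import numpy as np

def huber_irls(A, y, c=1.345, iters=30):
    """IRLS with Huber weights and NMAD as the robust scale estimate."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # LS start
    for _ in range(iters):
        v = y - A @ x
        scale = 1.4826 * np.median(np.abs(v - np.median(v)))  # NMAD
        u = np.abs(v) / max(scale, 1e-12)             # scaled residuals
        w = np.where(u <= c, 1.0, c / u)              # Huber weights
        x = np.linalg.solve((A.T * w) @ A, (A.T * w) @ y)
    return x, y - A @ x

rng = np.random.default_rng(9)
A = np.column_stack([np.ones(30), np.arange(30.0)])
y = A @ np.array([1.0, 0.05]) + 0.002 * rng.normal(size=30)
y[11] += 0.05                                         # outlier
x, v = huber_irls(A, y)
print("flagged outlier:", int(np.argmax(np.abs(v))))
```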

Abstract

The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by the calibration of the video camera. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of video frames using points with known coordinates from Terrestrial Laser Scanning (TLS), and generation of a TIN model using automatic matching methods. The objects were measured with an impulse laser scanner, a Leica ScanStation 2. The created 3D models of architectural objects were compared with 3D models of the same objects for which a self-calibration bundle adjustment was performed; for this purpose, PhotoModeler software was used. To assess the accuracy of the developed 3D models, points with known coordinates from Terrestrial Laser Scanning and a shortest-distance method were used. The accuracy analysis showed that the 3D models generated from video images differ by about 0.06–0.13 m from the TLS data.
Go to article
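
The shortest-distance accuracy check can be pictured as below: for each vertex of the video-derived model, the nearest TLS point is found and the distance statistics are reported. Both point sets here are random stand-ins, not the paper's data.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
tls_cloud = rng.uniform(0.0, 10.0, size=(5000, 3))   # TLS reference points
model_pts = rng.uniform(0.0, 10.0, size=(500, 3))    # video-model vertices

d, _ = cKDTree(tls_cloud).query(model_pts)           # nearest-neighbour dist
print(f"mean = {d.mean():.3f} m, RMS = {np.sqrt((d**2).mean()):.3f} m, "
      f"max = {d.max():.3f} m")
```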

Abstract

The purpose of the article is to verify a hypothesis about the asymmetric pass-through of crude oil prices to the selling prices of refinery products (unleaded 95 petrol and diesel oil). The distribution chain is considered at three levels: the European wholesale market, the domestic wholesale market and the domestic retail market. An error correction model with threshold cointegration proved to be an appropriate tool for the empirical analysis based on Polish data. Price transmission asymmetry in the fuel market was found to be significant, and its scale varies depending on the level of distribution. The only exception is the wholesale price transmission to the domestic refinery price. All conclusions are supported by cumulative response functions. The analysis sheds new light on the price-setting processes in the imperfectly competitive fuel market of a medium-sized, non-oil-producing European country in transition.
Go to article
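
The asymmetric error-correction idea can be sketched as follows: the change in the downstream price responds to positive and negative deviations from the long-run price relation with separate adjustment speeds, and asymmetry means the two speeds differ. The sketch fixes the threshold at zero and uses simulated series; the paper's actual model (threshold estimation, cointegration testing, lag structure) is richer.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 400
crude = np.cumsum(rng.normal(size=T))                 # upstream price (random walk)
retail = 2.0 + crude + rng.normal(scale=0.5, size=T)  # cointegrated retail price

slope, intercept = np.polyfit(crude, retail, 1)   # long-run relation
ect = retail - (slope * crude + intercept)        # error-correction term

d_retail = np.diff(retail)                        # price changes
ect_lag = ect[:-1]                                # lagged deviations
X = np.column_stack([np.maximum(ect_lag, 0.0),    # positive deviations
                     np.minimum(ect_lag, 0.0),    # negative deviations
                     np.ones(T - 1)])
rho_pos, rho_neg, const = np.linalg.lstsq(X, d_retail, rcond=None)[0]
print(f"rho+ = {rho_pos:.3f}, rho- = {rho_neg:.3f}")  # asymmetric if they differ
```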
