The increasing threat of terrorist attacks in Europe and social demands for governmental action to facilitate information exchange between the national authorities responsible for public security have led to a remarkable shift towards the collection of passenger data. Initially, the idea concerned mainly aviation passengers’ data and was limited to international flights. It was soon extended, however, to include Passenger Name Records (PNR) from domestic transport, and recently there have been moves to expand the PNR collection scheme to other means of transport, including maritime routes. The paper studies the most developed system, created in Belgium, and assesses its influence on possible pan-European solutions. In presenting the main problems connected with passenger profiling and data sharing between institutions, it discusses the lack of a precise privacy impact assessment and the need for necessity and proportionality studies to be carried out both at the level of the Member States and in the EU debate on the implementation of the so-called PNR Directive and on the new requirements for the digital registration of passengers and crew sailing on board European passenger ships introduced by the 2017 amendments to Directive 98/41/EC.
This article examines the process of opening datasets accumulated by public institutions and its impact on the rise of new types of journalism, in particular data journalism.
As of spring 2017, the HAŁDY Database has been available on the Polish Geological Institute – NRI website. The geodatabase contains information and data on waste mineral raw materials accumulated on old heaps, industrial waste stockpiles and in post-mining settling ponds in the Polish part of the Sudety Mountains. The article presents the types of data and information contained in the geodatabase and the methodology of their collection. As a result of four years of research, field reconnaissance, archival studies and basic geological work, 445 sites of former mining and mineral processing were inventoried: 403 mine heaps, 16 industrial settling ponds, 23 stockpiles and 3 external dumps. These are mainly sites remaining after the mining of coal and metal ores, including uranium. The greatest opportunities for the economic use of the waste are associated with the coal sludge accumulated in the settling ponds of the liquidated Lower Silesian Coal Basin. The material from stone heaps left by polymetallic, iron and fluorite ore mining is also readily usable. The question of the economic use of post-flotation copper ore waste, or of the recovery of metals (including gold) from the dumps of former arsenic mining, remains open; the limitations here are the efficiency of metal recovery technologies and environmental restrictions. Some of the sites are located in protected areas, which excludes the possibility of waste management. Some stockpiles and heaps should be carefully reclaimed and placed under environmental monitoring because of their harmful impact on environmental components.
This article describes the disputes over the methods of collecting data on smog in Poland. The theoretical framework of the analysis is science and technology studies, in particular research on the role of infrastructures, standards and data. The descriptive part presents the role of measurement infrastructure in shaping the relations between the various actors involved in measuring air quality, followed by an analysis of two dimensions of the conflict. The first concerns methodological issues and relates to the reliability of measurement. The second concerns the ontology of smog, that is, the different ways in which experts and citizens frame the problem of air pollution, which translates into social practices.
The problem of the poor quality of traffic accident data assembled in national databases has been addressed in the European project InDeV. Vulnerable road users (pedestrians, cyclists, motorcyclists and moped riders) are especially affected by the underreporting of accidents and the misreporting of injury severity. Analyses of data from the European CARE database show differences between countries in accident number trends as well as in fatality and injury rates that are difficult to explain. A survey of InDeV project partners from seven EU countries helped to identify differences between their countries in accident and injury definitions as well as in reporting and data-checking procedures. Measures to improve the quality of accident data are proposed, such as including pedestrian falls in accident statistics, precisely defining the minimum reportable injury, and combining police accident records with hospital data.
Everyday observation reveals the growing importance of data science methods, which are becoming an increasingly important part of the mainstream knowledge generation process. Digital technologies and their potential for data collection and processing have given rise to the fourth paradigm of science, based on Big Data. Key to these transformations are datafication and data mining, which allow knowledge to be discovered in noisy data. The main purpose of the considerations presented here is to describe the phenomena that make up these processes and to indicate their possible epistemological consequences. It is assumed that growing datafication tendencies may result in the formation of a data-centric perception of all aspects of reality, making data and the methods of their processing a kind of higher instance shaping human thinking about the world. The research is theoretical in nature: issues such as the process of datafication and data science are analyzed with a focus on the areas that raise doubts about the validity of this form of cognition.
Terminology is significant for professional communication and ipso facto for translation quality assurance (QA). To deliver a high-quality translation, it is crucial that all new terms occurring in professional discourse be collected, stored and managed properly by means of terminology databases (TDBs). In this paper I attempt to define ‘quality’ in relation to TDBs and to determine the methodology and criteria that need to be considered when evaluating a TDB in the context of its reliability.
In the face of the information technology revolution, social science researchers face a considerable challenge. With the growing popularity of the Internet, vast amounts of data have appeared containing the opinions, views and interests of its users. Although the analysis of these data poses serious methodological problems, their use is supported by the fascinating material that arises without any intervention by researchers. A large part of this material consists of data from Google, the world’s most popular search engine. Every minute, its users from all over the world submit more than 3 million queries, which are then classified and made available through continuously updated tools. This article discusses attempts to adapt these data to the needs of the social sciences, as well as existing research on the subject. It also covers the practical aspects of working with Google’s tools: Google Trends and Google Keyword Planner. The article is intended primarily for social science researchers interested in online sources of Big Data and the use of such data in scholarly work.
Decision-making processes, including those related to ill-structured problems, are of considerable significance in the area of construction projects. Computer-aided inference under such conditions requires the use of specific, non-algorithmic methods and tools, of which the best recognized and most successfully used in practice are expert systems. The knowledge such systems need to perform inference is most frequently acquired directly from experts (through a dialogue between a domain expert and a knowledge engineer) and from various source documents. Little is known, however, about the possibility of automating knowledge acquisition in this area, and as a result it is scarcely ever used in practice. It should be noted that in numerous areas of management, more and more attention is being paid to acquiring knowledge from available data, and various methods and tools for this purpose are known and successfully employed in decision support practice. The paper attempts to select methods for knowledge discovery in data and presents possible ways of representing the acquired knowledge, as well as sample tools (including programming ones) that allow this knowledge to be used in the area under consideration.