Search results

Number of results: 8

Abstract

The application of the 5S methodology to warehouse management represents an important step for all manufacturing companies, especially for managing products that consist of a large number of components. Moreover, from a lean production point of view, inventory management requires a reduction in inventory waste in terms of costs, quantities and time spent on non-value-added tasks. Moving towards an Industry 4.0 environment, a deeper understanding of the data provided by production processes and supply chain operations is needed: the application of Data Mining techniques can provide valuable support for this objective. In this context, a procedure is proposed that aims at reducing the number and duration of picking processes in an Automated Storage and Retrieval System (AS/RS). Association Rule Mining is applied to reduce time wasted during the storage and retrieval of components and finished products, pursuing the space and material management philosophy expressed by the 5S methodology. The first step of the proposed procedure requires the evaluation of the picking frequency of each component. Historical data are analyzed to extract association rules describing the sets of components that frequently belong to the same order. Then, the allocation of items in the AS/RS is performed considering (a) the association degree, i.e., the confidence of the rule, between the components under analysis and (b) spatial availability. The main contribution of this work is the development of a versatile procedure for eliminating time waste in picking processes from an AS/RS. A real-life example of a manufacturing company is also presented to illustrate the proposed procedure, as well as further research developments worthy of investigation.
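
As a rough illustration of the procedure's three steps, the following Python sketch computes picking frequencies, pairwise rule confidences and a greedy slot assignment; the orders, slot names and rack distances are invented, and the procedure described in the paper is considerably richer.

```python
from collections import Counter
from itertools import permutations

# Hypothetical picking history: each order is the set of components it requires.
orders = [{"A", "B", "C"}, {"A", "B"}, {"B", "C"}, {"A", "C"}, {"A", "B", "C"}]

# Step 1: picking frequency of each component.
freq = Counter(c for order in orders for c in order)

# Step 2: confidence of the rule {x} -> {y} = support({x, y}) / support({x}).
conf = {
    (x, y): sum(1 for o in orders if {x, y} <= o) / freq[x]
    for x, y in permutations(freq, 2)
}

# Step 3: greedy allocation. The most frequently picked component takes the
# slot nearest the I/O point (criterion b, spatial availability); every other
# component takes the free slot closest to the already placed component it is
# most strongly associated with (criterion a, rule confidence).
slot_dist = {"S1": 1.0, "S2": 2.0, "S3": 3.0}   # assumed rack geometry
free = set(slot_dist)
allocation = {}
for comp, _ in freq.most_common():
    if not allocation:
        target = min(free, key=slot_dist.get)
    else:
        partner = max(allocation, key=lambda p: conf.get((comp, p), 0.0))
        anchor = slot_dist[allocation[partner]]
        target = min(free, key=lambda s: abs(slot_dist[s] - anchor))
    allocation[comp] = target
    free.remove(target)

print(allocation)   # e.g. {'A': 'S1', 'B': 'S2', 'C': 'S3'}
```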


Authors and Affiliations

Maurizio Bevilacqua
Filippo Emanuele Ciarapica
Sara Antomarioni

Abstract

Power big data contains a large amount of information related to equipment faults, and its analysis and processing can support fault diagnosis. This study analyzed the application of association rules to power big data processing. Firstly, association rules and the Apriori algorithm were introduced. Then, to address the shortcomings of the Apriori algorithm, an IM-Apriori algorithm was designed, and a simulation experiment was carried out. The results showed that the IM-Apriori algorithm had a significant advantage over the Apriori algorithm in running time: when the number of transactions was 100 000, the IM-Apriori algorithm ran 38.42% faster than the Apriori algorithm. The IM-Apriori algorithm was also little affected by the value of the minimum support threshold (support_min). Compared with the Extreme Learning Machine (ELM), the IM-Apriori algorithm had better accuracy. The experimental results show the effectiveness of the IM-Apriori algorithm in fault diagnosis, and it can be further promoted and applied to power grid equipment.
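
For readers unfamiliar with the baseline, the sketch below is a minimal Apriori frequent-itemset pass on made-up fault transactions (without the full candidate-pruning step of the complete algorithm); it illustrates what IM-Apriori speeds up, not the IM-Apriori algorithm itself.

```python
from collections import defaultdict
from itertools import combinations

# Invented fault-event transactions for illustration.
transactions = [
    {"overcurrent", "breaker_trip"},
    {"overcurrent", "overheat"},
    {"overcurrent", "breaker_trip", "overheat"},
    {"breaker_trip"},
]
min_support = 0.5  # the support_min threshold from the abstract

def frequent_itemsets(transactions, min_support):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]        # candidate 1-itemsets
    frequent = {}
    while level:
        # count each candidate's occurrences over the whole database
        counts = defaultdict(int)
        for t in transactions:
            for cand in level:
                if cand <= t:
                    counts[cand] += 1
        survivors = {c: counts[c] / n for c in counts
                     if counts[c] / n >= min_support}
        frequent.update(survivors)
        # join surviving k-itemsets into (k+1)-itemset candidates
        level = list({a | b for a, b in combinations(survivors, 2)
                      if len(a | b) == len(a) + 1})
    return frequent

print(frequent_itemsets(transactions, min_support))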


Authors and Affiliations

Jianguo Qian
Bingquan Zhu
Ying Li
Zhengchai Shi

Abstract

The effective utilisation of coal mine monitoring data is at the core of realising an intelligent mine. The complex and challenging underground environment, coupled with unstable sensors, can produce “dirty” data in the monitoring information. A reliable data cleaning method is therefore needed to extract high-quality information from large monitoring data sets while minimising data redundancy. To this end, a cleaning method for sensor monitoring data based on stacked denoising autoencoders (SDAE) is proposed. Sample data of the ventilation system under normal conditions are used to train the SDAE, and the upper limit of the reconstruction error is obtained by kernel density estimation (KDE). The Apriori algorithm is used to study the correlations between monitoring data time series. By comparing the reconstruction errors and error durations of test data with the upper limit of the reconstruction error and the tolerance time, in cooperation with the correlation rules, the “dirty” data are identified. The method is tested in the Dongshan coal mine. The experimental results show that the proposed method can not only identify dirty data but also retain fault information. The research provides an effective data basis for fault diagnosis and disaster warning.
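
The thresholding step can be illustrated in isolation: assuming reconstruction errors from an already-trained SDAE are available, a KDE over the “normal” errors yields an upper limit against which test errors are compared. The gamma-distributed errors and the 99% cut-off below are stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for reconstruction errors of normal ventilation data
# produced by an already-trained denoising autoencoder.
rng = np.random.default_rng(0)
train_errors = rng.gamma(shape=2.0, scale=0.05, size=1000)

# KDE over the normal errors; integrate numerically to get an empirical CDF
# and read off a high quantile as the upper reconstruction-error limit.
kde = gaussian_kde(train_errors)
grid = np.linspace(0.0, train_errors.max() * 2, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
upper_limit = grid[np.searchsorted(cdf, 0.99)]   # assumed 99% bound

# A test point whose error stays above the limit longer than a tolerance
# window would be flagged as "dirty" (here: a single-sample check).
test_error = 0.6
print(f"upper limit = {upper_limit:.3f}, dirty = {test_error > upper_limit}")
```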

Authors and Affiliations

Dan Zhao 1
Zhiyuan Shen 1
Zihao Song 1
Lina Xie 2

  1. Liaoning Technical University, College of Safety Science & Engineering, Fuxin 123000, China
  2. Shenyang Institute of Technology, Shenyang 110000, China

Abstract

Since the on-time delivery of good-quality software is a very important part of the software development process, organizing this process accurately is an essential task. To this end, a new method of searching for association rules is proposed. It is based on classifying all tasks into three groups according to their difficulty and then searching for association rules among them, which helps to determine the time a specific developer needs to perform a specific task.
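
A toy sketch of the idea, with invented task records and difficulty thresholds: tasks are bucketed into three difficulty groups, and past (developer, group) pairs are used to estimate durations. The paper's actual rule-mining method is more elaborate than this averaging step.

```python
from collections import defaultdict

# Invented history: (developer, difficulty score 1-10, hours taken).
history = [
    ("alice", 8, 5.0), ("alice", 9, 6.0), ("alice", 3, 1.5),
    ("bob",   8, 9.0), ("bob",   2, 1.0), ("bob",   5, 3.0),
]

def group(score):
    # three difficulty groups; the cut-offs are assumptions
    return "easy" if score <= 3 else "medium" if score <= 6 else "hard"

# Collect past durations per (developer, difficulty group) pair.
durations = defaultdict(list)
for dev, score, hours in history:
    durations[(dev, group(score))].append(hours)

# Estimate the time a given developer needs for a given group of task
# as the mean of past observations for that pair.
for key, hs in sorted(durations.items()):
    print(f"{key} -> approx. {sum(hs) / len(hs):.1f} h (n={len(hs)})")
```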


Authors and Affiliations

Tamara O. Savchuk
Natalia V. Pryimak
Nina V. Slyusarenko
Andrzej Smolarz
Saule Smailova
Yedilkhan Amirgaliyev

Abstract

Increasing development in information and communication technology leads to the generation of large amounts of data from various sources. These data, collected from multiple sources, grow exponentially and may not be structurally uniform; in general, they are heterogeneous and distributed across multiple databases. Because of the large volume, high velocity and variety of the data, mining knowledge in this environment becomes a big data challenge, and Distributed Association Rule Mining (DARM) in these circumstances becomes a tedious task for an effective global Decision Support System (DSS). DARM algorithms generate a large number of association rules and frequent itemsets in the big data environment, and synthesizing high-frequency rules from big databases becomes even more challenging. Many algorithms for synthesizing association rules have been proposed in multi-database mining environments, but they face enormous challenges in terms of high availability, scalability, efficiency, the high cost of storing and processing large intermediate results, and multiple redundant rules. In this paper, we first propose a model to collect data from multiple sources into a big data storage framework based on HDFS. Secondly, a weighted multi-partitioned method for synthesizing high-frequency rules using the MapReduce programming paradigm is proposed. Experiments have been conducted in a parallel and distributed environment using commodity hardware. We demonstrate the efficiency, scalability, high availability and cost-effectiveness of the proposed method.
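
The synthesis step can be sketched without the Hadoop machinery: each site reports local rule metrics, and global values are estimated by weighting each site by its transaction count. The sites, rules and numbers are invented, and weighting confidence directly by transaction count is a simplification of the paper's weighted multi-partitioned method.

```python
# Each site reports (n_transactions, {rule: (support, confidence)}); in the
# paper this aggregation would run as MapReduce jobs over HDFS partitions.
local_results = {
    "site_1": (10_000, {("bread", "butter"): (0.30, 0.70)}),
    "site_2": (40_000, {("bread", "butter"): (0.20, 0.60)}),
    "site_3": (50_000, {("bread", "butter"): (0.10, 0.50)}),
}

total = sum(n for n, _ in local_results.values())
rules = {r for _, rs in local_results.values() for r in rs}

# Global estimate of each rule's metrics: transaction-count-weighted average
# of the local values (sites that never found the rule contribute zeros).
for rule in sorted(rules):
    w_sup = sum(n * rs.get(rule, (0.0, 0.0))[0]
                for n, rs in local_results.values()) / total
    w_conf = sum(n * rs.get(rule, (0.0, 0.0))[1]
                 for n, rs in local_results.values()) / total
    print(rule, f"global support={w_sup:.3f}, confidence={w_conf:.3f}")
```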

Authors and Affiliations

Sudhanshu Shekhar Bisoyi 1
Pragnyaban Mishra 2
Saroja Nanda Mishra 3

  1. Department of Computer Science and Information Technology, Siksha ’O’ Anusandhan Deemed to be University (SOA), Institute of Technical Education and Research (ITER), Bhubaneswar, Odisha, India
  2. Dept. of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, AP, India
  3. Dept. of CSE&A, IGIT, Sarang, Dhenkanal, Odisha, India

Abstract

Production problems have a significant impact on the on-time delivery of orders, resulting in deviations from planned scenarios. It is therefore crucial to predict interruptions during scheduling and to find optimal production sequencing solutions. This paper introduces a self-learning framework that integrates association rules and optimisation techniques to develop a scheduling algorithm capable of learning from past production experience and anticipating future problems. Association rules identify factors that hinder the production process, while optimisation techniques use mathematical models to optimise the sequence of tasks and minimise execution time. In addition, association rules establish correlations between production parameters and success rates, allowing corrective factors for the production quantity to be calculated from confidence values and success rates. The proposed solution demonstrates robustness and flexibility, providing efficient solutions for Flow-Shop and Job-Shop scheduling problems with reduced calculation times. The article includes two examples, Flow-Shop and Job-Shop, where the framework is applied.
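
One plausible reading of the corrective-factor idea, with invented rules and a deliberately simple formula (not necessarily the paper's): if a mined rule links a parameter combination to a success rate, the planned quantity is inflated so that the expected good output still meets demand.

```python
# Invented mined rules: parameter combination -> success-rate confidence.
rules = {
    ("machine_M2", "alloy_A"): 0.92,
    ("machine_M2", "alloy_B"): 0.78,
}

def corrected_quantity(demand, context):
    """Inflate the planned quantity to offset expected scrap."""
    success_rate = rules.get(context, 1.0)   # unknown context -> no correction
    return demand / success_rate

# Planning 100 good units on M2 with alloy B requires launching ~128 units.
print(round(corrected_quantity(100, ("machine_M2", "alloy_B"))))
```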

Authors and Affiliations

Mateo Del Gallo
Filippo Emanuele Ciarapica
Giovanni Mazzuto
Maurizio Bevilacqua

Abstract

Traditional clustering algorithms, which use the distance between a pair of data points to calculate their similarity, are not suitable for clustering boolean and categorical attributes. In this paper, a modified clustering algorithm for categorical attributes is used to segment customers. Each segment is then mined using a frequent pattern mining algorithm in order to infer rules that help in predicting a customer’s next purchase. Generally, purchases of items are related to each other: for example, grocery items are frequently purchased together, while electronic items are purchased together. Therefore, if knowledge of purchase dependencies is available, those items can be grouped together and attractive offers can be made to customers, which, in turn, increases the overall profit of the organization. This work focuses on the grouping of such items. Various experiments on a real database were conducted to evaluate the performance of the proposed approach.
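
A toy sketch of the two stages, with invented customer data: segments are formed by simple categorical matching (in the spirit of k-modes, not the paper's exact algorithm), and each segment's baskets are then scanned for item pairs that occur in every basket of the segment.

```python
from collections import Counter
from itertools import combinations

# Invented customers: (categorical profile, purchase basket).
customers = [
    ({"city": "Delhi", "card": "gold"},   {"milk", "bread", "butter"}),
    ({"city": "Delhi", "card": "gold"},   {"milk", "bread"}),
    ({"city": "Pune",  "card": "silver"}, {"tv", "hdmi_cable"}),
    ({"city": "Pune",  "card": "silver"}, {"tv", "hdmi_cable", "speakers"}),
]

def matches(a, b):
    # simple matching similarity: number of attributes with equal values
    return sum(a[k] == b[k] for k in a)

# One-pass segmentation: join the first segment whose prototype matches on
# every attribute, otherwise open a new segment.
segments = []   # list of (prototype profile, [baskets])
for profile, basket in customers:
    for proto, baskets in segments:
        if matches(proto, profile) == len(profile):
            baskets.append(basket)
            break
    else:
        segments.append((profile, [basket]))

# Per segment, keep the item pairs present in every basket (a crude stand-in
# for frequent pattern mining with 100% support).
for proto, baskets in segments:
    pairs = Counter(p for b in baskets for p in combinations(sorted(b), 2))
    top = [p for p, n in pairs.items() if n == len(baskets)]
    print(proto, "->", top)
```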

Authors and Affiliations

Juhi Singh 1
Mandeep Mittal 2

  1. Department of Computer Science, Amity School of Engineering and Technology, Delhi, India
  2. Department of Mathematics, Amity Institute of Applied Sciences, Amity University Uttar Pradesh, Noida, India

Abstract

Electrocution is one of the main causes of workplace deaths in the construction industry. This paper presents a framework for identifying electrocution risk factors and exploring the correlations between them, with the aim of assisting accident prevention research. Specifically, the Haddon Matrix is used to extract risk factors from 193 investigation reports of electrical shock accidents from 2012 to 2019, and the Apriori algorithm is applied to examine the potential relationships between these factors. Based on association rules using three criteria, support (S), confidence (C) and lift (L), betweenness centrality is then introduced to optimize the association rules and find the most important ones through comparison. The results show that after optimization, some critical rules rise significantly in rank, such as Workplace: indoor → No CPR provided. These ranking changes clarify the focus of safety management, and finally, based on a comprehensive analysis of the association rules, targeted accident prevention measures are suggested.
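
The re-ranking step can be sketched with networkx, using invented rules and metric values: risk factors become nodes of a rule graph, betweenness centrality is computed, and rules are re-scored by combining lift with endpoint centrality (one plausible weighting, not necessarily the paper's formula).

```python
import networkx as nx

# Invented mined rules: antecedent factor -> consequent factor, with S, C, L.
rules = [
    ("workplace_indoor", "no_cpr_provided", {"S": 0.12, "C": 0.81, "L": 1.9}),
    ("no_ppe",           "direct_contact",  {"S": 0.20, "C": 0.65, "L": 1.3}),
    ("direct_contact",   "no_cpr_provided", {"S": 0.15, "C": 0.70, "L": 1.5}),
]

# Build the directed rule graph over risk factors.
G = nx.DiGraph()
for a, b, m in rules:
    G.add_edge(a, b, **m)

# Betweenness centrality of each factor node.
bc = nx.betweenness_centrality(G)

# Re-score each rule by lift times the summed centrality of its endpoints;
# a rule between "bridging" factors rises in rank even with a modest lift.
ranked = sorted(rules, key=lambda r: r[2]["L"] * (bc[r[0]] + bc[r[1]]),
                reverse=True)
for a, b, m in ranked:
    print(f"{a} -> {b}: lift={m['L']}, centrality={bc[a] + bc[b]:.2f}")
```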

Authors and Affiliations

Jue Li 1
Feifei Chen 1
Shijie Li 1

  1. School of Traffic and Transportation Engineering, Changsha University of Science & Technology, Changsha, Hunan, P.R. China
