
Textile Research Journal Article

Use of Artificial Neural Networks for Determining the Leveling Action Point at the Auto-leveling Draw Frame

Assad Farooq and Chokri Cherif
Institute of Textile and Clothing Technology, Technische Universität Dresden, Dresden, Germany

Abstract

Artificial neural networks, with their ability to learn from data, have been successfully applied in the textile industry. The leveling action point is one of the important auto-leveling parameters of the draw frame and strongly influences the quality of the manufactured yarn. This paper reports a method of predicting the leveling action point using artificial neural networks. Various variables affecting the leveling action point were selected as inputs for training the networks, with the aim of optimizing the auto-leveling by limiting the leveling action point search range. The Levenberg-Marquardt algorithm is incorporated into back-propagation to accelerate the training, and Bayesian regularization is applied to improve the generalization of the networks. The results obtained are quite promising.

Key words: artificial neural networks; auto-leveling; draw frame; leveling action point

The evenness of the yarn plays an increasingly significant role in the textile industry, and sliver evenness is one of the critical factors when producing quality yarn. Sliver evenness is also the major criterion for assessing the operation of the draw frame. In principle, there are two approaches to reducing sliver irregularities. One is to study the drafting mechanism and recognize the causes of irregularities, so that means may be found to reduce them. The other, more valuable, approach is to use auto-levelers [1], since in most cases the doubling is inadequate to correct the variations in sliver. The control of sliver irregularities can lower the dependence on card sliver uniformity, ambient conditions, and frame parameters.

At the auto-leveler draw frame (RSB-D40), the thickness variations in the fed sliver are continually monitored by a mechanical device (a tongue-and-groove roll) and subsequently converted into electrical signals. The measured values are transmitted to an electronic memory with a variable, time-delayed response. The time delay allows the draft between the mid-roll and the delivery roll of the draw frame to adjust exactly at the moment when the defective sliver piece, which had been measured by a pair of scanning rollers, finds itself at the point of draft. At this point, a servo motor operates depending upon the amount of variation detected in the sliver piece. The distance that separates the scanning roller pair and the point of draft is called the zero point of regulation, or the leveling action point (LAP), as shown in Figure 1.
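The time-delay mechanism just described can be sketched numerically. This is an illustrative calculation only: the distance and speed values, and the assumption that the delay is simply the LAP distance divided by the sliver transport speed, are ours and not taken from the paper.

```python
# Illustrative sketch of the time-delayed leveling response (values are
# hypothetical, not from the paper): the measured thickness deviation must
# be held back until the scanned sliver piece reaches the point of draft.

def leveling_delay_seconds(lap_distance_mm: float, feeding_speed_m_min: float) -> float:
    """Time for the sliver to travel from the scanning rollers to the
    point of draft, i.e. how long the measured value must be delayed."""
    feeding_speed_mm_s = feeding_speed_m_min * 1000.0 / 60.0  # m/min -> mm/s
    return lap_distance_mm / feeding_speed_mm_s

# Example: a hypothetical 400 mm leveling action point at 300 m/min.
delay = leveling_delay_seconds(400.0, 300.0)
print(round(delay, 3))  # 0.08 s
```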

This leads to the calculated correction on the corresponding defective material [2,3]. In auto-leveling draw frames, the machine settings and process-controlling parameters must be optimized, especially in the case of a change of fiber material or batches. The LAP is the most important auto-leveling parameter and is influenced by various parameters such as feeding speed, material, break draft gauge, main draft gauge, feeding tension, break draft, and the setting of the sliver guiding rollers.

Figure 1 Schematic diagram of an auto-leveler drawing frame.

Previously, the sliver samples had to be produced with different settings, taken to the laboratory, and examined on the evenness tester until the optimum LAP was
found (manual search). The auto-leveler draw frame RSB-D40 implements an automatic search function for the optimum determination of the LAP. During this function, the sliver is automatically scanned by temporarily adjusting different LAPs, and the resulting values are recorded. During this process, the quality parameters are constantly monitored and an algorithm automatically calculates the optimum LAP by selecting the point with the minimum sliver CV%. At present, a search range of 120 mm is scanned, i.e. 21 points are examined using 100 m of sliver in each case; therefore, 2100 m of sliver is necessary to carry out the search function. This is a very time-consuming method, accompanied by material and production losses, and hence it directly affects the cost parameters. In this work, we have tried to find out the possibility of predicting the LAP using artificial neural networks, to limit the automatic search span and to reduce the above-mentioned disadvantages.

Artificial Neural Networks

The motivation for using artificial neural networks lies in their flexibility and power of information processing, which conventional computing methods do not have. A neural network system can solve a problem "by experience and learning" from the input-output patterns provided by the user. In the field of textiles, artificial neural networks (mostly using back-propagation) have been extensively studied during the last two decades [4-6]. In the field of spinning, previous research has concentrated on predicting the yarn properties and the spinning process performance using the fiber properties, or a combination of fiber properties and machine settings, as the input of the neural networks [7-12]. Back-propagation is a supervised learning technique most frequently used for artificial neural network training.
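As a concrete illustration of supervised back-propagation training with steepest descent on a squared-error loss, here is a minimal sketch. The tiny network, toy data, and learning rate are our own assumptions, not the networks trained in this study.

```python
import numpy as np

# Minimal sketch of back-propagation with steepest (gradient) descent on a
# mean-squared-error loss. Network size, data and learning rate are toy choices.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(64, 3))      # 64 sample input patterns
t = X.sum(axis=1, keepdims=True) * 0.5        # toy targets for illustration

W1 = rng.normal(0.0, 0.5, size=(3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

losses = []
for _ in range(200):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    y = h @ W2 + b2                           # linear output layer
    err = y - t
    losses.append(float(np.mean(err ** 2)))   # squared-error criterion
    # Back-propagate: chain rule from the output error to each weight.
    gy = 2.0 * err / len(X)
    gW2 = h.T @ gy; gb2 = gy.sum(axis=0, keepdims=True)
    gh = gy @ W2.T * (1.0 - h ** 2)           # tanh'(x) = 1 - tanh(x)^2
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0, keepdims=True)
    # Steepest descent: step against the gradient direction.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0] > losses[-1])  # the training error decreases
```

The sample patterns are presented repeatedly and the weights adjusted along the negative gradient, exactly the first-order scheme described in the following paragraph.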

The back-propagation algorithm is based on the Widrow-Hoff delta learning rule, in which the weight adjustment is carried out through the mean square error of the output response to the sample input [13]. The set of these sample patterns is repeatedly presented to the network until the error value is minimized. The back-propagation algorithm uses the steepest descent method, which is essentially a first-order method for determining a suitable direction of gradient movement.

Overfitting

The goal of neural network training is to produce a network which produces small errors on the training set, but which also responds properly to novel inputs. When a network performs as well on novel inputs as on training set inputs, the network is said to generalize well. The generalization capacity of the network is largely governed by the network architecture (the number of hidden neurons), and this plays a vital role during the training. A network which is not complex enough to learn all the information in the data is said to be underfitted, while a network that is complex enough to fit the "noise" in the data leads to overfitting. "Noise" means variation in the target values that is unpredictable from the inputs of a specific network. All standard neural network architectures, such as the fully connected multi-layer perceptron, are prone to overfitting. Moreover, it is very difficult to acquire noise-free data from the spinning industry, owing to the dependence of the end products on the inherent material variations and environmental conditions, etc. Early stopping is the most commonly used technique to tackle this problem. It involves the division of the data into three sets, i.e. a training set, a validation set and a test set, with the drawback that a large part of the data (the validation set) can never be part of the training.

Regularization

The other solution to overfitting is regularization, which is a method of improving generalization by constraining the size of the network weights. MacKay [14] discussed a practical Bayesian framework for back-propagation networks, which consistently produced networks with good generalization. The initial objective of the training process is to minimize the sum of squared errors:

E_D = \sum_{i=1}^{n} (t_i - a_i)^2    (1)

where t_i are the targets and a_i are the neural network responses to the respective targets. Typically, training aims to reduce the sum of squared errors, F = E_D. However, regularization adds an additional term, giving the objective function

F = \beta E_D + \alpha E_W    (2)

In equation (2), E_W is the sum of squares of the network weights, and \alpha and \beta are objective function parameters. The relative size of the objective function parameters dictates the emphasis of the training. If \alpha \gg \beta, training will emphasize weight size reduction at the expense of network errors, thus producing a smoother network response [15].
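The trade-off in equation (2) can be made concrete with a small sketch. The error and weight values and the choice of alpha and beta below are arbitrary illustrations; in Bayesian regularization these parameters are estimated during training rather than fixed by hand.

```python
# Sketch of the regularized objective in equation (2): F = beta*E_D + alpha*E_W.
# All numbers here are illustrative, not values from the study.

def regularized_objective(errors, weights, alpha, beta):
    e_d = sum(e * e for e in errors)    # E_D: sum of squared network errors
    e_w = sum(w * w for w in weights)   # E_W: sum of squared network weights
    return beta * e_d + alpha * e_w

errors = [0.2, -0.1, 0.05]
weights = [0.5, -1.2, 0.8, 0.3]
print(regularized_objective(errors, weights, alpha=0.01, beta=1.0))
```

Increasing alpha relative to beta penalizes large weights more heavily, which is what drives the smoother network response described above.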

The Bayesian school of statistics is based on a different view of what it means to learn from data, in which probability is used to represent uncertainty about the relationship being learned. Before seeing any data, the prior opinions about what the true relationship might be can be expressed in a probability distribution over the network weights that define this relationship. After the program sees the data, the revised opinions are captured by a posterior distribution over the network weights. Network weights that seemed plausible before, but which do not match the data very well, will now be seen as much less likely, while the probability for values of the weights that do fit the data well will have increased [16]. In the Bayesian framework, the weights of the network are considered random variables. After the data is taken, the posterior probability function for the weights can be updated according to Bayes' rule:

P(w | D, \alpha, \beta, M) = \frac{P(D | w, \beta, M) \, P(w | \alpha, M)}{P(D | \alpha, \beta, M)}    (3)

In equation (3), D represents the data set, M is the particular neural network model used, and w is the vector of network weights. P(w | \alpha, M) is the prior probability, which represents our knowledge of the weights before any data is collected. P(D | w, \beta, M) is the likelihood function, which is the probability of the data occurring given the weights w. P(D | \alpha, \beta, M) is a normalization factor, which guarantees that the total probability is 1 [15].
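Equation (3) can be illustrated numerically for a single weight. The Gaussian prior, the noise model, and the single observation below are toy assumptions of ours, chosen only to show the prior being reshaped into a posterior.

```python
import numpy as np

# Toy illustration of Bayes' rule in equation (3) for one network weight w.
# Prior: w is probably near zero. Data: one observation t of w*x plus noise.
w = np.linspace(-3.0, 3.0, 1201)                      # grid of candidate weights
prior = np.exp(-0.5 * w ** 2)                         # unnormalized Gaussian prior
x, t = 1.0, 0.8                                       # a single data point
likelihood = np.exp(-0.5 * ((t - w * x) / 0.5) ** 2)  # probability of the data given w
posterior = prior * likelihood                        # numerator of Bayes' rule
posterior /= posterior.sum()                          # normalization: total probability 1
# Weights that fit the data have become more probable: the posterior peak
# moves from 0 (the prior peak) toward the observation.
print(round(w[np.argmax(posterior)], 2))
```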

In this study, we employed the MATLAB Neural Networks Toolbox function "trainbr", which incorporates the Levenberg-Marquardt algorithm and Bayesian regularization (or Bayesian learning) into back-propagation to train the neural network, in order to reduce the computational overhead of approximating the Hessian matrix and to produce good generalization capabilities. This algorithm provides a measure of how many of the network parameters (weights and biases) are being effectively used by the network. The effective number of parameters should remain the same, irrespective of the total number of parameters in the network. This eliminates the guesswork required in determining the optimum network size.

Experimental

The experimental data was obtained from Rieter, Ingolstadt, the manufacturer of the draw frame RSB-D40 [17]. For these experiments, the material selection and experimental design were based on the frequency of use of particular materials in the spinning industry. For example, carded cotton is the most frequently used material, so it was used as a standard and the experiments were performed on carded cotton with all possible settings, which was not the case with the other materials. Also, owing to the fact that not all the materials could be processed with the same roller pressure and draft settings, different spin plans were designed. The materials with their processing plans are given in Table 1. The standard procedure of acclimatization was applied to all the materials, and the standard procedure for the auto-leveling settings (sliver linear density, LAP, leveling intensity) was adopted. A comparison of manual and automatic searches was performed, and the better CV% results were achieved by the automatic search function of the RSB-D40. Therefore, the LAP searches were accomplished by the Rieter Quality Monitor (RQM). An abstract depiction of
the experimental model is shown in Figure 2.

Figure 2 Abstract neural network model.

Here, the point to be considered is that the machine offers no possibility of directly adjusting the major LAP-influencing parameter, i.e. the feeding speed. The feeding speed was therefore considered to be related to the delivery speed and the number of doublings according to equation (4). The delivery speed was varied between 300 and 1100 m/min and the doublings from 5 to 7, to achieve different values of the feeding speed:

Feeding Speed = (Delivered Count × Delivery Speed) / (Doublings × Feed Count)    (4)

Training and Testing Sets

For training the neural network, the experimental data was divided into three phases. The first phase included the experimental data for the initial compilation of the data and subsequent analysis. The prior knowledge regarding the parameters influencing the LAP, i.e. feeding speed, delivery speed, break draft, gauges of break and main draft, and the settings of the sliver guide, was used to select the data. So the first phase contained the experiments in which the standard settings were taken as a foundation and then one LAP-influencing parameter was changed in each experiment.
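Equation (4) above is simple enough to express as a small helper function; the count and speed values in the example are illustrative, not settings from Table 1.

```python
# Sketch of equation (4). The counts are sliver linear densities; with equal
# delivered and fed count, the feeding speed reduces to the delivery speed
# divided by the number of doublings.

def feeding_speed(delivered_count, delivery_speed, doublings, feed_count):
    """Feeding Speed = (Delivered Count x Delivery Speed) / (Doublings x Feed Count)."""
    return (delivered_count * delivery_speed) / (doublings * feed_count)

# Example: 1100 m/min delivery speed, 6 doublings, equal counts in and out.
print(round(feeding_speed(5.0, 1100.0, 6, 5.0), 1))
```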

In the second phase, the experiments were selected in which more than one influencing parameter was changed, and the network was allowed to learn the complex interactions. This selection was made on the basis of the ascertained influencing parameters, with the aim of increasing or decreasing the LAP length. The third phase involved the experiments conducted on the pilot-scale machine. These pilot-scale experiments were carried out by the machine manufacturer to obtain the response for different settings, so these results were selected to assess
the performance of the neural networks.

Pre-processing of data

Normalizing the input and target variables tends to make the training process better behaved by improving the numerical condition of the problem. It can also make training faster and reduce the chances of getting stuck in local minima. Hence, for the neural network training, because of the large spans of the network-input data, the inputs and targets were scaled for better performance. At first, the inputs and targets were normalized to the interval [-1, 1], which did not show any promising results. Afterwards, the data was normalized to the interval [0, 1] and the networks were trained successfully.

Neural Network Training

We trained five different neural networks to predict the LAP by gradually increasing the number of training data sets. The data sets were divided into training and test sets as shown in Table 2. The training was performed with the training sets, and the test sets were reserved to judge the prediction performance of the neural network in the form of an error. Figure 3 depicts the training performance of the neural network NN 5.

Figure 3 Training performance of NN 5.

Results and Discussion

As already mentioned in Table 2, different combinations of the data sets were used to train the five networks, keeping a close eye on their test performance, i.e. performance on the unseen
