Design of an Auxiliary Device for Manual Apple Picking (includes 7 CAD drawings and an opening report)
Appendix A: English Original Text

Auto Recognition of Navigation Path for Harvest Robot Based on Machine Vision

Bei He 1, Gang Liu 1, Ying Ji 1,2, Yongsheng Si 1,2, and Rui Gao 1
1 Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing 100083, China
2 College of Information Science & Technology, Agricultural University of Hebei, Baoding 071001, China
chujining@163.com

Abstract: An algorithm for generating a navigation path in an orchard for a harvesting robot based on machine vision was presented. According to the features of orchard images, a horizontal projection method was adopted to dynamically recognize the main trunk areas. Border crossing points between the trees and the ground were detected by scanning the trunk areas, and these points were divided into two clusters, one on each side. Using least-squares fitting, two border lines were extracted. The central line was obtained from the two border lines, and this straight line was regarded as the navigation path. Matlab simulation results show that the algorithm can effectively extract the navigation path in a complex orchard environment, with a correct recognition rate of 91.7%. The method proves stable and reliable, and the deviation of the simulated navigation angle from the manually recognized angle is around 2%.

Keywords: Navigation path, Machine vision, Orchard environment, Image segmentation, Least-squares fitting.

1 Introduction

As a type of agricultural robot, the fruit-picking robot has great potential for application. Picking robot technology mainly covers three aspects: recognition, picking and movement [1]. Recognition consists of recognizing ripe fruits and acquiring their locations; picking mainly includes the design of the mechanical arms and their motion control; movement mainly refers to robot navigation. At present, recognition has been studied by many research institutions, and relatively mature methods exist for many varieties of fruits and vegetables such as apples, oranges and cucumbers [2-6]. However, research on robot navigation in the open environment of an orchard is rarely reported [7].

With the development of automation, commonly used navigation sensors currently include the global positioning system (GPS), vision sensors, ultrasonic sensors, laser scanners and geomagnetic direction sensors [8-9]. Current research mainly focuses on two promising methods, machine vision and GPS navigation. Most studies of automatic guidance systems deal with spatial positioning-sensing systems and steering control systems for following a predetermined path [10], and only a few studies address orchard navigation robots. Applying machine vision to orchard navigation has many advantages and can effectively solve the problems of autonomous navigation for agricultural robots, as explained below. First, it does not require a specific navigation aid. Second, machine vision can adapt to a complex environment, including complex terrain and unknown, variable environment parameters. Finally, it offers a flexible visual field, integrated information, and high reliability and accuracy. With vision techniques applied to the navigation of a harvesting robot, the robot can move autonomously more effectively.

The research in this paper mainly concerns the traveling device, the vision system, and the arm and gripper of the apple-picking robot.
The machine vision system recognizes and locates fruits, and the navigation system provides the moving route, as shown in Figure 1. This paper presents a method that uses machine vision to acquire orchard images, which are then analyzed to obtain the navigation route and achieve visual navigation of the robot.

Fig. 1. Components of the robot

2 Materials and Methods

2.1 Characteristics of the Orchard Environment Image

Orchard navigation, like farmland navigation, can use ridge boundary line detection, taking the aerial view of the whole orchard as the target, to obtain the navigation route through the vision system and complete autonomous navigation [11]. In reality, the orchard environment is complex, with non-structural characteristics and a diverse background. Besides, the orchard is affected by natural sunshine, temperature and other natural factors. Fruit trees of different sizes and varied growth patterns make the orchard more uncertain, which raises the real-time requirements of orchard navigation.

The plants of early crops in the farmland, such as wheat and soybeans, are relatively short and are cultivated neatly in rows, with each row parallel to the others [12]. Meanwhile, the crops are usually green, the crop rows are consecutive and form straight lines or curves of small curvature, and the detected navigation characteristics do not mutate within a short time [13]. However, fruit trees have different heights, complex levels and random spatial arrangements; the vision system can hardly detect obvious and consecutive navigation characteristics and cannot directly use the algorithms of current farmland visual navigation systems.

Standard fruit trees are fairly tall, with a trunk height of usually 70-80 cm, which makes it easier to distinguish the trunk from the background visually. Based on these characteristics of standard fruit trees, the intersection points of the trunks and the ground can be found after highlighting the main trunks, and these points can be used to generate the navigation path.

2.2 Image Acquisition and Laboratory Equipment

The images used in the experiments were collected from apple orchards in Nanlang Village, Qinshui County, Shanxi Province, and Taolin Village, Changping District, Beijing. The resolution of the collected images is 640 × 480. The image processing computer has a 2.2 GHz CPU and 1.25 GB of memory. The simulation platform is Matlab R2009a.

2.3 Identification of the Main Area

At present, most apple trees are planted densely, but different standard trees have different growth patterns. The apple trees in Nanlang Village, Shanxi Province, are a mixture of Gala and Fuji, which have characteristics similar to the Qiaohua tree, whose trunk is about 70-80 cm high. Based on this characteristic, the main trunk area of the apple tree can be made more visible through color-based image segmentation. Through the analysis of the line profile map, a suitable partition factor can be found by studying the color characteristics of the trunks and the background area, as shown in Figure 2.

Fig. 2. Result of line profile map

For the studied line L, the R and B curves are the gray values of the red and blue components respectively, and the yellow curve represents the R-B value. It is easy to see that there is little difference between red (R) and blue (B) in the trunk area, while R and B of the soil differ a lot, and the green component of the leaves is somewhat more prominent.
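This line-profile analysis can be reproduced with a few lines of code. The sketch below is illustrative only and is not the authors' implementation; it assumes an RGB orchard image loaded with OpenCV, and the file name and scan-line choice are hypothetical.

```python
# Minimal sketch of a line-profile check (illustrative; not the paper's code).
import cv2
import numpy as np

img = cv2.imread("orchard.png")        # hypothetical file name; OpenCV loads BGR
row = img.shape[0] * 3 // 4            # pick a scan line crossing trunks and soil
b, g, r = cv2.split(img)

# Cast to a signed type so the R-B difference can go negative before inspection.
rb_profile = r[row, :].astype(np.int16) - b[row, :].astype(np.int16)

# Along such a scan line the R-B profile behaves differently over trunk pixels and
# over the soil/leaf background, which motivates using R-B as the gray level.
print(rb_profile.min(), rb_profile.max())
```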
The R-B difference can therefore separate the trunk area from the orchard background effectively. It can be seen from the yellow curve that the R-B components of the trunk lie in a peak region, while the R-B components of the background area lie in a much flatter region. The small amount of noise produced by orchard pruning, where trimmed branches fall on the ground, is eliminated in a later step of the algorithm.

After the original image has been transformed into a gray-level image using the R-B color factor, the optimal segmentation threshold is obtained by the two-dimensional OTSU algorithm [14], and the gray image is then binarized to obtain the required information in the trunk area. The algorithm uses not only the intensity distribution of the pixels but also the spatial information of neighboring pixels, which makes it better than the one-dimensional OTSU segmentation algorithm.

(a) Original image (b) Gray image (c) Two-dimensional OTSU
Fig. 3. Binary image of the orchard

There is a small amount of dry branches, weeds, etc. on the ground, which can be regarded as noise, and small branches also easily produce noise. Before extracting the trunk region, morphological image processing is needed to avoid the effects of this noise on the extraction of the trunk characteristics. The binary image is eroded and then dilated with a 3×1 structuring element: the erosion removes the effects of small dry branches, and the dilation along the growth direction of the trunk fills holes. After this process some noise is still left, which should be removed to avoid misleading the subsequent feature extraction. First, label the connected regions of the morphologically processed image and calculate the area of each region, then remove every region whose area is less than 1/15 of the largest area; the resulting binary image is shown in Figures 3 and 4(a).

(a) Image after post-processing (b) Distribution of the horizontal projection of each line
Fig. 4. Main trunk area detection by horizontal projection

From the binary image in Figure 4(a) it can be seen that the intersections of the main trunks and the ground are concentrated in the lower part of the image; the farther away a main trunk is, the smaller it appears. The upper part of the image contains small branches, sky, and so on. In order to highlight the main trunk area, the local characteristics of the fruit trees can be neglected and the horizontal projection method is used to extract the main trunk area. The steps are as follows. Let the image resolution be M × N and let I(i, j) be the gray value of the image at point (i, j). Scan the binary image row by row and calculate the horizontal projection value s(i) of each row:

s(i) = ∑_{j=1}^{M} I(i, j),   i = 1, 2, …, N

Then select an appropriate threshold: rows whose projection satisfies it belong to the main trunk area, and the others do not. From Figure 4(b) it is easy to see that the horizontal projection value in the central area is very low; that is the line where the trunks, small branches and sky separate from each other. In practice, a threshold T = 200 is set first; when the horizontal projection value of a row is smaller than the threshold, the row number is recorded and only the pixels below that line are kept, extracting the trunk area. After extracting the main part of the image in this way, the image resolution used in the follow-up processing is M × h.
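A compact sketch of this trunk-area extraction step is given below. It is an illustration under stated assumptions rather than the paper's code: OpenCV's plain one-dimensional Otsu threshold is substituted for the paper's two-dimensional OTSU algorithm, the area filtering is omitted, and the file and variable names are hypothetical.

```python
# Illustrative sketch of section 2.3: R-B segmentation plus horizontal projection.
import cv2
import numpy as np

img = cv2.imread("orchard.png")                    # hypothetical file name
b, g, r = cv2.split(img)
gray = cv2.subtract(r, b)                          # R-B color difference as gray level

# 1-D Otsu threshold used here as a simpler stand-in for the paper's 2-D OTSU.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3x1 erosion then dilation, as described: erosion suppresses small dry branches,
# dilation along the trunk direction fills small holes.
kernel = np.ones((3, 1), np.uint8)
binary = cv2.dilate(cv2.erode(binary, kernel), kernel)

# Horizontal projection s(i): count of foreground pixels in each row.
s = np.count_nonzero(binary, axis=1)

# Keep only the rows below the first row whose projection drops under T = 200;
# that dip separates branches/sky above from the main trunk area below.
T = 200
low_rows = np.where(s < T)[0]
if low_rows.size:
    trunk_area = binary[low_rows[0]:, :]           # resolution becomes M x h
```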
2.4 Main Feature Point Extraction

Since the trunks of standard trees are upright and easy to distinguish, the intersection of the trunk and the ground can be taken as the feature point representing each fruit tree, which reduces computation. Because of the effect of small branches, a very small amount of noise still exists in the picture after trunk extraction. Using the area threshold method, regions whose area is less than 1/80 of the maximum area are removed, giving a binary image with a clear trunk area, as shown in Figure 5(a).

The feature point extraction algorithm is as follows:
(1) Set up an empty matrix P of the same size as the trunk-extraction region image, namely M × h.
(2) Label the regions of the noise-removed image; each labeled region represents one fruit tree with a trunk feature. Suppose there are n regions in the image.
(3) Scan the labeled region k line by line. Let the current row be row i and test the pixels one by one in column order. If the pixel value of the current point (i, j) is k, and the pixel values of points (i, j+1) and (i+1, j) are both 0, the point fits the characteristics of an intersection between trunk and ground (referred to as a candidate point).
(4) Put the coordinates of the candidate points of region k into the matrix P, then search for the point with the largest row coordinate, i.e. the point closest to the ground, to represent the fruit tree; set the remaining points as background points.
(5) If k > n, stop searching; otherwise, return to step (3).

After scanning the feature points by the above steps, each region has a unique feature point representing its fruit tree. A comparison of 60 apple-orchard pictures taken from similar positions, at different times and of different parts shows that the feature points, i.e. the intersections of trunk and ground, are distributed on both sides of the image. Therefore, when the feature points are classified, the vertical mid-line of the image (half of the total number of image columns) is taken as the baseline: feature points on the left of the image are saved into array Q1, while feature points on the right are saved into array Q2.

2.5 Navigation Path Line Detection

Like field crops, fruit trees naturally form straight rows, and the path of a mobile robot over a short time can be approximated by a straight line, so a straight-line path model can be used in this study [13]. The most commonly used line detection methods are the least-squares method, the Hough transform and methods derived from them. This paper takes the intersection points of the fruit trees and the ground as the feature points. Since the feature points in the field of view are limited, the least-squares method, which is fast and accurate, is adopted to fit the two border lines. Finally, the robot's navigation path is generated by extracting the center line of the two border lines.

(a) The main trunk area (b) Feature points (c) Navigation line detection
Fig. 5. Guidance path line of an apple orchard (Qinshui, Shanxi)
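The classification and fitting steps of Sections 2.4-2.5 reduce to a few array operations. The sketch below is a hedged illustration under assumed inputs: the feature points are hypothetical (row, column) coordinates, not values from the paper, and ordinary least squares via numpy.polyfit stands in for the fitting procedure.

```python
# Illustrative sketch: classify feature points, fit two border lines, take the center line.
import numpy as np

# Hypothetical trunk/ground intersection points as (row, col) pixel coordinates.
points = np.array([(300, 80), (260, 120), (230, 150),      # left-hand tree row
                   (295, 560), (255, 510), (225, 470)])     # right-hand tree row

width = 640                                  # image width used in the paper
left = points[points[:, 1] < width // 2]     # Q1: points left of the vertical mid-line
right = points[points[:, 1] >= width // 2]   # Q2: points right of the mid-line

# Least-squares fit col = a*row + b for each border line; using the row index as the
# free variable keeps near-vertical lines in the image well conditioned.
a_l, b_l = np.polyfit(left[:, 0], left[:, 1], 1)
a_r, b_r = np.polyfit(right[:, 0], right[:, 1], 1)

# Center line of the two borders: for every row, the midpoint of the two border columns.
a_c, b_c = (a_l + a_r) / 2.0, (b_l + b_r) / 2.0

# Navigation angle relative to the horizontal image axis (Section 3): a value close to
# 90 degrees means the robot is heading straight down the middle of the tree rows.
angle = np.degrees(np.arctan2(1.0, a_c))
print(round(angle, 1))
```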
3 Results and Analysis

The angle between the geometric center line, which serves as the navigation line, and the horizontal line is an important navigation factor. It determines the angle that the robot needs to adjust; when the angle is close to 90°, the robot is walking along the best direction for safe movement within the visual field [11].

Sixty images were used to test the algorithm: 20 were taken in orchards in Nanlang Village, Qinshui County, Shanxi Province, and the rest were taken in orchards in Taolin Village, Changping District, Beijing. The correct recognition rate was 91.7%.

Table 1. Comparison of navigation lines in different orchards

Orchard            Weather condition    Figure number   Segmented result        Extracted feature points (left line / right line)   Simulation navigation angle /(°)
Taolin, Beijing    Sunny (front lit)    Fig. 6          Effective, Fig. 6(b)    3 / 3                                                114.2
Qinshui, Shanxi    Sunny (front lit)    Fig. 5          Effective, Fig. 3(c)    5 / 6                                                82.5
Qinshui, Shanxi    Cloudy               Fig. 7          Effective, Fig. 7(b)    5 / 5                                                95.8

Table 1 shows the segmentation results, the extracted feature points and the simulated navigation angles generated by this algorithm in different orchard environments, together with the automatically extracted navigation lines for the two different orchard backgrounds. It can be seen from the table that the navigation line can be generated automatically in various orchard environments. The angles of the simulated navigation lines are close to 90° and fulfill the path extraction requirements of autonomous navigation in a complex orchard environment.

(a) Original image (b) Segmentation image (c) Image after post-processing (d) The main trunk area (e) Feature points (f) Navigation line detection
Fig. 6. Guidance path line of an apple orchard (Taolin, Beijing)

The five images for which navigation line extraction failed were all taken in Taolin Village, Changping District, Beijing. The main reason for failure is the complexity of the background: iron rods stand next to the fruit trees, which causes adjacent trunk regions to merge during segmentation, reduces the number of feature points and results in detection errors. Secondly, because of lighting effects, there are many shaded areas in the trees, making the colors of dry twigs and leaves similar and causing false segmentation.

To verify the reliability of the algorithm, the simulation results were compared with manual recognition. Four images were taken from each orchard, and the angle of the manually fitted navigation line with respect to the horizontal was calculated in Matlab to obtain the deviation between the manually recognized angle and the simulated navigation angle (shown in Table 2); the deviation rate turns out to be around 2%.

(a) Original image (b) Segmentation image (c) Image after post-processing (d) The main trunk area (e) Feature points (f) Navigation line detection
Fig. 7. Guidance path line of an apple orchard (Qinshui, Shanxi)

Table 2. Comparison between simulation and artificial recognition

Orchard environment             No.   Simulation navigation angle /(°)   Artificial recognition angle /(°)   Deviation /(°)   Deviation rate /%
Taolin, Beijing (Fig. 6 etc.)   1     113.2                              110.6                               2.6              2.35
                                2     106.9                              103.6                               3.3              3.19
                                3     94.1                               97.2                                3.1              3.19
                                4     80.7                               83.8                                3.1              3.70
Qinshui, Shanxi (Fig. 7 etc.)   5     82.5                               83.9                                0.6              0.72
                                6     86.1                               85.7                                0.3              0.35
                                7     93.8                               92.6                                1.2              1.30
                                8     87.5                               86.8                                0.7              0.81
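As a small sanity check of how the Table 2 columns relate (my reading of the table, not a formula stated in the paper): the deviation is the absolute difference between the simulated and manually recognized angles, and the deviation rate expresses that difference as a percentage of the manual angle.

```python
# Reproduce the first Taolin row of Table 2 under the assumed definitions above.
simulated, manual = 113.2, 110.6
deviation = abs(simulated - manual)        # 2.6 degrees
rate = deviation / manual * 100.0          # about 2.35 percent
print(round(deviation, 1), round(rate, 2))
```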
4 Conclusions

The color difference R-B and the two-dimensional OTSU algorithm were employed to segment the trunks from the background. Dead leaves and the soil background did not affect the segmentation of the trunk region, but the algorithm is not effective when green weeds are present on the ground. A morphological method was adopted to eliminate noise such as tiny branches and fallen leaves, the horizontal projection method was adopted to dynamically recognize the tree trunks, and region segmentation was used to eliminate the influence of tiny branches a second time. This algorithm can extract the main trunk area effectively.

By scanning the trunk areas, the border crossing points between the bottoms of the trees and the ground were detected, and these points were divided into two clusters on the two sides based on their neighboring relationship. Using least-squares fitting, two border lines were extracted, and the central line was obtained from the two lines. The method is robust and effective in many orchard environments; the recognition rate is 91.7%.

The simulation results were compared with manual recognition in two orchard environments. The results show that the generated navigation path is reliable and safe and can satisfy the movement requirements of a harvesting robot.

This algorithm is suitable for orchards where the ground has little weed cover and the main trunk areas of standard trees are clearly visible. For stunted trees, or where there are many weeds and the background is extremely complex, it is better to improve the algorithm or use another method.

Acknowledgments. This research is sponsored by project 2006AA10Z255 and the National Natural Science Foundation of China (Grant No. 30900869). All of the mentioned support is gratefully acknowledged.

References
[1] Kitamura, S., Oka, K.: Recognition and Cutting System of Sweet Pepper for Picking Robot in Greenhouse Horticulture. In: Proceedings of the IEEE International Conference on Mechatronics & Automation, Niagara Falls, Canada, pp. 1807–1812 (2005)
[2] Tarrio, P., Bernardos, A.M., Casar, J.R., Besada, A.: A Harvesting Robot for Small Fruit in Bunches Based on 3-D Stereoscopic Vision. In: 4th World Congress Conference on Computers in Agriculture and Natural Resources, USA (2006)
[3] Kondo, N., Yamamoto, K., Yata, K., Kurita, M.: A Machine Vision for Tomato Cluster Harvesting Robot. In: ASABE Annual International Meeting, Rhode Island (2008)
[4] Fangming, Z., Naiqian, Z.: Applying Joint Transform Correlator in Tomato Recognition. In: ASABE Annual International Meeting, Rhode Island, pp. 1–9 (2008)
[5] Hannan, M.W., Burks, T.F., Bulanon, D.M.: A Real-time Machine Vision Algorithm for Robotic Citrus Harvesting. In: ASABE Annual International Meeting (2007)
[6] Bulanon, D.M., Kataoka, T., Ota, Y.: A Segmentation Algorithm for the Automatic Recognition of Fuji Apples at Harvest