基于果萼图像的猕猴桃果实夜间识别方法
傅隆生1,孙世鹏1,Vázquez-Arellano Manuel2,李石峰1,李 瑞1,崔永杰1※
(1. 西北农林科技大学机械与电子工程学院,杨凌 712100;2. Institute of Agricultural Engineering, University of Hohenheim, Stuttgart 70599, Germany)
摘 要:根据猕猴桃的棚架式栽培方式,提出了一种适用于猕猴桃采摘机器人夜间识别的方法。采用竖直向上获取果实图像的拍摄方式,以果萼为参考点,进行果实的识别,并测试该方法对光照的鲁棒性。试验结果表明:基于果萼能够有效地识别猕猴桃果实,成功率达94.3%;未识别和误识别的果实一般出现在5果及5果以上的簇中,原因是果实相互挤压导致果萼部分不在果实图像的中心区域,以及果实之间的三角区形成暗色封闭区域;光照过小或过大会导致成像模糊或过曝,对正确率有细微影响;识别速度达到了0.5 s/个。因此,基于果萼的猕猴桃果实夜间识别方法在正确识别率和速度上都有很大提升,更接近实际应用。
关键词:机器人;图像识别;农作物;猕猴桃;果萼;夜间识别;毗邻果实
0 引 言
中国是世界上猕猴桃种植面积最大的国家[1-2],但采摘目前仍主要依靠人工,是猕猴桃种植中最费时费力的环节[3]。此外,中国用于水果采摘的劳动力占整个生产过程所用劳动力的1/3以上[4-6]。因此,研究猕猴桃等果实的采摘机器人具有重要意义[7]。
猕猴桃一般采用棚架式的栽培方式,猕猴桃果实颜色与枯草、枯叶、枝干、果柄等复杂背景的颜色相近[8]。因此自然环境下对目标果实的准确分割、特征提取、识别和定位是猕猴桃采摘机器人视觉系统需要解决的一个关键问题[9]。
针对猕猴桃果实的日间识别,已有研究多是参考其它果实[10-15],从果实斜侧面获取图像进行识别。丁亚兰等[16]提出利用R-B颜色因子,采用固定阈值93进行图像分割,获得果实区域,但未涉及单个猕猴桃的识别;崔永杰等[17]利用L*a*b*颜色空间a*通道对猕猴桃图像进行分割,采用椭圆形Hough变换拟合单个果实轮廓从而识别每个果实;武涛等[18]运用Otsu算法在a*通道进行图像分割,结合分水岭算法识别单个果实;慕军营等[19]同样使用Otsu算法在a*通道进行图像分割,但针对Canny算子获取的边缘图像,采用正椭圆Hough变换识别猕猴桃果实,平均时间为3.98 s,成功率为88.5%;崔永杰等[20]在对比猕猴桃果实及其背景的颜色特征基础上,提出利用0.9R-G进行图像分割,再采用椭圆形Hough变换进行单个果实的识别,成功率为89.1%。以上研究主要利用猕猴桃的椭圆形特征进行目标识别,但未有效解决果实的重叠遮挡问题。
由于机器人拥有全天候工作的优势[21-23],故也需要进行夜间果实的识别[24]。根据猕猴桃的棚架式栽培方式而形成果实自然下垂且成簇位于枝叶下方的特点,Fu等[25]提出从地面竖直向上获取图像进行果实识别的方法,并研究了最佳照明设置。该研究根据试验结果,提出1.1R-G颜色特性进行图像分割,针对Canny算子获得的图像边缘,采用最小外接矩形法和椭圆形Hough变换识别每个果实。研究结果表明光照为50 lux时的识别效果最好,达88.3%,平均每个果实的识别时间为1.64 s。但识别时间过长,相比于采摘机器人研究中苹果的0.35 s[26]和草莓的0.44 s[21],该研究还有很大提升空间,且成功率也有望提高。
因此,本文针对前述研究存在的问题,结合夜间环境下从底部对猕猴桃成像时每个果实的果萼均显现的特点,提出基于果萼的猕猴桃果实夜间识别方法,以期达到提高识别率和速度的目的。
1 材料与方法
1.1 图像采集
在西北农林科技大学眉县猕猴桃试验站(34°07'39''N,107°59'50''E,海拔648 m),以当地最为广泛种植的海沃德品种为研究对象,于每年收获季节的10月下旬采集图像。具体场景如图1所示,将一个常规摄像头(Microsoft LifeCam Studio,分辨率640像素×360像素)通过三脚架置于果实下方20 cm处(实际被成像果实区域约为36 cm×20 cm),并连接至笔记本计算机(ThinkPad T400,2.53 GHz)用于保存图像。为实现目标果实处的均匀光照,利用平板光源照明柔和均匀的特点,由一台距果实1 m的无级可调光LED影视平板灯(CM-LED 1200HS,武汉珂玛影视灯光科技有限公司,1 m处最大照度为1 200 lux)提供照明。为测试不同光照下的识别效果并确定最佳光照度,设置了12个光照水平(10、30、50、80、110、150、200、300、400、500、800、1 200 lux),光照度以果实附近3次测量值的平均值为准(TES-1332A数字式照度计,台湾泰仕电子工业股份有限公司)。由于猕猴桃果实自然下垂的生长特点和末端执行器自下向上包络分离的工作方式,采用底部成像时,为一次性将多个果实纳入图像,视觉系统和光源离果实不会很近,因此产生局部曝光过度的可能性较低。
1.2 图像预处理
猕猴桃本身呈棕色[27-28],背景大多是绿色或者浅绿色的叶子、藤蔓等,以及少许的支架和夜晚黑色的背景信息,如图2a所示。由前述研究[25]可知,提取RGB图像1.1R-G合成的灰度图像,最有利于分割猕猴桃与背景,如图2b所示。通过中值滤波去除灰度图像中的噪声,采用最大类间方差法(Otsu)[29]获取自动阈值,将灰度图像转化为二值图像,如图2c所示。再标记连通域,由于连通域为多个或单个猕猴桃区域,取像素面积最大的连通域为参考,去除所有小于参考像素面积0.2倍的连通域,如图2d所示。然后按最大面积的1/45进行膨胀,再利用空洞填充函数填充空洞,获取果实所在区域,如图2e所示。最后与原始的RGB图像按位做与运算,即可获得果实区域图像,如图2f所示。
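上述预处理流程可用如下示意代码表达(基于Python与OpenCV的示例实现,中值滤波核尺寸、膨胀结构元素大小等具体参数均为假设值,并非本文试验所用程序):

```python
import cv2
import numpy as np

def preprocess_fruit_region(bgr):
    """提取果实区域掩膜:1.1R-G灰度化、Otsu分割、面积滤波、膨胀与孔洞填充(示意实现)。"""
    b, g, r = cv2.split(bgr.astype(np.float32))
    gray = np.clip(1.1 * r - g, 0, 255).astype(np.uint8)        # 1.1R-G 合成灰度图
    gray = cv2.medianBlur(gray, 5)                               # 中值滤波去噪(核尺寸为假设值)
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu 自动阈值二值化

    # 标记连通域,以面积最大的连通域为参考,去除面积小于其 0.2 倍的连通域
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
    if n <= 1:
        return bw
    areas = stats[1:, cv2.CC_STAT_AREA]
    max_area = int(areas.max())
    keep = np.flatnonzero(areas >= 0.2 * max_area) + 1           # 保留的连通域标号
    mask = np.where(np.isin(labels, keep), 255, 0).astype(np.uint8)

    # 膨胀后填充空洞;结构元素尺寸按最大面积估计(此处取其平方根的 1/45,为示意性假设)
    k = max(3, int(np.sqrt(max_area) / 45) * 2 + 1)
    mask = cv2.dilate(mask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k)))
    flood = mask.copy()
    h, w = mask.shape
    cv2.floodFill(flood, np.zeros((h + 2, w + 2), np.uint8), (0, 0), 255)  # 从图像角点漫水填充背景(假设角点为背景)
    return mask | cv2.bitwise_not(flood)                                    # 未被漫水填充的黑色区域即空洞,并入掩膜

# 用法示例:fruit_img = cv2.bitwise_and(img, img, mask=preprocess_fruit_region(img))
```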
1.3 图像识别
由于猕猴桃的果实区域与果萼部分的亮度值有很大的不同,因此利用亮度分量图像提取果萼部分,图3为图像识别过程。具体步骤如下:
1)把RGB颜色空间转换到HSV颜色空间,提取V(亮度)分量图像,如图3a所示。
2)对图像进行中值滤波后,在果实区域内提取Otsu阈值进行二值化处理,如图3b所示。
3)取最大白色区域像素数的平方根的1/90作为结构元素尺寸进行形态学开运算,获得包含果萼部分的果实区域图像,如图3c所示。图中黑色部分,最大的为背景区域,其次为果萼部分,最小的为噪声。因此对所有黑色区域求取像素面积,并从大至小排序为S1、S2、…、SN(N为黑色区域的数量,N大于1时表示图像中有猕猴桃果实),最大值S1为黑色背景的像素数,次大值S2则为某个猕猴桃果萼部分的像素数。由于猕猴桃果萼部分面积最大值与最小值之比一般在10倍以内,而噪声面积必定小于任意一个猕猴桃果萼部分,因此取面积次大的黑色区域的像素数S2作为果萼部分的参考面积。若某黑色区域的面积小于参考面积的0.1倍,则为噪声区域;若介于0.1倍参考面积与参考面积之间,则为其他果萼,从而获得果萼的数量,即为猕猴桃个数。
1.噪声 2.果萼区域
1. Noise 2. Fruit calyx area
注:图3f中的亮点表示果萼位置,圆圈表示识别的每个果实区域。下同。
Note: Bright points and circles in Fig. 3f are identified fruit calyx and recognized fruit area. Same as below.
图3 果实识别过程
Fig.3 Fruit recognition process
当Si>0.1×S2且Si+1≤0.1×S2时,则果萼数量n=i–1(i=2,3,…,N)。
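果萼判别与计数的过程可用如下示意代码表达(Python与OpenCV的示例实现,仅用于说明"按面积降序排序、以0.1倍参考面积S2判别果萼"的逻辑,函数组织与参数均为假设):

```python
import cv2
import numpy as np

def detect_calyxes(bgr, fruit_mask):
    """在果实区域内基于V分量检测果萼,返回各果萼区域的像素坐标列表(示意实现,参数为假设值)。"""
    v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]            # 提取亮度 V 分量
    v = cv2.medianBlur(v, 5)
    # 仅用果实区域内的像素计算 Otsu 阈值,再二值化(白色为果实亮区,黑色为果萼/背景)
    thr, _ = cv2.threshold(v[fruit_mask > 0].reshape(-1, 1), 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bw = ((v > thr) & (fruit_mask > 0)).astype(np.uint8) * 255
    if not bw.any():
        return []

    # 形态学开运算,结构元素尺寸取最大白色区域像素数的平方根的 1/90
    stats_w = cv2.connectedComponentsWithStats(bw)[2]
    white_max = int(stats_w[1:, cv2.CC_STAT_AREA].max())
    k = max(3, int(np.sqrt(white_max) / 90) * 2 + 1)
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k)))

    # 对黑色区域按面积从大到小排序:S1 为背景,S2 作为果萼参考面积
    n, labels, stats, _ = cv2.connectedComponentsWithStats(255 - bw)
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1    # 黑色连通域标号,按面积降序
    areas = stats[order, cv2.CC_STAT_AREA]
    if len(areas) < 2:
        return []
    ref = areas[1]                                               # 参考面积 S2
    calyxes = []
    for lab, area in zip(order[1:], areas[1:]):                  # 跳过背景 S1
        if 0.1 * ref < area <= ref:                              # 介于 0.1 倍参考面积与参考面积之间判为果萼
            ys, xs = np.where(labels == lab)
            calyxes.append((xs, ys))
    return calyxes
```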
由于每个果萼部分都近似圆形或椭圆形,所以对每一个果萼区域标记的像素点坐标求取中位数,即为该果萼的中心坐标。针对图2e所示的二值图像利用Canny算子获得猕猴桃果实边缘图像,如图3d所示。对每一个猕猴桃果萼中心,遍历边缘图像中的像素点,寻找距该中心最近的边缘像素点,如图3e所示。以该距离为半径、果萼中心为原点,绘制猕猴桃区域圆,即可确定果实位置和区域,如图3f所示。
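果萼中心定位与果实区域圆的确定可用如下示意代码表达(延续上文示例的假设,Canny阈值等参数为示意值):

```python
import cv2
import numpy as np

def locate_fruits(fruit_mask, calyxes):
    """以果萼像素坐标的中位数为圆心、到最近果实边缘点的距离为半径,确定每个果实的圆形区域(示意实现)。"""
    edges = cv2.Canny(fruit_mask, 50, 150)                       # 对果实区域二值图提取边缘(阈值为示意值)
    ey, ex = np.nonzero(edges)
    edge_pts = np.stack([ex, ey], axis=1).astype(np.float32)
    fruits = []
    for xs, ys in calyxes:
        cx, cy = float(np.median(xs)), float(np.median(ys))      # 果萼中心:像素坐标的中位数
        d = np.hypot(edge_pts[:, 0] - cx, edge_pts[:, 1] - cy)
        r = int(d.min())                                         # 最近边缘点的距离作为半径
        fruits.append(((int(cx), int(cy)), r))
    return fruits

# 可视化示例:
# for center, r in locate_fruits(mask, calyxes):
#     cv2.circle(img, center, r, (0, 255, 0), 2)
```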
此时虽然不能精确地分割猕猴桃果实所占的区域,但由于果萼位置和主要区域已确定,采用本项目开发的猕猴桃采摘机器人末端执行器[9]能够实现果实的采摘。该末端执行器根据猕猴桃的生长特点,采用仿形设计的理念,利用相邻猕猴桃果实在底部形成的人字形毗邻间隙以及果实竖直下垂、具有摆动空间的优势,从果实底部旋转上升伸入毗邻间隙,逐渐包络分离毗邻果实,实现前后夹持和抓取。最后,末端执行器向上转动实现果实-果柄的分离。该末端执行器[9]由于采用逐渐包络的方式分离毗邻果实并抓持,试验结果表明允许的误差半径为10 mm,因此只需要知道果萼位置和果实的大部分区域即可,避免了果实实际区域难以精确定位的问题。
2 结果与分析
试验共采集36簇猕猴桃果实的图像(2013年10月26日,5簇20个果实;2014年10月23日,10簇40个果实;2014年10月27日,10簇35个果实;2015年10月25日,11簇45个果实;共140个果实,平均每簇4个果实),每簇分别在12个不同光照水平下采集了1幅夜视图像,共432幅猕猴桃夜视图像。图4为其中1簇猕猴桃对应的12幅夜视图像。当光照度较低时(如图4a和图4b所示的10和30 lux),图像较暗且有些模糊;当光照度大于50 lux后,图像都比较清晰。本文以50 lux光照下的图像进行分析。
a. 10 lux b. 30 lux c. 50 lux d. 80 lux e. 110 lux f. 150 lux g. 200 lux h. 300 lux i. 400 lux j. 500 lux k. 800 lux l. 1 200 lux
2.1 果萼的识别效果
果实的识别率取决于果萼的识别结果,因此以50 lux下的果萼识别效果为例,先分析某一光照下的结果。根据每簇所包含的猕猴桃果实数,将36簇样本分为5类:2果簇、3果簇、4果簇、5果簇、5果以上簇,每类的簇数和果实数如表1所示。大部分猕猴桃簇都是包含3个、4个或5个果实,占试验样本总簇数的80.6%。2果簇和5果以上簇相对较少,分别占11.1%和8.3%。与实际调研中发现的大部分簇包含3至5个果实的结果一致。
根据试验结果,识别效果分为3类:未识别的果萼(果萼存在,却未识别出来)、误识别的果萼(将不是果萼的位置识别为果萼)、正确识别的果萼,具体识别结果如表1所示。以正确识别率作为评价指标,定义为
P = Nc/(Nc + Nw) × 100%
式中,Nc为正确识别的果萼数;Nw为误识别的果萼数;P为正确识别率,%。
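例如(示例数字,仅用于说明该式的用法,并非本文试验数据):若在某一光照下正确识别90个果萼、误识别10个,则正确识别率P=90/(90+10)×100%=90%。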
正确识别的果萼数占果实数的比例随着猕猴桃簇包含的果实数量增多而降低,2果簇和3果簇的比例达到了100%,4果簇、5果簇和5果以上簇分别为97.9%、94.3%和78.9%。误识别的果萼数占果实数的比例随着猕猴桃簇包含的果实数量增多而上升,最高达到了31.6%(5果以上簇)。与Fu等[25]的研究结果相比,3果簇、4果簇和5果簇的正确识别率均有显著提升(Fu等[25]未对5果以上的簇进行研究),总的正确识别率由88.3%提高到了94.3%。
果萼未正确识别的情况主要出现在4果及以上的簇中。当簇中存在某一个果实与3个或更多果实相接触时,相互之间挤压严重,使得部分果实并非竖直向下,而是有所倾斜,导致果萼部分不在果实图像的中心区域,如图5所示。因此,在图3c所示的果萼区域形态学运算中,易将该部分处理为非封闭区域,引起未识别。
表1 光照为50 lux下不同果簇的果萼识别结果
a. 原始图像 a. Original image  b. 果萼判别图像 b. Image for detecting fruit calyxes  c. 果实识别结果 c. Fruit recognition results
1.未识别果萼 2.误识别果萼
1. Undetected fruit calyx 2. Wrongly detected fruit calyx
图5 未识别和误识别的果实示例
Fig.5 Example of undetected and wrongly detected fruits
此外,3个及以上的果实相互接触时,会在接触的三角区形成暗色封闭区域,且面积与果萼部分相当。在图3c所示的果萼区域识别过程中,该区域会被误识别为果萼,如图5b所示。这也是果萼的误识别率随着猕猴桃簇包含的果实数量增多而升高的主要原因。但是,由于外侧果实的周围环境相对简单,一般都能被正确识别。因此,在实际的采摘过程中,对于5果以上的猕猴桃簇,可以先采摘外侧的果实后,再次成像并识别,有望降低未识别率和误识别率,从而提高识别率。
2.2 不同光照下的识别效果
为了验证算法对光照的鲁棒性,测试了所有果实在12种不同光照下的识别效果,结果如图6所示。当光照较低时(10和30 lux),由于图像较暗导致果萼和果实区域对比不是非常明显,如图4a和图4b所示,正确识别率有所降低,分别为91.4%和93.6%。当光照度能保证清晰成像时,识别率比较稳定,从50~400 lux,正确识别率都是最高的94.3%。当光照度增大到500 lux后,识别率开始减小。原因是光照强时,可能使得部分区域发生过曝,导致果萼部分的亮度增大,影响了图像分割,果萼部分区域过小而被作为噪声去除,如图7所示。因此,在实际应用中,光照过低或过高都会影响识别效果,需使光照度维持在50至400 lux之间。从能源节约的角度出发,在保证识别率的前提下,使光照维持在50 lux比较合理。
2.3 果实识别速度
本研究的另一个目的是提高识别速度,尽可能贴近实际应用的需求。采用同一台笔记本电脑(ThinkPad T400,2.53 GHz),在Matlab 7.10.0(R2010a)的编程环境下,分别测试了本文算法和Fu等[25]算法的图像预处理时间和果实识别时间,结果如表2所示。
a. 原始图像 a. Original image  b. 果萼判别图像 b. Image for detecting fruit calyxes  c. 果实识别结果 c. Fruit recognition results
表2 本文算法与参考算法的果实识别速度对比
根据每幅图像处理所需时间,以平均每簇包含4个果实为依据,计算每个果实从图像获取后至正确识别的平均时间。由于采用相同的图像预处理方法,所以该部分的时间相同,都是0.83 s/幅。但在图像识别算法上,本文算法有了很大提升,平均只需1.16 s即可识别一幅图像中的猕猴桃,为对比算法所需时间的20.2%。总体而言,本文算法平均每个果实的识别时间达到了0.50 s/个,获得了3倍左右的提升,也更接近采摘机器人研究中苹果(0.35 s)[26]和草莓(0.44 s)[21]的识别水平。此外,在实际应用中,将使用执行效率更高的C++编写代码,并采用OpenCV等计算机视觉库构造算法,在速度上可能还会有所提升[30]。
3 结论与讨论
1)测试了猕猴桃夜间图像的机器识别效果,为完善猕猴桃采摘机器人、使其具有夜间采摘能力,进而提高工作效率和环境适应能力进行了有益探索。
2)证实了利用猕猴桃果萼进行果实识别的可行性,50~400 lux下的正确识别率达94.3%。未识别和误识别的果实一般出现在5果及5果以上的簇中,原因是果实相互挤压导致果萼部分不在果实图像的中心区域,以及果实之间的三角区形成暗色封闭区域。在实际的采摘过程中,对于5果以上的猕猴桃簇,可以先采摘外侧的果实后,再次成像并识别,有望降低未识别率和误识别率,从而提高识别率。
3)该算法对光照有较好的鲁棒性,从10至1 200 lux,都能取得91.4%以上的识别率。从正确率和节约能源的角度出发,使光照维持在50 lux比较合理。在后期的实际采摘系统设计和研究中,应将光源、末端执行器、视觉系统进行综合考虑,合理分布。
4)本文算法在果实识别速度上有了很大提升,达到了平均0.50 s识别一个果实。
为了减小定位误差对采摘成功率的影响,本项目中研发的末端执行器采用的仿形设计具有一定的误差允许范围。但电子图像传感器的物距过近可能造成图像畸变以及光线遮挡等问题,后期研究中将测试视场较小的镜头,减小定位误差。本文算法的前提是根据猕猴桃的栽培方式,从底部拍摄图像,与常规的侧面成像有所不同。该方法能否在日间使用,还需进一步研究和试验验证。
[1] 孙兆军. 陕西水果面积和产量实现“十二连增”[J]. 中国果业信息,2013,30(1):44-45.
[2] 张计育,莫正海,黄胜男,等. 21世纪以来世界猕猴桃产业发展以及中国猕猴桃贸易与国际竞争力分析[J]. 中国农学通报,2014,30(23):48-55.
Zhang Jiyu, Mo Zhenghai, Huang Shengnan, et al. Development of kiwifruit industry in the world and analysis of trade and international competitiveness in China entering 21st century[J]. China Agricultural Science Bulletin, 2014, 30(23): 48-55. (in Chinese with English abstract)
[3] 陈军,王虎,蒋浩然,等. 猕猴桃采摘机器人末端执行器设计[J]. 农业机械学报,2012,43(10):151-154.
Chen Jun, Wang Hu, Jiang Haoran, et al. Design of end-effector for kiwifruit harvesting robot[J]. Transactions of the Chinese Society for Agricultural Machinery, 2012, 43(10): 151-154. (in Chinese with English abstract)
[4] 毕昆,赵馨,侯瑞锋,等. 机器人技术在农业中的应用方向和发展趋势[J]. 中国农学通报,2011,27(4):469-473.
Bi Kun, Zhao Xin, Hou Ruifeng, et al. The trend of application and development of robot technology in agriculture[J]. China Agricultural Science Bulletin, 2011, 27(4): 469-473. (in Chinese with English abstract)
[5] 徐丽明,张铁中. 果蔬果实收获机器人的研究现状及关键问题和对策[J]. 农业工程学报,2004,20(5):38-42.
Xu Liming, Zhang Tiezhong. Present situation of fruit and vegetable harvesting robot and its key problems and measures in application[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2004, 20(5): 38-42. (in Chinese with English abstract)
[6] 赵匀,武传宇,胡旭东,等. 农业机器人的研究进展及存在的问题[J]. 农业工程学报,2003,19(1):20-24.
Zhao Yun, Wu Chuanyu, Hu Xudong, et al. Research progress and problems of agricultural robot[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2003, 19(1): 20-24. (in Chinese with English abstract)
[7] Bechar A, Vigneault C. Agricultural robots for field operations: concepts and components[J]. Biosystems Engineering, 2016, 149: 94-111.
[8] 王景红,梁轶,柏秦凤,等. 陕西猕猴桃高温干旱灾害风险区划研究[J]. 中国农学通报,2013,29(7):105-110.
Wang Jinghong, Liang Yi, Bai Qinfeng, et al. Study on risk zoning of high temperature and drought disaster for kiwifruit in Shaanxi[J]. China Agricultural Science Bulletin, 2013, 29(7): 105-110. (in Chinese with English abstract)
[9] 傅隆生,张发年,槐岛芳德,等. 猕猴桃采摘机器人末端执行器设计与试验[J]. 农业机械学报,2015,46(3):1-8.
Fu Longsheng, Zhang Fanian, Gejima Yoshinori, et al. Development and experiment of end-effector for kiwifruit harvesting robot[J]. Transactions of the Chinese Society for Agricultural Machinery, 2015, 46(3): 1-8. (in Chinese with English abstract)
[10] 马翠花,张学平,李育涛,等. 基于显著性检测与改进Hough变换方法识别未成熟番茄[J]. 农业工程学报,2016,32(14):219-226.
Ma Cuihua, Zhang Xueping, Li Yutao, et al. Identification of immature tomatoes based on salient region detection and improved Hough transform method[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(14): 219-226. (in Chinese with English abstract)
[11] 王粮局,张立博,段运红,等. 基于视觉伺服的草莓采摘机器人果实定位方法[J]. 农业工程学报,2015,31(22):25-31.
Wang Liangju, Zhang Libo, Duan Yunhong, et al. Fruit localization for strawberry harvesting robot based on visual servoing[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(22): 25-31. (in Chinese with English abstract)
[12] 贾伟宽,赵德安,刘晓洋,等. 机器人采摘苹果果实的K-means和GA-RBF-LMS神经网络识别[J]. 农业工程学报,2015,31(18):175-183.
Jia Weikuan, Zhao De’an, Liu Xiaoyang, et al. Apple recognition based on K-means and GA-RBF-LMS neural network applied in harvesting robot[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(18): 175-183. (in Chinese with English abstract)
[13] 项荣,应义斌,蒋焕煜,等. 基于双目立体视觉的番茄定位[J]. 农业工程学报,2012,28(5):161-167.
Xiang Rong, Ying Yibin, Jiang Huanyu, et al. Localization of tomatoes based on binocular stereo vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2012, 28(5): 161-167. (in Chinese with English abstract)
[14] Hannan M, Burks T, Bulanon D M. A machine vision algorithm combining adaptive segmentation and shape analysis for orange fruit detection[J]. Agricultural Engineering International: CIGR Journal, 2009, 11(1): 1281.
[15] Bulanon D M, Burks T F, Alchanatis V. Fruit visibility analysis for robotic citrus harvesting[J]. Transactions of the ASABE, 2009, 52(1): 277-283.
[16] 丁亚兰,耿楠,周全程. 基于图像的猕猴桃果实目标提取研究[J]. 微计算机信息,2009,18(4):294-295.
Ding Yalan, Geng Nan, Zhou Quancheng. Research on the object extraction of kiwifruit based on images[J]. Microcomputer Information, 2009, 18(4): 294-295. (in Chinese with English abstract)
[17] 崔永杰,苏帅,吕志海,等. 基于 Hough 变换的猕猴桃毗邻果实的分离方法[J]. 农机化研究,2012,34(12):166-169.
Cui Yongjie, Su Shuai, Lü Zhihai, et al. A method for separation of kiwifruit adjacent fruits based on Hough transformation[J]. Journal of Agricultural Mechanization Research, 2012, 34(12): 166-169. (in Chinese with English abstract)
[18] 武涛,袁池,陈军. 基于机器视觉的猕猴桃果实目标提取研究[J]. 农机化研究,2012,34(12):21-26.
Wu Tao, Yuan Chi, Chen Jun. Research on the object extraction of kiwifruit based on machine vision[J]. Journal of Agricultural Mechanization Research, 2012, 34(12): 21-26. (in Chinese with English abstract)
[19] 慕军营,陈军,孙高杰,等. 基于机器视觉的猕猴桃特征参数提取[J]. 农机化研究,2014,36(6):138-142.
Mu Junying, Chen Jun, Sun Gaojie, et al. Characteristic parameters extraction of kiwifruit based on machine vision[J]. Journal of Agricultural Mechanization Research, 2014, 36(6): 138-142.(in Chinese with English abstract)
[20] 崔永杰,苏帅,王霞霞,等. 基于机器视觉的自然环境中猕猴桃识别与特征提取[J]. 农业机械学报,2013,44(5):247-252.
Cui Yongjie, Su Shuai, Wang Xiaxia, et al. Recognition and feature extraction of kiwifruit in natural environment based on machine vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2013, 44(5): 247-252. (in Chinese with English abstract)
[21] Hayashi S, Shigematsu K, Yamamoto S, et al. Evaluation of a strawberry-harvesting robot in a field test[J]. Biosyst. Eng., 2010, 105(2): 160-171.
[22] Hayashi S, Takahashi K, Yamamoto S, et al. Gentle handling of strawberries using a suction device[J]. Biosystems Engineering, 2011, 109(4): 348-356.
[23] Scarfe A J, Flemmer R C, Bakker H, et al. Development of an autonomous kiwifruit picking robot[C]//4th International Conference on Autonomous Robots and Agents. Wellington, New Zealand: IEEE, 2009: 380-384.
[24] 赵德安,刘晓洋,陈玉,等. 苹果采摘机器人夜间识别方法[J]. 农业机械学报,2015,46(3):15-22.
Zhao De’an, Liu Xiaoyang, Chen Yu, et al. Image recognition at night for apple picking robot[J]. Transactions of the Chinese Society for Agricultural Machinery, 2015, 46(3): 15-22. (in Chinese with English abstract)
[25] Fu L, Wang B, Cui Y, et al. Kiwifruit recognition at nighttime using artificial lighting based on machine vision[J]. International Journal of Agricultural and Biological Engineering, 2015, 8(4): 52-59.
[26] Ji W, Zhao D, Cheng F, et al. Automatic recognition vision system guided for apple harvesting robot[J]. Computers & Electrical Engineering, 2012, 38(5): 1186-1195.
[27] Huang H, Ferguson A R. Review: Kiwifruit in China[J]. New Zealand Journal of Crop and Horticultural Science, 2001, 29(1): 1-14.
[28] Huang H, Ferguson A R. Kiwifruit (Actinidia chinesis and A. deliciosa) plantings and production in China, 2002[J]. New Zealand Journal of Crop and Horticultural Science, 2003, 31(3): 197-202.
[29] Otsu N. A threshold selection method from gray-level histograms[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62-66.
[30] Matuska S, Hudec R, Benco M. The comparison of CPU time consumption for image processing algorithm in Matlab and OpenCV[C]//9th International Conference ELEKTRO. Rajecke Teplice, Slovakia, 2012: 75-78.
Kiwifruit recognition method at night based on fruit calyx image
Fu Longsheng1, Sun Shipeng1, Vázquez-Arellano Manuel2, Li Shifeng1, Li Rui1, Cui Yongjie1※
(1. College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China; 2. Institute of Agricultural Engineering, University of Hohenheim, Stuttgart 70599, Germany)
Abstract: China is the largest country for cultivating kiwifruit, and Shaanxi Province provides the largest production, which accounts for approximately 70% of the nationwide production and 33% of the global production. Harvesting kiwifruit in this region relies mainly on manual picking, which is labor-intensive. Therefore, the introduction of robotic harvesting is highly desirable and suitable. Most research on kiwifruit harvesting robots so far assumes daytime harvesting in order to take advantage of sunlight. Picking at night can overcome the problem of low work efficiency and will help to minimize fruit damage. In addition, artificial lights can be used to ensure constant illumination for image acquisition instead of the variable natural sunlight. The object of this study was a kiwifruit recognition system at night using artificial lighting, which identifies the fruit calyx. According to the growth characteristics of kiwifruit, which is grown on sturdy support structures, an RGB (red, green, blue) camera was placed underneath the canopy so that kiwifruit clusters could be included in the images. An image processing algorithm was developed to recognize kiwifruits by identifying the fruit calyx. Firstly, a 1.1R-G gray image was extracted, and then segmentation was performed using the Otsu method for thresholding. A morphological operation was applied to remove the noise adhering to the target fruits (such as branches). Afterwards, an area thresholding method was employed to eliminate the remaining noise. This method is based on finding the biggest area of neighboring white pixels in the image and eliminating all areas smaller than 1/5 of the biggest area. Using this image as the mask, a fruit image without background was obtained. After that, the V (value) component of the HSV (hue, saturation, value) color model was calculated for segmenting the fruit calyx from the fruit, also using the Otsu method for thresholding. Black areas were then labeled and sorted by their pixel numbers. The largest black area was the image background, and the second largest black area was a fruit calyx area that was used as the reference area. Since the fruit calyx areas vary within a small range in one image, the fruit calyx areas were judged by comparison with the reference area: if a black area was smaller than the reference area and larger than 1/10 of the reference area, it was a fruit calyx; otherwise, it was not. Then the nearest edge pixel to each fruit calyx was searched, the distance was calculated and used as the radius, and a circle around the fruit calyx was drawn. Finally, the algorithm was also tested for robustness under 12 different illuminations (10, 30, 50, 80, 110, 150, 200, 300, 400, 500, 800 and 1 200 lux). The fruit illumination was estimated by averaging the illumination values measured 3 times at 3 different locations around the target fruit cluster. Results showed that the image processing algorithm based on the calyx could recognize kiwifruits with a success rate of 94.3%. Undetected and wrongly detected fruits appeared mostly in clusters where one fruit was adjacent to 3 or more fruits. The calyxes of those fruits were sometimes not in the centers of their fruit images, thus causing undetected fruits. Those fruits also formed dark areas among them, which were wrongly recognized as calyxes.
On the other hand, most clusters were linearly arranged on the branches, which made them suitable for the proposed algorithm. The algorithm was robust to different illuminations, although the success rate decreased slightly under extremely weak or strong illumination. It took only 0.5 s on average to recognize a fruit, which is a great step toward field robotic harvesting of kiwifruit.
Keywords: robots; image recognition; crops; kiwifruit; fruit calyx; night recognition; adjacent fruits
doi:10.11975/j.issn.1002-6819.2017.02.027
中图分类号:TP391.41    文献标志码:A    文章编号:1002-6819(2017)-02-0199-06
收稿日期:2016-08-23    修订日期:2016-11-22
国家自然科学基金资助项目(61175099);陕西省资助国外引进人才经费(Z111021303);西北农林科技大学国际科技合作种子基金(A213021505)。
傅隆生,男,江西吉安人,副教授,博士,主要从事农业智能化技术与装备研究。杨凌 西北农林科技大学机械与电子工程学院,712100。Email:fulsh@nwafu.edu.cn。中国农业工程学会会员:傅隆生(E042600025M)。
崔永杰,男,吉林图们人,副教授,博士生导师,博士,主要从事果蔬生产自动化研究。杨凌 西北农林科技大学机械与电子工程学院,712100。Email:cuiyongjie@nwafu.edu.cn。
傅隆生,孙世鹏,Vázquez-Arellano Manuel,李石峰,李 瑞,崔永杰. 基于果萼图像的猕猴桃果实夜间识别方法[J]. 农业工程学报,2017,33(2):199-204. doi:10.11975/j.issn.1002-6819.2017.02.027 http://www.tcsae.org
Fu Longsheng, Sun Shipeng, Vázquez-Arellano Manuel, Li Shifeng, Li Rui, Cui Yongjie. Kiwifruit recognition method at night based on fruit calyx image[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(2): 199-204. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2017.02.027 http://www.tcsae.org