2009 5th International Colloquium on Signal Processing & Its Applications (CSPA), Kuala Lumpur, Malaysia, 6-8 March 2009



Designing a Hybrid Sensor System for a Housekeeping Robot
Hema C.R., Sim Kwoh Fung, Poo Tarn Shi

School of Mechatronic Engineering
Universiti Malaysia Perlis, Jejawi, Perlis, Malaysia

[email protected]

Abstract- Housekeeping robots are service robots specially designed to perform housekeeping tasks such as cleaning and vacuuming. Our research focuses on the design of a housekeeping robot that picks up waste objects in a home or office environment. In this paper a hybrid sensor system for the housekeeping robot is proposed, using vision and ultrasonic sensors to navigate around obstacles and to pick up objects. To pick up an object, ascertaining its exact location is of prime importance; this can be accomplished by computing the 3D coordinates of the object. The navigation task for the robot involves the detection of obstacles or objects in the traversable path. Images of objects and obstacles are captured using vision sensors, segmented from the background and processed to extract features, which are fed to a neural network to recognize and differentiate between obstacles and objects. A recognition accuracy of 100% with an error tolerance of 0.001 is achieved. The centroid of the segmented object is computed to give the x, y and z coordinates of the object location.

I. INTRODUCTION

Commercial service robots are designed for vacuuming and lawn mowing. The capabilities of a mobile service robot require more sensors for navigation and task performance in an unknown environment. Most service robots rely on only a single input sensor to interact with their surroundings, limiting their range of capabilities; their task performance will also be less convincing. The tasks of service robots include office automation, lawn mowing, assistance to the handicapped and elderly, and domestic services [1]. Economic and industrial experts expect that service robots can create a new and huge market in the near future. The South Korean government selected intelligent robots, including service robots, as one of ten key future technologies in 2003. The home is an important space where service robots can be widely employed in the near future. There are factors encouraging the use of robots at home in many countries, e.g., the increase of household income, the growing elderly population, the rapid increase of labour costs, the decrease in family size, and the decrease of time available for housework. A number of companies have already introduced robots designed for working at home. Examples include PaPeRo ('Partner-type Personal Robot') from NEC, MARON from Fujitsu, iRobot-LE from iRobot, Spy-Cye from Cye, and BN-7 from Bandai. Many home robots are internet-based.

Sensor signal processing and computation for control are done by a computer separated from the robot but connected via wireless communication. In this way, home robots need not carry a high-performance computer, and their sales price can be minimized while the required working efficiency is maintained. This is an important difference between home service robots and cleaning robots, because cleaning robots mostly work under a simple control programme running on an on-board processor, although they can be classified as one kind of home robot. Our goal is to design a novel hybrid sensor system which enables a housekeeping robot to navigate and perform tasks in an unknown environment. A home service robot has to confront uncertainties. Causes of these uncertainties include people moving around, objects brought to different positions, and changing conditions. A home robot thus needs high flexibility and intelligence. A vision sensor is particularly important in such working conditions because it provides rich information on the surrounding space and on people interacting with the robot. Conventional video cameras, however, have limited fields of view (FOV). Thus, a mobile robot with a conventional camera must look around continuously to see its whole surroundings [2].

The proposed hybrid sensor system combines the performance of two sensors, namely a vision sensor and an ultrasonic sensor. The vision sensor is used to recognize objects and obstacles in front of the robot. The ultrasonic sensor helps to avoid obstacles around the robot and to estimate the distance of a detected object. The output of the proposed sensor system aids the mobile robot, with its gripper system, in navigating and in picking and placing waste objects.

II. HYBRID SENSOR SYSTEM

A. Ultrasonic Sensors
The proposed ultrasonic system uses five pairs of digital sensors. Each sensor detects objects and is controlled by a host microcontroller to measure the distance of the object [3]. The ultrasonic system is designed to detect obstacles and objects and to provide distance information to the gripper system. The robot has five ultrasonic sensors, which cover the front and side fields of the mobile robot. Two sets of ultrasonic sensors are located at the bottom of the front side of the robot to detect small objects; this is also a good position to detect obstacles. The ultrasonic sensor at the top measures the height


978-1-4244-4152-5/09/$25.00 ©2009 IEEE

from the sensor to the object; this information is important to the gripper system. One ultrasonic sensor is placed on each of the left and right sides of the robot; these sensors help the robot decide which side it should turn to. The maximum detection range of this ultrasonic sensor is 3 m and the minimum detection range is 3 cm. Due to uneven propagation of the transmitted wave, the sensor is unable to detect objects under certain conditions [4]; see Figure 1. Besides, the size of the object being detected affects the maximum range of the ultrasonic sensors. The sensor must detect a certain level of sound to activate its output. A large object reflects most of the sound back to the ultrasonic sensor, so the sensor can detect the object at its maximum sensing distance. A small object reflects a much smaller portion of the sound, resulting in a significant reduction in sensing range [4]; small objects are therefore difficult to detect. The orientation of the object is also important: whenever the surface of the object is at 45 degrees to the sensor, the transmitted wave is totally reflected in another direction (Figure 2). In such a condition, the object is said to be hidden; two or more ultrasonic sensors are needed to overcome this problem. In addition, objects that absorb sound or have a soft or irregular surface, such as a stuffed animal, may not reflect enough sound to be detected accurately. In this analysis, irregular circular objects were chosen for height estimation; therefore the wave is not reflected from the top of the surface. This contributes a small error, which is taken into account by the gripper system.
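The range computation behind such a sensor is simple time-of-flight arithmetic. The sketch below is illustrative only: the 3 cm to 3 m window is from this section, while the speed-of-sound constant and the function names are our assumptions, not the robot's firmware.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def echo_to_distance(echo_time_s: float) -> float:
    """Convert a round-trip ultrasonic echo time to a one-way distance in metres."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def in_detectable_range(distance_m: float,
                        min_range_m: float = 0.03,
                        max_range_m: float = 3.0) -> bool:
    """Check a reading against the sensor's usable window (3 cm to 3 m here)."""
    return min_range_m <= distance_m <= max_range_m
```

A 2 ms echo thus corresponds to about 34 cm, comfortably inside the detectable window.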

Fig. 1. Uneven propagation of ultrasound wave.

Fig. 2. Ineffective angle for object detection.

B. Vision Sensors
For the robot to see an object, a distinguishable region is required, one that generically has a perimeter, an area, a centre point and distinguishable edges. Although the ultrasonic sensor has gained popularity in the robotics research community [3-4], it has fundamental drawbacks that limit its usefulness. One drawback is the inability of ultrasonic sensors to measure the width (Y-coordinate) of an object, or its true location within the sensor's detectable zone. One alternative is to apply more ultrasonic sensors, but too many of them may cause crosstalk [3] and become expensive when the robot is large. A digital camera, on the other hand, has been widely used in many applications such as object localization, visual servoing, recognition and object representation [5]. A single digital camera can therefore overcome the drawbacks of the ultrasonic sensor. In our approach, we integrate both ultrasonic and vision sensors to perform object localization, while the vision sensor alone performs visual servoing and object recognition.

C. Visual Servoing
Visual servoing refers to the use of computer vision to control the motion of a robot. We apply this concept to locate an object that falls outside the gripper range after it has been detected by the ultrasonic sensor. The object is detected at a range of 25 cm (X coordinate) from the robot. Because the ultrasonic sensor cannot locate the exact Y-position of the object, visual servoing is used to move the robot base so that the object falls into the graspable zone of the gripper. In the imaging window, vertical pixel tracking is performed at four fixed pixel locations: two are used to track an object that falls on the left-hand side of the image window, and two track an object that falls on the right-hand side. Two fixed pixel locations per side are used to minimize the chance of losing track due to the uncertain size of the object. The vertical pixel tracking will

trace the maximum (i_max, j_max) and minimum (i_min, j_min) pixel positions of the object. Hence

(i_centre, j_centre) = ((i_max + i_min)/2, (j_max + j_min)/2)    (1)

From the centre pixel value of the object, we can calculate the required angle (Figure 3) for servoing using a geometric method.

θ = tan⁻¹(j_centre / i_centre)    (2)

p = x tan θ    (3)

φ = p / y    (4)

where x and y are constants.
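Equations (1)-(3) can be checked with a few lines of code. This is a sketch under stated assumptions: x = 0.25 m matches the 25 cm detection range given above, and the function names are ours.

```python
import math

def centre_pixel(i_min, j_min, i_max, j_max):
    """Centre of the tracked object from the min/max pixel positions (Eq. 1)."""
    return ((i_max + i_min) / 2.0, (j_max + j_min) / 2.0)

def servo_offset(i_centre, j_centre, x=0.25):
    """Servoing angle theta (Eq. 2) and lateral offset p (Eq. 3)
    for a camera looking at an object x metres ahead."""
    theta = math.atan2(j_centre, i_centre)
    p = x * math.tan(theta)
    return theta, p
```

For an object centred at pixel (75, 75), theta is 45 degrees and p equals x, i.e. 0.25 m.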


Fig. 3. Geometrical drawing when the object falls on the left-hand side of the imaging window.

D. Object Recognition
The robot has a USB digital camera installed at its front, fitted 17 cm from the ground. Images of objects and obstacles are captured and resized to 150 × 150 pixels to minimize memory usage and processing time. The resized images are further processed to segment the object and suppress the background. Figure 4 shows the image processing technique employed for segmenting the object. The segmented images are then processed to detect edges using a Canny operator. Singular value features are extracted from the edge image. Singular value decomposition is a useful tool for saving storage space, and it also gives the algebraic features of an image [6, 7].
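The feature-extraction step can be sketched as follows. This minimal version uses only thresholding and SVD, omitting the morphological and Canny steps of Figure 4; the threshold value and function names are our assumptions.

```python
import numpy as np

def segment(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a grayscale image in [0, 1]: object pixels -> 1, background -> 0."""
    return (image > threshold).astype(float)

def svd_features(image: np.ndarray, n: int = 50) -> np.ndarray:
    """First n singular values of the 150 x 150 image, used as a
    compact feature vector for the neural network."""
    s = np.linalg.svd(image, compute_uv=False)  # returned in descending order
    return s[:n]

img = np.random.default_rng(0).random((150, 150))  # stand-in for a captured frame
feats = svd_features(segment(img))                 # 50-element feature vector
```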

E. Classification
Neural networks are used for classification in this paper. A simple feed-forward neural network architecture is proposed [8]. The network has 50 input neurons, a hidden layer whose size was chosen experimentally as 4 neurons, and an output layer of 2 neurons. The network is trained using the back-propagation learning algorithm [9].
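A minimal sketch of such a 50-4-2 network with plain back-propagation. The sigmoid activations and squared-error loss are our assumptions; the learning rate of 0.0001 matches Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 0.1, (50, 4))  # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (4, 2))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass: returns hidden activations and the 2 output activations."""
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def train_step(x, target, lr=0.0001):
    """One back-propagation step on a single sample; returns the squared error."""
    global W1, W2
    h, y = forward(x)
    d2 = (y - target) * y * (1 - y)   # output-layer delta (sigmoid + squared error)
    d1 = (d2 @ W2.T) * h * (1 - h)    # hidden-layer delta
    W2 -= lr * np.outer(h, d2)
    W1 -= lr * np.outer(x, d1)
    return float(np.mean((y - target) ** 2))
```

Repeated calls to train_step on a labelled feature vector drive the error down, mirroring the 13-epoch training reported below.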

Fig. 4. Image segmentation for the object and obstacle: (a) original image, (b) thresholding, (c) morphologically opened binary image, (d) morphological closing with a flat disk structuring element.

F. Object Localization
Once an object or obstacle has been identified by the neural network, the system computes the centroid of the object to determine its x, y co-ordinate position. The ultrasonic sensor estimates the z co-ordinate, as shown in Figure 5.
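The x, y computation is a standard centroid over the segmented binary mask; a sketch (variable and function names are ours):

```python
import numpy as np

def centroid(mask: np.ndarray) -> tuple:
    """Centroid (x, y) of a binary object mask, in pixel coordinates."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

mask = np.zeros((150, 150))
mask[40:60, 70:90] = 1.0  # a 20 x 20 object
cx, cy = centroid(mask)   # (79.5, 49.5)
```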

Fig. 5. Setup for the experiment to measure the z co-ordinate.

III. EXPERIMENTAL RESULT AND DISCUSSION

A. Object Recognition
A total of 31 input images were used for training the neural network: 20 images of objects and 11 images of obstacles. In the testing phase, the back-propagation network was tested with all 31 inputs containing the object and obstacle feature data. The proposed network was able to successfully recognize all objects and obstacles. Table 1 shows the training parameters and the test results.

TABLE 1
TRAINING PARAMETERS AND RESULTS OF THE NEURAL NETWORK

Parameter               Value
no. of input neurons    50
no. of hidden neurons   4
no. of output neurons   2
learning rate           0.0001
training epochs         13
training time           6.5 seconds
success rate            100 percent

B. Object Distance Estimation
Once the object is recognized by the sensor system, the ultrasonic sensor on the front panel estimates the distance of the object. Initially, the bottom ultrasonic sensor measures the x distance and stores it in a variable. Finally, only the x and z information is sent to the gripper for the grasping process. The z distance estimation has an accuracy of 98%.
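The hand-off to the gripper can be sketched as a small record holding the two measurements; the type and field names here are illustrative assumptions, not the robot's actual firmware interface.

```python
from dataclasses import dataclass

@dataclass
class GripTarget:
    x_cm: float  # forward distance from the bottom ultrasonic sensor
    z_cm: float  # object height from the top ultrasonic sensor

def make_target(x_cm: float, z_cm: float) -> GripTarget:
    """Bundle the x and z readings that are sent to the gripper."""
    return GripTarget(x_cm=x_cm, z_cm=z_cm)
```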

IV. CONCLUSION AND FUTURE WORK

In this paper, we have described a hybrid sensor system for a housekeeping robot. Camera images are analyzed for obstacle and object detection. Ultrasonic sensors are used to detect objects and to measure the z co-ordinate of the object location, while the x, y co-ordinates are provided by the vision sensor. This information can be used by the gripper system to effectively pick and place objects.

REFERENCES

[1]. M. Asada, and H. I. Christensen, “Robotics in the home, office, and playing field,” Proc. IJCAI, pp. 1385-1392, 1999.

[2]. J. You, et al., “Development of a home service robot ‘ISSAC’,” Proc. IEEE/RSJ IROS, pp. 2630-2635, 2003.

[3]. Zou Yi, Ho Yeong Khing, Chua Chin Seng, and Zhou Xiao Wei, "Multi-ultrasonic sensor fusion for autonomous mobile robots", School of Electrical and Electronic Engineering, Nanyang Technological University.

[4]. Shraga Shoval and Johann Borenstein, "Using coded signals to benefit from ultrasonic sensor crosstalk in mobile robot obstacle avoidance", 2001 IEEE International Conference on Robotics and Automation, Seoul, Korea, May 21-26, pp. 2879-2884.

[5]. Don Murray and Jim Little, "Using real-time stereo vision for mobile robot navigation", available online: http://www.cs.ubc.ca/~donm/pubs/wpma.pdf

[6]. Hema C.R., Paulraj M.P., Nagarajan R., Sazali Yaacob, "Segmentation and Location Computation of Bin Objects", International Journal of Advanced Robotic Systems, Vol. 4, No. 1, pp. 57-62, 2007.

[7]. Zi-Quan Hong, "Algebraic Feature Extraction of Image for Recognition", Pattern Recognition, Vol. 24, No. 3, pp. 211-219, 1991.

[8]. Hema C.R., Paulraj M.P., Nagarajan R., Sazali Yaacob “Object Localization using Stereo Sensors for Adept SCARA Robot” Proc. of Intl Conf. on Robotics, Automation and Mechatronics, IEEE, pp.1-5, 2006

[9]. S.N. Sivanandam, M. Paulraj, Introduction to Artificial Neural Networks, Vikas Publishing House, India, 2003.
