An Image Guided Robotic Feeding System for Disabled People
##TLDR
I made a prototype system consisting of a robotic arm, guided by a stereo camera and a computer vision system, that allows people without the use of their hands or arms to feed themselves. Two videos of the system in operation are shown below.
##About
All final year students in engineering schools in South Africa are required by ECSA (the Engineering Council of South Africa) to complete a large individual project. Electrical, Electronic and Computer Engineering students at the University of Pretoria (Tuks) complete a year-long project in their final year.
My project was an “Image Guided Robotic Feeding System for Disabled People”. The main idea behind the project was to enable people without the use of their arms to more easily feed themselves.
The requirement of the project was for a prototype robotic system to enable this. Two videos of the finished system are shown below.
The image below shows a functional block diagram of the system.
Functional Unit 1 consists of a computer vision system to find the location, in 3D space, of the user’s mouth. Functional Unit 2 is a user interface built on Mac OS X, using its accessibility features to allow disabled users to interact with the system. Functional Unit 3 is a controller for the robotic arm implemented on an Arduino Due. Functional Unit 4 is a 4 DOF (degrees of freedom) robotic arm.
##Computer Vision System
The computer vision system consists of a stereo camera built with two off-the-shelf webcams and an algorithm to find the position of the user’s mouth in 3D space. The theory of stereo vision is well described in the book “Learning OpenCV: Computer Vision with the OpenCV Library” by Bradski and Kaehler. Alternatively, a brief discussion can be found here. A flow chart of the overall algorithm is shown below.
The algorithm grabs the left and right frames and applies the corrections (undistortion and rectification) needed for triangulation to produce accurate results. The centre of the mouth is found in each frame and its position in 3D space is triangulated.
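For an ideal rectified stereo pair, the triangulation step reduces to a few lines of arithmetic. The sketch below is a stdlib-only illustration with hypothetical camera parameters; the real system derives these values from stereo calibration rather than hard-coding them.

```python
# Triangulation for an ideal rectified stereo pair. All numbers here are
# hypothetical; in practice f, baseline, cx and cy come from calibration.

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Return the (X, Y, Z) position of a point seen at column xl in the
    left image and column xr in the right image, on row y in both.

    f        -- focal length in pixels
    baseline -- distance between the two camera centres, in metres
    cx, cy   -- principal point (optical centre) in pixels
    """
    d = xl - xr                # disparity: larger means the point is closer
    Z = f * baseline / d       # depth from similar triangles
    X = (xl - cx) * Z / f      # back-project the pixel into 3D
    Y = (y - cy) * Z / f
    return X, Y, Z

# Example: a mouth centre at column 380 (left) and 340 (right), row 260,
# with an 800 px focal length and a 6 cm baseline.
X, Y, Z = triangulate(380, 340, 260, f=800, baseline=0.06, cx=320, cy=240)
```

With a 40-pixel disparity this places the mouth 1.2 m from the cameras; halving the disparity would double the depth, which is why calibration accuracy matters so much at longer ranges.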
To find the centre of the user’s mouth in individual frames the algorithm described in the flowchart below is used.
The user’s face, and then their mouth, are found using two Haar cascade classifiers (these are essentially the algorithm described by Viola and Jones, with some improvements).
A Gaussian blur is then applied to the image, and a binary threshold sets all pixels darker than a certain value to black and the rest to white. Contours are then found around all the black regions of the image, and the largest contour is assumed to surround the mouth. The centre of the mouth is then found by fitting a minimum-area rotated rectangle around the contour and taking the centre of that rectangle.
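The threshold-and-largest-region step can be illustrated with a stdlib-only sketch. The real system used OpenCV’s contour finding and rotated-rectangle fitting; here a simple flood-fill connected-component search stands in for the contour step, and the blob’s bounding-box centre stands in for the rotated-rectangle centre.

```python
from collections import deque

def mouth_centre(gray, thresh):
    """Return the centre (x, y) of the largest dark blob in a grayscale
    image given as a list of rows of pixel values (0-255). A stand-in for
    the OpenCV threshold / findContours / minAreaRect pipeline."""
    h, w = len(gray), len(gray[0])
    dark = [[gray[y][x] < thresh for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = []                                     # pixels of the largest blob
    for y0 in range(h):
        for x0 in range(w):
            if not dark[y0][x0] or seen[y0][x0]:
                continue
            blob, queue = [], deque([(x0, y0)])   # flood-fill one component
            seen[y0][x0] = True
            while queue:
                x, y = queue.popleft()
                blob.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and dark[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            if len(blob) > len(best):
                best = blob
    xs = [p[0] for p in best]
    ys = [p[1] for p in best]
    return (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
```

Taking the largest blob makes the step robust to small dark specks (nostril shadows, moles) surviving the threshold, which is exactly why the real pipeline keeps only the largest contour.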
##Robotic Arm Controller
The controller for the robotic arm is implemented on an Arduino Due with a custom shield. It has three functions:
- It controls the motion of the robotic arm’s servos.
- It calculates the joint angles and angular velocities required to move the end effector of the arm to the desired position safely.
- It implements a protocol to allow a computer to command the arm to move to a position at a desired speed.

The servos used in the robotic arm are Dynamixel MX28T and AX12 servos. These servos communicate over a half-duplex 5 V serial bus. The UART (Universal Asynchronous Receiver/Transmitter) on the Arduino is a 3.3 V full-duplex port, so a translator circuit was implemented on a custom shield. This consisted of a TXB0108 level shifter to translate between the 5 V and 3.3 V levels and the circuit shown in the following image.
The EnT line enables transmission to the servos when it is taken high and enables reception from the servos when it is taken low.
A library by Josue Savage was used, with modifications for the MX series servos, to send messages to the servos.
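For context, the wire format those servos speak is the well-documented Dynamixel protocol 1.0, which the library frames on the controller’s behalf. The sketch below shows that framing in stdlib-only Python; the servo ID and goal value are illustrative, and the register address is the documented goal-position address for protocol 1.0 servos.

```python
def dynamixel_packet(servo_id, instruction, params):
    """Frame a Dynamixel protocol 1.0 packet:
    0xFF 0xFF <id> <length> <instruction> <params...> <checksum>
    where length = len(params) + 2 and the checksum is the bitwise
    complement of the sum of everything after the two header bytes."""
    length = len(params) + 2
    body = [servo_id, length, instruction] + list(params)
    checksum = (~sum(body)) & 0xFF
    return bytes([0xFF, 0xFF] + body + [checksum])

WRITE_DATA = 0x03      # the protocol's "write data" instruction
GOAL_POSITION = 0x1E   # goal-position register address on protocol 1.0 servos

# Command servo 1 to move to position 512 (sent low byte first).
pkt = dynamixel_packet(1, WRITE_DATA, [GOAL_POSITION, 512 & 0xFF, 512 >> 8])
```

On the Arduino this byte string would be pushed out of the UART with the EnT line held high, then EnT dropped low to listen for the servo’s status packet.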
Calculating the required joint angles for a given position required the development of a simple IK (Inverse Kinematics) algorithm. For a 4 DOF arm the IK problem is easily solved with simple trigonometry. Society of Robots has a good explanation of IK in its robotic arm tutorial, as does Wikipedia.
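Once the base joint has rotated the arm into the vertical plane containing the target, the remaining problem is essentially a planar two-link one. The sketch below shows that standard law-of-cosines solution; the link lengths in the example are hypothetical, not the real arm's dimensions.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (radians) for a planar two-link arm whose tip reaches
    (x, y). Returns (shoulder, elbow), where the elbow angle is measured
    relative to the first link. Assumes the target is reachable; this is
    one of the two mirror-image solutions."""
    r2 = x * x + y * y
    # Law of cosines: r^2 = l1^2 + l2^2 + 2*l1*l2*cos(elbow)
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(cos_elbow)
    # Shoulder = angle to target minus the offset the bent elbow introduces.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

A quick sanity check is to run the result back through the forward kinematics (x = l1·cos θ1 + l2·cos(θ1 + θ2), likewise for y) and confirm the tip lands on the target.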
For this system a very naive motion planning algorithm was used. It assumes that the end effector will move in a straight line and calculates the joint speeds based on this assumption. This works relatively well for the needs of this system.
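That speed calculation can be sketched in a few lines: the move duration follows from the straight-line distance and the desired end-effector speed, and each joint then runs at the constant rate that completes its sweep in exactly that time, so all joints arrive together. The angles and speeds below are hypothetical.

```python
def joint_speeds(start_angles, goal_angles, cart_distance, end_speed):
    """Naive motion plan: assume the end effector travels in a straight
    line of length cart_distance at end_speed, so the move takes
    T = distance / speed seconds; return the constant angular speed
    (rad/s) each joint needs to cover its angle change in that time."""
    T = cart_distance / end_speed
    return [abs(g - s) / T for s, g in zip(start_angles, goal_angles)]

# Example: a 0.10 m move at 0.05 m/s takes 2 s.
speeds = joint_speeds([0.0, 0.5, 1.0], [0.4, 0.1, 1.0], 0.10, 0.05)
```

The straight-line assumption is only an approximation (the true tip path between two joint configurations is a curve), but for short, slow feeding motions the error is small, which is why this naive plan worked well enough here.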
##Robotic Arm
A 4 DOF arm was chosen because it has the minimum number of joints to enable the robot to scoop food from a bowl.
The first step in the design of the arm was to calculate the torque requirements for the actuators and then to choose appropriate actuators. By assuming that the mass of the arm’s segments was linearly distributed and that the actuators were point masses on their axes of rotation, an approximation of the torque requirements for the servos could be made. A free body diagram showing the forces acting on the arm in its fully extended pose is shown below:
Dynamixel AX12 servos were chosen for M0 and M3 and MX28T servos for M1 and M2. These satisfy the torque requirements and have features such as built in control systems with user adjustable parameters.
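The torque approximation described above can be written down directly: in the fully extended horizontal pose, each segment’s weight acts at its midpoint (linear mass distribution) and each distal servo acts as a point mass at its axis, so the holding torque at a joint is the sum of weight times lever arm. The masses and lengths below are hypothetical, not the real arm’s values.

```python
G = 9.81  # gravitational acceleration, m/s^2

def holding_torque(segments):
    """Static torque (N*m) about a joint for a fully extended horizontal
    arm. `segments` is a list of (length_m, segment_mass_kg,
    distal_servo_mass_kg) tuples ordered outward from the joint. Each
    segment's mass acts at its midpoint; each servo is a point mass at
    the far end of its segment."""
    torque, reach = 0.0, 0.0
    for length, seg_mass, servo_mass in segments:
        torque += seg_mass * G * (reach + length / 2)   # segment at midpoint
        torque += servo_mass * G * (reach + length)     # servo at far end
        reach += length
    return torque

# Two hypothetical 0.15 m segments of 0.05 kg each, with 0.072 kg servos
# at their far ends (roughly an AX12's mass).
t = holding_torque([(0.15, 0.05, 0.072), (0.15, 0.05, 0.072)])
```

The shoulder joint sees the largest value because every segment and servo beyond it contributes, which is why the higher-torque MX28T servos sit at the inner joints and the lighter AX12s at the base rotation and wrist.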
The arm was designed in Autodesk Inventor. The use of CAD allowed parts to be virtually test-fitted and helped to ensure that the assembly of the arm went without a problem. A render of the CAD assembly is shown below.
The servo brackets and some of the parts for the joints were 3D printed in nylon. The flat plates were laser cut from acrylic and the rest of the parts were standard hardware.
##Results
The system has a 91% success rate in putting food into the user’s mouth. It does not spill food, operates at a safe speed, and will not move fast enough to injure the user under normal conditions.