Inspiring, Machine learning, Raspberry Pi, Robotics

Raspberry Pi Spotter

 


What is it?

Raspberry Pi Spotter recognizes the features in a given image. It analyses images taken from a webcam connected to the Raspberry Pi and tells us what is in them. Please see the video of Jetpac in my previous post.

Why is it important?

The important thing about this software is that it does all of this image analysis offline. One might wonder how the Raspberry Pi, with so little CPU power, can do this. As I explained in the previous post, the Pi does it through its powerful GPU.

I thought it would be really cool if I could implement this on a simple robotic platform: a robot that goes around and keeps informing us about what it sees in its surroundings, just like in the video shown by Jetpac.

What did I do?

[Image: the browser interface described below]

So I constructed a basic robot from an RC car and connected a webcam to the Pi (the construction details and better pictures will follow in the next post). I then installed WebIOPi to get the video stream and implemented the browser interface shown in the picture above. Now the robot can be controlled from any device with a browser. I will upload a demonstration video soon.

How does it work?

The arrow buttons on the interface control the robot’s movement. The video is streamed just like in the cambot developed by Eric (trouch). The yellow button above the video stream invokes the Deep Belief software.

When we click on this Spotter button, WebIOPi picks up the latest image captured by the webcam and Deep Belief finds the features in that image. The results are then displayed in the text box above the Spotter button.
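To make the flow concrete, here is a minimal sketch of the kind of WebIOPi macro that could sit behind the Spotter button. The folder path, the macro name and the assumption that the deepbelief example binary accepts an image path as an argument are mine for illustration; the actual cambot.py on GitHub differs in its details.

import glob
import os
import subprocess

import webiopi

# Folder where motion drops its snapshots and where deepbelief lives
# (an assumption for this sketch; adjust to your own setup).
IMAGE_DIR = "/home/pi/DeepBeliefSDK/examples/SimpleLinux"

@webiopi.macro
def spot():
    """Classify the newest webcam snapshot and return the raw predictions."""
    images = glob.glob(os.path.join(IMAGE_DIR, "*.jpg"))
    if not images:
        return "no image found"
    latest = max(images, key=os.path.getmtime)
    # Assumes the SimpleLinux deepbelief example takes an image path as
    # its first argument; the text box simply shows whatever it prints.
    output = subprocess.check_output(["./deepbelief", latest], cwd=IMAGE_DIR)
    return output.decode("utf-8", "ignore")

On the browser side, the yellow button only has to call this macro, for example with WebIOPi’s JavaScript helper webiopi().callMacro("spot", [], callback), and write the returned text into the text box.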

How to install it?

In my previous post, I explained how to install the Deep Belief network on the Raspberry Pi. You can find the instructions to install WebIOPi here.

I have uploaded my code to GitHub. Note that the code is still very rough. It will work even if you do not have a robot; in that case the arrow buttons simply do nothing, but the video feed will still be analyzed as in the original Jetpac video.

First, download the files cambot.py and index.html into the folder where the Deep Belief software was installed previously. We also need to configure motion so that it streams its images directly into that folder, as sketched below. I will write detailed instructions with videos when I get more time.
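For orientation, the relevant part of motion’s configuration file (typically /etc/motion/motion.conf) looks something like the lines below. Treat them as an illustration rather than an exact recipe: option names vary a little between motion versions, and the directory is only an example.

# write webcam snapshots into the folder that holds cambot.py and deepbelief
target_dir /home/pi/DeepBeliefSDK/examples/SimpleLinux

# save a still image every second so the Spotter always has a fresh frame
snapshot_interval 1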

Comments:

The results are not that great. One reason might be that I am not looking at real-life objects. Anyway, I am not concerned about the quality of the results yet. Also, my coding skills are very limited; I hope people with better knowledge of Python, in particular of WebIOPi and Deep Belief, can make this work even better. One concern is that my Pi has only 256 MB of RAM, so it usually keeps freezing under load. A 512 MB Pi may give a smoother experience.

Credits:

I would like to thank Pete Warden (CTO of Jetpac) and Toshi Bass (from the WebIOPi Google group) for helping me throughout this project. I am also thankful to the folks at Stack Overflow for their help with my Python questions.

 

Machine learning, Raspberry Pi, Robotics

Machine Learning with Raspberry Pi

Although machine learning sounds like a high-tech term, we come across it every day without realizing it. For example, tasks such as filtering spam mail and automatically tagging Facebook photos are accomplished by machine learning algorithms. In recent years a new area of machine learning, known as deep learning, has been getting a lot of attention as a promising route towards artificial intelligence.

Until recently, deep learning was confined to big data centers. This is because deep learning requires very large data sets, which only big data-mining firms such as Google, Facebook and Microsoft have access to. To put this technology in everyone’s hands, a new startup, Jetpac, has made its deep learning technology available to anyone with a computing device (check their app). This is exciting because so many people carry mobile phones with plenty of computing power. Just see what can be done with this kind of democratization of technology in the above video.

Now coming to the Raspberry Pi: it has roughly 20 GFLOPS of computing power (almost the same as the Parallella board offers), thanks to its GPU. After Broadcom released the documentation for the GPU, Pete Warden did a great job porting his Deep Belief image recognition SDK to the Raspberry Pi. Today, after seeing this post on the Raspberry Pi blog, I followed his instructions and successfully ran the first example on my Pi.

Instructions to install Deep Belief on Raspberry Pi

This algorithm requires at least 128 MB of RAM dedicated to the GPU. To allocate that, we need to edit /boot/config.txt, which we do with the following command:

sudo nano /boot/config.txt

Then add the following line at the end of  /boot/config.txt

gpu_mem=128

Save and exit the editor. Now we have to reboot the Raspberry Pi to get a ‘clean’ area of memory:

sudo reboot

Install git with the following command:

sudo apt-get install git

We are now ready to install Deep Belief. Just follow the instructions below:

git clone https://github.com/jetpacapp/DeepBeliefSDK.git
cd DeepBeliefSDK/RaspberryPiLibrary
sudo ./install.sh

That’s it. You have installed one of the best machine learning algorithms on the Pi. To test whether everything is working, run the following commands:

cd ../examples/SimpleLinux/
make
sudo ./deepbelief 

If everything goes well, you should see the following output

0.016994    wool
0.016418    cardigan
0.010924    kimono
0.010713    miniskirt
0.014307    crayfish
0.015663    brassiere
0.014216    harp
0.017052    sandal
0.024082    holster
0.013580    velvet
0.057286    bonnet
0.018848    stole
0.028298    maillot
0.010915    gown
0.073035    wig
0.012413    hand blower
0.031052    stage
0.027875    umbrella
0.012592    sarong
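The numbers are the network’s confidence scores for each label, and they are not sorted; in this run the strongest guesses are ‘wig’ (0.073) and ‘bonnet’ (0.057). A tiny helper like the following (my own addition, not part of the SDK) sorts such output so the most likely labels come first.

def top_labels(text, n=5):
    """Parse 'probability label' lines, as printed above, highest first."""
    preds = []
    for line in text.strip().splitlines():
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue  # skip anything that is not a prediction line
        try:
            preds.append((float(parts[0]), parts[1].strip()))
        except ValueError:
            continue
    return sorted(preds, reverse=True)[:n]

# Example: top_labels(output_text) -> [(0.073035, 'wig'), (0.057286, 'bonnet'), ...]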

Pete Warden has explained how to implement this algorithm for various applications on GitHub. I would like to use it for my robotics project, so that my robot can recognize the objects around it, just like the Romo in the above video. I am planning to do this with OpenCV.
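Since I mention OpenCV, here is a rough sketch of the direction I have in mind: grab a frame from the webcam with OpenCV, save it to disk, and hand it to the deepbelief example binary. The file name and the assumption that the binary accepts an image path as its argument are mine; treat it as a starting point, not finished robot code.

import subprocess
import cv2

# Grab a single frame from the first webcam (device 0).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("frame.jpg", frame)
    # Assumes the SimpleLinux example accepts an image path as its argument.
    result = subprocess.check_output(["./deepbelief", "frame.jpg"])
    print(result.decode("utf-8", "ignore"))
else:
    print("could not read a frame from the webcam")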

Note:

If you don’t allocate sufficient RAM to the GPU, you may get the following error:

Unable to allocate 7778899 bytes of GPU memory
mmap error -1

References:

https://github.com/jetpacapp/DeepBeliefSDK/tree/master#getting-started-on-a-raspberry-pi

http://petewarden.com/2014/06/09/deep-learning-on-the-raspberry-pi/

http://www.raspberrypi.org/gpgpu-hacking-on-the-pi/

http://rpiplayground.wordpress.com/
