Thursday, 28 February 2019

Raspberry Pi Zero W on a 3-Axis Brushless Gimbal transmitting Low latency Video to a Laptop

When in doubt, put eyes on it


Here's an example of my Raspberry Pi Zero W mounted onto my DIY 3-axis gimbal. It transmits video to a laptop running Ubuntu 16.04, which receives the stream using a combination of UDP, GStreamer and OpenCV, written in Python. The gimbal is being controlled manually via a Sony PSP thumbstick connected to the gimbal.
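
For reference, the receiving side on the laptop looks roughly like the sketch below. This is a minimal example, assuming the Pi streams H.264 over RTP/UDP to port 5000 and that OpenCV was built with GStreamer support; the port and caps here are assumptions rather than my exact pipeline.

# Minimal UDP/GStreamer/OpenCV receiver sketch (port and caps are assumptions)
import cv2

pipeline = (
    'udpsrc port=5000 caps="application/x-rtp, media=video, '
    'encoding-name=H264, payload=96" ! '
    'rtph264depay ! avdec_h264 ! videoconvert ! appsink'
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()          # grab the latest decoded frame
    if not ok:
        break
    cv2.imshow("Pi Zero W stream", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()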


With enough time we will see some cool things happen. It's fair to say that during my time in Japan I was greatly influenced by how their robot platforms have been developed.

Pepper was a little bit underwhelming for me



Honda 3E-C18 had nice eyes


Kirobo Mini was kinda cute


But equally it helped me to understand how Japan uses perceived empathy in its robot platforms in order to 'help' the user feel more comfortable in accepting the technology. Japan is a really manipulative place; you see it everywhere.

It uses empathy in train signs to try to persuade you to conform


It makes sense, then, that robotics also uses empathy to persuade the user to behave as requested.

Question is, do you want to behave?


Friday, 8 February 2019

Object Tracking with OpenCV



Just a quick note to show my progress with drone object tracking, this time using OpenCV. The tracking quality is getting better... Note that there is no object classification going on in this example; it is just object tracking.

Tracking of a dirt bike:


Tracking of a person standing in a playing field:

I really like some of the different tracking methods in OpenCV; some work much better than others, each with its own strengths and weaknesses.
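
To give an idea of how simple this is to try, here is a rough single-object tracking sketch. It assumes OpenCV 3.x with the contrib modules installed; the tracker constructor names change between OpenCV versions.

# Rough single-object tracking sketch, assuming OpenCV 3.x with contrib modules
import cv2

cap = cv2.VideoCapture(0)                      # webcam, video file, or network stream
ok, frame = cap.read()

bbox = cv2.selectROI("Select target", frame, False)   # draw a box around the target
tracker = cv2.TrackerKCF_create()              # also worth trying MOSSE, CSRT, MedianFlow
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)        # follow the target into the new frame
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()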

I hope to eventually add image classification as an optional element.

Friday, 18 January 2019

APM Plane Person Detection: Automating Search & Rescue.




With the advent of small-scale autonomous flight we have witnessed amazing video footage. Often these small UAVs fly a pre-planned route set from the ground station and uploaded to the vehicle. The vehicle follows this route from start to finish recording video footage as it moves.

Route setting using ground station software (Mission Planner)

Broad-area flights are typically performed using fixed-wing aircraft, which offer greater flight times and cover larger areas than multi-rotor vehicles.

Moving on from just recording video footage, UAVs are increasingly being used in search and rescue situations. Winter makes it particularly challenging for rescue workers to locate lost persons quickly and with minimal danger to life.


One of the challenges of using UAVs in search and rescue operations is that the operator must sit (or stand) during each flight and visually observe the flight footage in real time in the hope of detecting a lost or injured person. This is time consuming, causing the operator fatigue and boredom, and eventually a loss of focus on accurately detecting lost individuals.


This can have a negative impact on the search and rescue operation.  


In the case of a lost female hiker in 2018, a drone operator flew, recorded, and uploaded hours of video footage hoping this would help locate the missing hiker.

It is a challenging task which not only causes search fatigue, but can also distract the search volunteers from continuing their search in organised groups, believing that a drone is covering the area. It is important to separate drone search from land search.


Automatic person detection is emerging in robotics and UAV development, using small, lightweight systems that incorporate object detection and classification algorithms. These systems are light enough to mount on fixed-wing vehicles and embed into ground station hardware without significantly reducing flight times.


Here we can see the use of live object detection with a flight ground control system (APM Planner). 


By mapping an autonomous flight path using a UAV ground station and launching the UAV, we can use the person detection algorithm to take the hard work out of search and rescue drone flights.

We can read the serial data from the UAV and, upon detection of a missing person, trigger a snapshot photo and record the GPS coordinates and timestamp. This data can be recorded as a .csv file:

(Longitude, Latitude, Time, File Name) 

The accuracy and detection threshold can be adjusted in the Python .py file, allowing the software to be customised according to how the terrain and conditions are affecting detection rates.
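
As a rough illustration of the logging step described above, here is a hypothetical sketch using pymavlink to read the UAV telemetry. The serial device, the threshold value, and the person_detected() stub are placeholders for illustration, not the actual code.

# Hypothetical detection-logging sketch; device path, threshold and the
# person_detected() stub are placeholders only.
import csv
import time
import cv2
from pymavlink import mavutil

DETECTION_THRESHOLD = 0.6                      # tune for terrain and conditions

def person_detected(frame):
    # Placeholder for the actual detector; should return a confidence score
    return 0.0

mav = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)   # assumed telemetry link
cap = cv2.VideoCapture(0)                      # assumed camera source

with open('detections.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    while True:
        msg = mav.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
        lat, lon = msg.lat / 1e7, msg.lon / 1e7     # MAVLink sends degrees * 1e7
        ok, frame = cap.read()
        if not ok:
            continue
        if person_detected(frame) >= DETECTION_THRESHOLD:
            name = 'snap_%d.jpg' % int(time.time())
            cv2.imwrite(name, frame)                # snapshot photo
            writer.writerow([lon, lat, time.strftime('%H:%M:%S'), name])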

As an early-stage proof of concept this is simply to demonstrate the possibility of machine learning and artificial intelligence simplifying difficult tasks that may be too time consuming to be viable as a human-oriented task. Image quality and flight altitude will factor towards detection accuracy, as will weather and landscape conditions. However, by using this in a methodical and co-ordinated setup utilising multiple UAVs, each flying a different flight pattern, we could potentially reduce search and rescue times over challenging terrain.


Links:
https://dronebelow.com/2018/11/20/drones-swarms-and-ai-for-search-and-rescue-operations-at-sea/



Tuesday, 15 January 2019

TK1 with APM Planner

Exploring the modified TK1 a little more today; here are the system specs.

Updating to 16.04 also seems to successfully install the correct GPU drivers for the Jetson TK1 (GK20A). It's really nice to have this update; even though it is a 32-bit system, I can still make use of it. For example, on the Nvidia Jetson TK1 with 14.04 I was previously not able to install APM Planner for controlling my drone, plane or robot vehicle. Now, however, it seems to install just fine.

Be aware that this is APM Planner for ARMv7

Installation was as follows:
1) On your TK1 download the APM Planner software here
2) cd /home/ubuntu/Downloads
3) sudo dpkg -i apm_planner_2.0.26_xenial_armhf.deb
4) sudo apt-get -f install
5) sudo dpkg -i apm_planner_2.0.26_xenial_armhf.deb

APM Planner should show up in your software now, and run just fine. There is more information here about installation.

I have a wireless joystick 



and APM Planner allows the use of gamepad controllers; all you have to do is enable it under File > Joystick


Connecting the GameSir gamepad involves removing the wireless USB dongle and plugging it into the TK1. Turn on the GameSir controller and voila.


Once connected to our vehicle we can control the throttle, yaw, pitch, roll and flight mode. I like having no wires from the gamepad to the computer.

Now to program an autonomous flight and trigger a response to a detected object.

Using the modified Jetson TK1 we have access to a range of devices, such as AI using a Movidius/Intel Neural Compute Stick, or an RPLidar for GPS-free SLAM navigation. I couldn't utilise these on-vehicle devices before, but now I can.

Thanks for reading.

Friday, 4 January 2019

360 degree Video with Image Classification

Hello, today I would like to show you how to capture 360 degree live video with an image classifier. This is useful on robotics platforms for detecting objects in all directions.






Traditionally in robotics, to achieve a 360 degree view we use an omnidirectional lens.



Previously, I used a cheap dollar store phone lens to create a wider image capture to perform AI image classification.

We can also use a much wider Fisheye lens to achieve spherical video capture. These are available cheaply on various online stores. Look around for them.

We can capture images with a much wider field of view.


Using stream_infer.py on the Movidius Neural Compute Stick we can improve our object detection and image classification area coverage. Initially we just rely on the fisheye video to show us objects, but this is not great as it does not provide a general direction. Adding a digital compass to our Raspberry Pi will also provide a direction to our objects, allowing us to turn the robot to face the object or gesture in that direction.
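
As a rough sketch of the idea, the bearing to an object can be estimated from the compass heading plus the object's horizontal offset in the frame. The field of view and frame width below are assumptions, and the heading is presumed to come from the magnetometer driver.

# Rough sketch: turn a detection's horizontal position into a compass bearing.
# FOV and frame width are assumptions for a particular fisheye setup.
FOV_DEG = 180.0          # assumed horizontal field of view of the fisheye
FRAME_WIDTH = 640        # assumed capture width in pixels

def object_bearing(compass_heading_deg, object_x_px):
    # Offset of the object from the image centre, mapped to degrees
    offset_deg = (object_x_px - FRAME_WIDTH / 2.0) * (FOV_DEG / FRAME_WIDTH)
    return (compass_heading_deg + offset_deg) % 360.0

# Example: compass reads 90 degrees (east), object near the right edge of the frame
print(object_bearing(90.0, 600))   # about 169 degrees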

Another approach is to dewarp the image. There are some resources on GitHub that allow us to do this.
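
Along the lines of those GitHub resources, a simple polar unwrap with OpenCV's remap looks roughly like the sketch below. The centre and radius values are assumptions that depend on the particular lens and capture resolution.

# Rough polar-unwrap dewarping sketch; centre and radii depend on your lens
import cv2
import numpy as np

def unwrap(img, cx, cy, r_inner, r_outer, out_w=720, out_h=180):
    # Build a remap table that turns the donut-shaped fisheye view into a panorama
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))
    theta = 2.0 * np.pi * xs / out_w
    r = r_inner + (r_outer - r_inner) * ys / out_h
    map_x = (cx + r * np.cos(theta)).astype(np.float32)
    map_y = (cy + r * np.sin(theta)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

frame = cv2.imread('fisheye.jpg')                    # assumed test image
panorama = unwrap(frame, cx=320, cy=240, r_inner=40, r_outer=230)
cv2.imwrite('panorama.jpg', panorama)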


It is also possible to dewarp live video in real time while using the Movidius Neural Compute Stick.

Thanks for Reading.


Running AI Scripts on the Nvidia Jetson TK1 + Movidius Neural Compute Stick

So, what now that we have built our AI Franken-computer? Now that we are free from wall plug sockets and are battery mobile? I guess we should run some scripts for the Neural Compute Stick.

In this video we see the DIY AI computer we built running an AI script which captures the live wireless camera stream and detects objects in the video at a very fast and smooth rate (video 30 fps, latency less than 0.08 s, object detection faster than 60 fps).

We should go ahead and install the Intel code examples from ncappzoo to allow us to test and experiment with more than just checking that the NCS is connected. Remember that in the previous article we changed the ncsdk.conf file to allow the Compute Stick to run on ARM devices; now we can install the examples.

Install ncappzoo examples:
cd /home/ubuntu/workspace/ncsdk
git clone https://github.com/movidius/ncappzoo
cd /home/ubuntu/workspace/ncsdk/ncappzoo
make all

Some examples will fail, but these will be the ones using TensorFlow, which we are not using right now.

Once installed and compiled, we can look at how to run code examples.

If we try to run many of the examples we are usually presented with error messages. This usually means that we need a 'graph' file for whatever network we are using (GoogLeNet/SqueezeNet/YOLO/FaceNet/etc.); each needs a graph file, and these are missing on ARM platforms. We need to use an Ubuntu 16.04 laptop, make install the full NCSDK and make all the examples on it. This will then create the graph files. Go ahead and look in the NCSDK folder on the laptop, copy the graph files to a USB stick, and transfer them to the TK1 into the same folders in ncappzoo/caffe.

Running the examples, we can now see them working. In the video we are running stream_infer.py, which we can use to experiment with different image classifiers such as:
1) AlexNet
2) GenderNet
3) GoogLeNet
4) SqueezeNet

We can also add our own models such as SSD MobileNet or YOLOv2/v3; we will cover how to do this in a future article.
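
For anyone curious what the inference loop in these examples boils down to, here is a rough sketch against the NCSDK v1 Python API (mvnc). The graph file name and input size are assumptions; stream_infer.py itself does more than this (preprocessing, streaming, overlays).

# Rough NCSDK v1 (mvnc) inference sketch; graph filename and input size are assumptions
import cv2
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()              # find the Neural Compute Stick
device = mvnc.Device(devices[0])
device.OpenDevice()

with open('graph', 'rb') as f:                 # graph file compiled on the laptop
    graph = device.AllocateGraph(f.read())

img = cv2.imread('test.jpg')
img = cv2.resize(img, (224, 224)).astype(np.float16)   # assumed network input size

graph.LoadTensor(img, 'user object')           # send the image to the stick
output, _ = graph.GetResult()                  # class probabilities come back
print('top class:', int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()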

Using the stream_infer.py script also allows us to experiment with:
1) a video file (mp4)
2) USB Webcam
3) DIY Wireless Streaming Camera


I built my Wireless Streaming Camera using:
1) Raspberry Pi Zero W
2) Raspberry Pi Zero Case (from Vstone Robot Shop in Tokyo - Akihabara)
3) Wide angle camera phone lens from Poundland/Dollar Store/100 Yen shop
4) 5v Power Bank (Any will do)

The wireless streaming camera allows us to walk around with it, capturing and classifying objects within a radius of 100 m, or 600 m in open space with a WiFi repeater. I can also mount it on a drone or RC airplane to fly over open spaces and classify objects.



 



In the next article I will show how to stream the wireless camera to the AI TK1 computer.

Thanks for reading


Using Movidius Neural Compute Stick with Nvidia Jetson TK1

Here I show how to use the Movidius Neural Compute Stick with the Nvidia TK1 Board.


Most of us are ready to throw the Jetson TK1 into the trash. It doesn't really do much. But if we update the software to Ubuntu 16.04, it might allow us to use the Movidius Neural Compute Stick with its USB 3.0 port.



First up, after updating to 16.04, I tried to install the standard NCSDK

Get Started:
mkdir -p ~/workspace
cd ~/workspace
git clone https://github.com/movidius/ncsdk.git
cd ~/workspace/ncsdk
make install

Make the Examples:
cd ~/workspace/ncsdk
make examples

Test it's Working:
cd /home/ubuntu/workspace/ncsdk/examples/apps/hello_ncs_py
python3 hello_ncs.py

Should Give:
"Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
NCS device working."

However, it doesn't work like this for ARMv7 devices.

We need to follow the Raspberry Pi method of installing the Neural Compute Stick. This means that we cannot install:
1) Full NCSDK software
2) Tensorflow

So before making the examples, we have to edit the ncsdk.conf file. Find its location and open it in a text editor.

Original:
MAKE_PROCS=1
SETUPDIR=/opt/movidius
VERBOSE=yes
SYSTEM_INSTALL=yes
CAFFE_FLAVOR=ssd
CAFFE_USE_CUDA=no
INSTALL_TENSORFLOW=yes
INSTALL_TOOLKIT=yes

New Edited:
MAKE_PROCS=1
SETUPDIR=/opt/movidius
VERBOSE=yes
SYSTEM_INSTALL=yes
CAFFE_FLAVOR=ssd
CAFFE_USE_CUDA=no
INSTALL_TENSORFLOW=no
INSTALL_TOOLKIT=no

Now rerun:
cd ~/workspace/ncsdk

make examples

This allows us to re-run the test to see if it is working:
cd /home/ubuntu/workspace/ncsdk/examples/apps/hello_ncs_py
python3 hello_ncs.py

We should have a good result now! It is connected :)

This process should also be the same for the Raspberry Pi 2/3 using the latest Raspbian software, which is equivalent to Ubuntu 16.04.

Now how to run examples?

Next Article I show how to run example code on the TK1+Neural Compute Stick in a clever way.

Tegra Hardware Specs on 16.04

System information when running AI Python code on the TK1/Neural Compute Stick. It runs much more smoothly than the Raspberry Pi or the former 14.04 TK1 setup, and the board is suddenly back in the race again without spending anything.
How to beat the tech AI spending race...