Friday 18 January 2019

APM Plane Person Detection: Automating Search & Rescue.




With the advent of small-scale autonomous flight we have seen some amazing video footage. Often these small UAVs fly a pre-planned route that is set in the ground station software and uploaded to the vehicle. The vehicle then follows this route from start to finish, recording video as it moves.

Route setting using ground station software (Mission Planner)

Broad-area flights are typically performed using fixed-wing aircraft, which offer greater flight times and cover larger areas than multi-rotor vehicles.

Moving beyond simply recording video footage, UAVs are increasingly being used in search and rescue situations. Winter makes it particularly challenging for rescue workers to locate lost persons quickly and with minimal danger to life.


One of the challenges of using UAVs in search and rescue operations is that the operator must sit (or stand) through each flight and visually observe the footage in real time in the hope of spotting a lost or injured person. This is time consuming, and the resulting fatigue and boredom eventually lead to a loss of concentration and missed detections.


This can have a negative impact on the search and rescue operation.  


In the case of a lost female hiker in 2018, a drone operator flew, recorded and uploaded hours of video footage in the hope that it would help locate the missing hiker.

Reviewing that footage is a challenging task which not only causes search fatigue, but can also distract volunteers from continuing their search in organised groups, in the belief that a drone is already covering the area. It is important to keep the drone search separate from the land search.


Automatic person detection is an emerging area in robotics and UAV development, using small, lightweight systems that incorporate object detection and classification algorithms. These systems are light enough to mount on fixed-wing vehicles and embed into ground station hardware without significantly reducing flight times.


Here we can see the use of live object detection with a flight ground control system (APM Planner). 


By mapping an autonomous flight path in the UAV ground station, launching the UAV, and letting the person detection algorithm watch the video feed, we can take much of the hard work out of search and rescue drone flights.

We can read the serial telemetry data from the UAV and, upon detection of a missing person, trigger a snapshot photo and record the GPS coordinates and a timestamp. This data can be written to a .csv file:

(Longitude, Latitude, Time, File Name) 
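
A rough sketch of how that logging could look (I'm assuming a pymavlink serial connection here; the port, baud rate, photo filename and detection trigger are placeholders rather than the exact script):

# Minimal sketch: log a detection to CSV with the UAV's current GPS position.
# Assumes a MAVLink telemetry link and the pymavlink library; port and baud
# rate depend on your telemetry radio.
import csv, time
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master.wait_heartbeat()

def log_detection(csv_path='detections.csv', photo_name='snapshot_0001.jpg'):
    # Wait for the next GLOBAL_POSITION_INT message and record it
    msg = master.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
    lon = msg.lon / 1e7   # MAVLink reports degrees * 1e7
    lat = msg.lat / 1e7
    with open(csv_path, 'a', newline='') as f:
        csv.writer(f).writerow([lon, lat, time.strftime('%Y-%m-%d %H:%M:%S'), photo_name])

Each row written this way matches the (Longitude, Latitude, Time, File Name) format above.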

The detection accuracy and confidence threshold can be adjusted in the Python .py file, allowing the software to be tuned according to how the terrain and conditions are affecting detection rates.
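
As a trivial illustration of the kind of tweak I mean (the variable and function names in the real script will differ):

# Illustrative only: raise the threshold to cut false positives over cluttered
# terrain, lower it when snow or poor light washes out the subject.
DETECTION_THRESHOLD = 0.60   # confidence between 0.0 and 1.0

def keep_detection(class_label, confidence):
    return class_label == 'person' and confidence >= DETECTION_THRESHOLD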

As an early-stage proof of concept, this simply demonstrates the potential of machine learning and artificial intelligence to simplify difficult tasks that may be too time consuming to be viable as a human-oriented task. Image quality and flight altitude will affect detection accuracy, as will weather and landscape conditions. However, used in a methodical and co-ordinated setup with multiple UAVs, each flying a different flight pattern, we could potentially reduce search and rescue times over challenging terrain.


Links:
https://dronebelow.com/2018/11/20/drones-swarms-and-ai-for-search-and-rescue-operations-at-sea/



Tuesday 15 January 2019

TK1 with APM Planner

Exploring the modified TK1 a little more today; here are the system specs:

Updating to 16.04 also seems to install the correct GPU drivers for the Jetson TK1 (GK20A). It's really nice to have this update; even though it is a 32-bit system, I can still make use of it. For example, on the Nvidia Jetson TK1 under 14.04 I was not able to install APM Planner for controlling my drone, plane or robot vehicle. Now it installs just fine.

Be aware that this is APM Planner built for ARMv7.

Installation was as follows:
1) On your TK1 download the APM Planner software here
2) cd /home/ubuntu/Downloads
3) sudo dpkg -i apm_planner_2.0.26_xenial_armhf.deb
4) sudo apt-get -f install
5) sudo dpkg -i apm_planner_2.0.26_xenial_armhf.deb

APM Planner should show up in your software list now, and run just fine. There is more information here about installation.

I have a wireless joystick 



APM Planner allows the use of gamepad controllers; all you have to do is enable it under File > Joystick.


Connecting the GameSir gamepad involves removing its wireless USB dongle and plugging it into the TK1. Turn on the GameSir controller and voila.


Once connected to our vehicle we can control the throttle, yaw, pitch, roll and flight mode. I like having no wires between the gamepad and the computer.

Now to program an autonomous flight and trigger a response to a detected object.

Using the modified Jetson TK1 we also gain access to a range of devices, such as AI inference with a Movidius/Intel Neural Compute Stick, or an RPLidar for GPS-free SLAM navigation. I couldn't use these on-vehicle devices before, but now I can.

Thanks for reading.

Friday 4 January 2019

360 degree Video with Image Classification

Hello, today I would like to show you how to capture 360 degree live video with an image classifier. This is useful on robotics platforms for detecting objects in all directions.






Traditionally, to achieve a 360 degree view in robotics we would use an omnidirectional lens.



Previously, I used a cheap dollar store phone lens to create a wider image capture to perform AI image classification.

We can also use a much wider fisheye lens for near-spherical video capture. These are available cheaply from various online stores; look around for them.

This lets us capture images with a much wider field of view.


Using stream_infer.py with the Movidius Neural Compute Stick we can improve the area covered by our object detection and image classification. Relying on the fisheye video alone to show us objects is not ideal, as it does not give us their general direction. Adding a digital compass to the Raspberry Pi gives each detected object a bearing, allowing us to turn the robot to face the object or gesture in its direction.
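
A rough sketch of the idea (read_compass_heading() is a placeholder for whatever magnetometer driver you use, and the linear pixel-to-angle mapping is only an approximation for a fisheye lens):

# Combine the robot's compass heading with a detection's horizontal position
# in the frame to get an approximate bearing to the object.
FOV_DEGREES = 180.0   # horizontal field of view of the lens (assumed)
FRAME_WIDTH = 640     # width of the captured frame in pixels

def read_compass_heading():
    # Placeholder: return the robot's heading in degrees (0 = north)
    raise NotImplementedError

def object_bearing(box_centre_x):
    # Offset of the object from the image centre, mapped onto the lens FOV
    offset = (box_centre_x - FRAME_WIDTH / 2.0) / FRAME_WIDTH * FOV_DEGREES
    return (read_compass_heading() + offset) % 360.0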

Another approach is to dewarp the image. There are resources on GitHub that allow us to do this.
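
As a minimal sketch of one common technique, a polar unwrap with OpenCV (the image-circle centre and radius here are placeholder values you would measure for your own lens):

# Unwrap the circular fisheye image into a flat panoramic strip with cv2.remap.
import cv2
import numpy as np

def build_unwrap_maps(cx, cy, radius, out_w=1440, out_h=360):
    # For every output pixel, work out which source pixel it comes from
    theta = np.linspace(0, 2 * np.pi, out_w, dtype=np.float32)
    r = np.linspace(0, radius, out_h, dtype=np.float32)
    theta_grid, r_grid = np.meshgrid(theta, r)
    map_x = cx + r_grid * np.cos(theta_grid)
    map_y = cy + r_grid * np.sin(theta_grid)
    return map_x.astype(np.float32), map_y.astype(np.float32)

map_x, map_y = build_unwrap_maps(cx=320, cy=240, radius=240)
frame = cv2.imread('fisheye.jpg')
panorama = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

Because the maps depend only on the lens, they can be built once and reused for every frame, which is what makes dewarping live video in real time practical.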


It is also possible to dewarp live video in real time while classifying it with the Movidius Neural Compute Stick.

Thanks for Reading.


Running AI Scripts on the Nvidia Jetson TK1 + Movidius Neural Compute Stick

So, what now that we have built our AI Franken-computer and are free from wall sockets, running on battery power? I guess we should run some scripts on the Neural Compute Stick.

In this video we see the DIY AI computer we built running a script which captures live video from a wireless streaming camera and detects objects in it at a fast, smooth rate (video at 30 fps, latency under 0.08 s, object detection faster than 60 fps).

We should go ahead and install Intel's code examples from ncappzoo, which let us test and experiment beyond simply checking whether the NCS is connected. Remember that in the previous article we changed the ncsdk.conf file so that the Compute Stick runs on ARM devices; with that done, we can now install the examples.

Install ncappzoo examples:
cd /home/ubuntu/workspace/ncsdk
git clone https://github.com/movidius/ncappzoo
cd /home/ubuntu/workspace/ncsdk/ncappzoo
make all

Some examples will fail to build, but these are the ones that use TensorFlow, which we are not using right now.

Once installed and compiled, we can look at how to run code examples.

If we try to run many of the examples we are presented with error messages. This usually means we are missing the 'graph' file for whichever network we are using (GoogLeNet, SqueezeNet, YOLO, FaceNet, etc.); each needs a compiled graph file, and these cannot be generated on ARM platforms. We need to use an Ubuntu 16.04 laptop, install the full NCSDK on it, and make all the examples there. This creates the graph files. Look in the NCSDK folders on the laptop, copy the graph files to a USB stick, and transfer them to the TK1 into the matching folders under ncappzoo/caffe.
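
For reference, this is roughly what the examples do with a graph file once it is in place, using the NCSDK v1 Python API (the input size and preprocessing depend on which network the graph was compiled from, so treat the values below as placeholders):

# Open the NCS, load a compiled graph file and run a single inference.
import numpy
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

with open('graph', mode='rb') as f:    # graph compiled on the x86 laptop
    graph_blob = f.read()
graph = device.AllocateGraph(graph_blob)

image = numpy.zeros((227, 227, 3), numpy.float16)   # stand-in for a real, preprocessed frame
graph.LoadTensor(image, 'user object')
output, user_obj = graph.GetResult()                 # vector of class scores

graph.DeallocateGraph()
device.CloseDevice()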

Running the examples, we can now see them working. In the video we are running stream_infer.py. We can use stream_infer.py to allow us to experiment with different image classifiers such as:
1) AlexNet
2) GenderNet
3) GoogLeNet
4) SqueezeNet

We can also add our own image classifiers, such as SSD MobileNet or YOLOv2/v3; we will cover how to do this in a future article.

Using the stream_infer.py script also allows us to experiment with different video sources (opening each one is sketched just after this list):
1) a video file (mp4)
2) USB Webcam
3) DIY Wireless Streaming Camera
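
A minimal sketch of how each source can be opened with OpenCV (the stream URL for the Pi Zero W camera is just a placeholder for whatever your streamer serves):

# Pick one of the three input sources and read frames from it.
import cv2

SOURCE = 'wireless'   # 'file', 'webcam' or 'wireless'

if SOURCE == 'file':
    cap = cv2.VideoCapture('flight_video.mp4')        # a recorded mp4
elif SOURCE == 'webcam':
    cap = cv2.VideoCapture(0)                          # first USB webcam
else:
    cap = cv2.VideoCapture('http://192.168.1.50:8080/stream.mjpg')  # Pi Zero W stream

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... hand the frame to the classifier here ...
cap.release()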


I built my Wireless Streaming Camera using:
1) Raspberry Pi Zero W
2) Raspberry Pi Zero Case (from Vstone Robot Shop in Tokyo - Akihabara)
3) Wide angle camera phone lens from Poundland/Dollar Store/100 Yen shop
4) 5v Power Bank (Any will do)

The wireless streaming camera lets us walk around with it, capturing and classifying objects within a radius of about 100 m, or up to 600 m in open space with a WiFi repeater. I can also mount it on a drone or RC airplane to fly over an area and classify objects.



 



In the next article I will show how to stream the wireless camera to the AI TK1 computer.

Thanks for reading


Using Movidius Neural Compute Stick with Nvidia Jetson TK1

Here I show how to use the Movidius Neural Compute Stick with the Nvidia TK1 Board.


Most of us are ready to throw the Jetson TK1 into the trash; it doesn't really do much any more. But if we update the software to Ubuntu 16.04 we may be able to use the Movidius Neural Compute Stick with its USB 3.0 port.



First up, after updating to 16.04, I tried to install the standard NCSDK.

Get Started:
mkdir -p ~/workspace
cd ~/workspace
git clone https://github.com/movidius/ncsdk.git
cd ~/workspace/ncsdk
make install

Make the Examples:
cd ~/workspace/ncsdk
make examples

Test it's Working:
cd /home/ubuntu/workspace/ncsdk/examples/apps/hello_ncs_py
python3 hello_ncs.py

Should Give:
"Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
NCS device working."

However, it doesn't work like this on ARMv7 devices.

We need to follow the Raspberry Pi method of installing the Neural Compute Stick. This means that we cannot install:
1) Full NCSDK software
2) Tensorflow

So before making the examples, we have to edit the ncsdk.conf file. Find its location and open it in a text editor.

Original:
MAKE_PROCS=1
SETUPDIR=/opt/movidius
VERBOSE=yes
SYSTEM_INSTALL=yes
CAFFE_FLAVOR=ssd
CAFFE_USE_CUDA=no
INSTALL_TENSORFLOW=yes
INSTALL_TOOLKIT=yes

New Edited:
MAKE_PROCS=1
SETUPDIR=/opt/movidius
VERBOSE=yes
SYSTEM_INSTALL=yes
CAFFE_FLAVOR=ssd
CAFFE_USE_CUDA=no
INSTALL_TENSORFLOW=no
INSTALL_TOOLKIT=no

Now rerun:
cd ~/workspace/ncsdk

make examples

This allows us to re-run the test to see if it is working:
cd /home/ubuntu/workspace/ncsdk/examples/apps/hello_ncs_py
python3 hello_ncs.py

We should have a good result now! It is connected :)

This process should also be the same for the Raspberry Pi 2/3 using the latest Raspbian, which is roughly equivalent to Ubuntu 16.04.

Now, how do we run the examples?

In the next article I show how to run example code on the TK1 + Neural Compute Stick in a clever way.

Tegra Hardware Specs on 16.04

System information when running AI Python code on the TK1 with the Neural Compute Stick. It runs far more smoothly than on the Raspberry Pi or the TK1 under 14.04, and the board is suddenly back in the race again without spending anything.
How to beat the tech AI spending race...

DIY Ghetto AI Development Computer

Here, in this next article, I show how I built a cheap AI development computer using spare parts, cardboard, duct tape, and a LiPo battery.

At the back we see the TK1; on the right are a Raspberry Pi (which I can switch over to), the LiPo battery and the Movidius Neural Compute Stick, all working together for fast AI video detection.

Ingredients

1) Old Nvidia TK1 Board updated to Ubuntu 16.04
2) 17" Laptop LCD Screen with 32 pin edp socket
3) Old Lipo Battery


4) Pocket Keyboard from Raspberry Pi


5) Movidius Neural Compute Stick


6) Duct Tape
7) Cardboard

Mash it all together

And you get this:

The DIY AI computer. Sorry for being lazy on this article, but things move fast, money is tight and I have to keep going.


Next article is how to connect Movidius Neural Compute Stick to Jetson TK1.

Thanks for reading

Upgrading Nvidia Jetson TK1 from 14.04 to 16.04

This blog will continue with further posts leading along a path of hacking, modding, updates and general tinkering.

Since I have had my Movidius Neural Compute Stick I haven't really touched my Nvidia Jetson TK1 board for a while. It has just been collecting dust.

Recently I thought what if I could connect both the TK1 with the Neural Compute Stick?

So I gave it a shot.

Hey, if the Raspberry Pi can handle the Movidius Neural Compute Stick... so can the TK1.


Very quickly I discovered that the TK1 board is supposedly incompatible: the internet tells us the TK1 runs Ubuntu 14.04 and is a 32-bit board.

But I don't quit.

Here I found that some people have updated the TK1 to 16.04 >>>> LINK

They recommend starting from a clean install of 14.04 before updating to 16.04, as the TK1 only has 16 GB of eMMC storage; if the update procedure exceeds that, the device will brick and you will have to connect it to a laptop over USB and flash 14.04 again, which takes forever.

I just gave it a shot anyway. I deleted the CUDA examples, all large folders, the contents of the Downloads folder, everything that was not a system file. This left me with 5 GB of eMMC space with which to update.

I followed the commands as per guide:
  1. sudo add-apt-repository main
  2. sudo add-apt-repository universe
  3. sudo add-apt-repository multiverse
  4. sudo add-apt-repository restricted
  5. sudo apt-get update && sudo apt-get upgrade
  6. sudo apt-mark hold xserver-xorg-core
  7. sudo do-release-upgrade
  8. sudo apt-get install gnome-session-flashback

I was very patient. I panicked a couple of times, convinced it had bricked, but eventually I booted into Ubuntu 16.04 Xenial on the Jetson TK1. Cool. This brought a whole bunch of improvements: I could install Chromium, which ran better than before, allowing 1080p YouTube and video streaming, Spotify, and uBlock Origin ad blocking, none of which was available on 14.04.


Perhaps the TK1 isn't going into the box on the shelf just yet. Perhaps I don't need to blow thousands on the new Nvidia Xavier board.


So this made me happy. I hope it works for you too.

Next article I will build something with it.