Tuesday, 20 February 2018

Mount Sinabung

Today I saw some spectacular footage showing the dramatic eruption of the Mount Sinabung volcano. What really got me was this one video:

It looked like useful raw material to work with.


I extracted the frames as JPEGs


Then I stitched the frames together to get a panorama
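For anyone wanting to try the same thing, here is a minimal sketch of both steps using OpenCV in Python. The file names are placeholders, it assumes OpenCV 4.x (cv2.Stitcher_create), and keeping only every tenth frame is a simplification so the stitcher isn't fed hundreds of near-identical images.

```python
import cv2

# Step 1: extract frames from the video as JPEGs
# ("sinabung.mp4" is a placeholder for the downloaded clip)
cap = cv2.VideoCapture("sinabung.mp4")
frames = []
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 10 == 0:  # adjacent frames overlap too much to all be useful
        cv2.imwrite("frame_%04d.jpg" % i, frame)
        frames.append(frame)
    i += 1
cap.release()

# Step 2: stitch the kept frames into a single panorama
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```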


I get a better perspective of the scale now, and I hope you do too. I liked that the person capturing the video seemed to scan the eruption with his camera, covering the whole scene without quite being able to fit it all in the frame.

Sunday, 18 February 2018

Adventure Van

Having a van lets me do things like this:


Friday, 2 February 2018

World's Smallest AI Drone

(48% confidence: coffee cup)




Here is my experiment with the Movidius Neural Compute Stick to create a tiny drone which can utilise object detection. It is tested using both SqueezeNet and GoogLeNet; AlexNet is a bit slow these days. All use Caffe as the framework for object detection. Results vary. You can learn more about using the NCAPPZOO here.

Video is captured in real time with low latency, low enough to fly by video.
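For the curious, here is a minimal sketch of how a single frame goes through the stick, using the NCSDK v1 Python API (the same API the NCAPPZOO examples are built on). It assumes a Caffe model already compiled to a binary 'graph' file with mvNCCompile; the camera index, input size and mean values below are placeholders rather than my exact settings.

```python
import cv2
import numpy
from mvnc import mvncapi as mvnc

# Find and open the Neural Compute Stick
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the compiled network (produced beforehand with mvNCCompile)
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Grab one frame and preprocess it GoogLeNet-style
cap = cv2.VideoCapture(0)  # placeholder: the drone's camera feed
ok, frame = cap.read()
img = cv2.resize(frame, (224, 224)).astype(numpy.float32)
img -= (104.0, 117.0, 123.0)  # typical Caffe BGR mean subtraction

# Run inference on the stick and read back the class probabilities
graph.LoadTensor(img.astype(numpy.float16), "frame")
output, _ = graph.GetResult()
print("top class:", output.argmax(), "confidence:", output.max())

graph.DeallocateGraph()
device.CloseDevice()
cap.release()
```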


I guess as I get more into my Neural Compute Stick, I will add experiments here. At the moment I'm impressed with the speed of the Movidius processing power, its size, cost, forum and support. Intel are really getting into the AI game. My Nvidia Jetson TK1 is now pretty much redundant for AI experiments. I can also do all this live capture on my Raspberry Pi 2, albeit a little more limited in which CNN you can use, as TensorFlow is not working on the Pi. This all works just as well on a larger 230mm mini drone.

What do you want the drone to do upon detection of what object? Snap a photo? Tweet it?


Did I just create the World's Smallest AI Drone?


I went to watch The Greatest Showman with my mother last night and I have that damned song stuck in my head.
...Will ne-ver beee enuuuuuuuuu --- uuuuughhh.

Sing it with me..

Those damn onions again 😢

Here's the longer video. I will make more as I progress.

Sunday, 28 January 2018

Stealth London Home


I have been living in London for some time now, but I have always struggled to meet the cost of living here. Some months would be financially good, and others would push me to the limit. What I have noticed in London is the sheer number of people living in vans, coaches and buses. The average London rent for a single person in 2017 is around £800 a month, and upwards of that if you want your own place instead of sharing a room. For minimum wage workers like myself, the self-employed, gig economy workers, low wage earners, and non-state employees, the rents are already too high. Most workers in this category are likely to be working to survive, not working for a living, if you know what I'm saying. Many of these low-wage industries are struggling to find employees, not because of a lack of demand, but because the costs (the maths) of living in the city are not sustainable for us.


Some of the people coming to London are particularly inventive in their methods of balancing themselves financially. Imagine being able to remove that huge elephant in the room that is the monthly rent bill; it is by far the largest obstacle to living in London or any major city. Remove that monthly amount and your worries about eviction and homelessness are alleviated for a while. Social housing is pretty much not going to help you in London, so forget that: you have to have lived in the city for at least two years to even be considered for government help. You also have to show vulnerability, be it health reasons, violence, or seeking asylum from oppression, and you have to have documented proof of all of these things. There are many more requirements the government adds to this list, and just when you think you have them all covered, they will add some more. A sad truth right now is that if you are White, British, and male, you can pretty much forget about receiving government assistance; it is a Titanic moment, women and children first. But is the country sinking? I'm not so sure it is. Maybe it is just false policy.


Enough politics. Van living in the city: living in a van will help you as a low-wage worker to earn money and keep it in your pocket.

By fitting out the van properly, you can even live well, with heating, insulation, washing, cooking, internet, lighting and water: all the things you normally need to get by. You can also rent a postal address these days for a monthly fee.

If you have more money you could possibly buy and live on a houseboat.

So, after working hard for a while and eking out a few pounds, I managed to save enough to buy a van. I spent a lot of time thinking about my needs and decided a larger van would be better for longer-term living in greater comfort. The VW camper is great for weekends and narrow roads, but for the longer term it is a cramped place that will quickly make you an unwell employee. A Luton-size box van is a popular choice, and it is what I opted for. A general rule is to go for the newest van you can afford. A fully built motorhome of similar size that is showroom new will cost you over £60,000.


The van I bought seemed to be some type of mail van, with internal shelving and racking. I had to clear all this out until I had a bare internal unit; a hammer and chisel are good for removing rivets. By this stage the neighbours will be gossiping about the gypsies moving in. Stand strong. London is a conservative area, conversely filled with Labour-voting workers aspiring to be conservatives. Arbeit macht frei ("work sets you free"), but not for the creators of this phrase.

Some things I fitted or self-built into my van include an off-grid heater to keep me warm at night, 12V lighting, storage, USB charging, and a shower. It can all be had cheaply if you are prepared to wait on postage times and install everything yourself.


When I first started this build I thought it would have a simple ending, but it seems that van building is a never-ending work in progress. It's a labour of love; all your efforts are your own reward. But I have a home that I own, I have removed my rent slavery for a while, and I feel safe, warm and secure. I am not reliant on government (not that they are going to help anyway), and I feel like I have effectively doubled my wage income. Right now I feel happy, more in control of my present and future, and less under pressure. It has all suddenly become manageable again.

I am hopeful I won't get robbed or have the van stolen, but I also know that hard times breed opportunistic thieves, and a certain demographic seems to be performing the robberies right now, which is a problem here in London. I have fitted internal locks and an alarm.


Sunday, 14 May 2017

Part 3: Low-Latency Live Wireless Flight Video on all your devices (And Live-Tracking Objects with OpenCV)


So, to explore live video streaming onto the Jetson TK1 a little further, I am now able to track objects seen from the drone camera.


This uses OpenCV, with the code written in Python, to track an orange fruit. The code can be changed depending on the colour of the object, for example a green apple.
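Here is a minimal sketch of the approach, assuming the common HSV-threshold-and-contour method (my actual script may differ in detail). The HSV bounds are rough values for an orange; swap them to track a green apple or anything else.

```python
import cv2
import numpy as np

# Rough HSV bounds for an orange fruit; change these for other colours
lower = np.array([5, 150, 150])
upper = np.array([15, 255, 255])

cap = cv2.VideoCapture(0)  # placeholder: the live drone feed goes here
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.erode(mask, None, iterations=2)   # clean up speckle noise
    mask = cv2.dilate(mask, None, iterations=2)
    # findContours returns 3 values in OpenCV 3.x, 2 in 4.x; [-2] works in both
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        # Track the largest matching blob with an enclosing circle
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```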

Here is footage of the drone


As time progresses I hope to add options for dynamically selecting which object to track.

Things I might use this for include:
- Follow object (ground object or another vehicle in flight)
- Precision landing
- Landing onto moving object
- Payload deployment
- Camera Trigger
- Beacon Launch/Landing/Waypoint
- Drone racing gate targeting




OpenCV runs much more quickly on the Jetson board than a full convolutional neural network, from which I can only get 7fps. With OpenCV I can get 30fps, again with 0.1s of latency between the drone and the Linux computer.

OpenCV is kinda neat for object tracking, but I think full-blown CNNs will take over and allow us to be more specific about which object we want to identify and track.



Thanks for reading.

Sunday, 30 April 2017

Part 2: Low-Latency Live Wireless Flight Video on all your devices (And analysing the Live flight footage with a Convolutional Neural Network)

Live Drone Stream Capture & Detection

*Updates added at the bottom of the article

Well......

Where are we at today? Some years ago, I remember clumsily mashing together my first remote-controlled drone using an Arduino and a lot of blind faith. Right now I want to say that I feel like it didn't really do much; but it did, and it does, because not only did it fly and stay in the air, it was able to follow GPS coordinates and balance itself against the wind, keeping itself in a single spot in the sky. I thought, and still think, that is quite a spectacular and amazing thing for humans to have achieved.

Of course, as time progressed, I got used to it doing all of that, and I decided to walk along the path a little more and try to educate myself in autonomous aviation. I learned to build gimbals, and to photogrammetrically map routes that I had plotted my flying Arduino to fly. I experimented with sensors, writing code to teach me about many things, from sonar sensing to radiation activity levels. I even learned how to reverse engineer 32-bit microcontrollers. I discovered what works well, and what does not work so well at this period in time. I feel happy knowing that there are little elves in the background of life who are working on all these things and refining them to ensure that next Christmas's toys will work perfectly.

I keep going along the autonomous aviation path, but I find myself having to take more frequent breaks from it all to keep my mind (relatively) sane and healthy: I have worked on gardens, and deviated towards electric ebike building. But I have always felt happy knowing that I will return to autonomous aviation and pick it up again.

For a while now I have found myself leaning towards wireless video communication. Trying to understand the complexities that prevent and enable fast wireless video feeds with low latency and good connectivity is a big task. There are people out there who each have differing ways of making this happen, with varying approaches, some with shortcomings and some with better implementations than others. I think an open standard will arrive eventually, but not without some internet bloodshed and flamewars along the way.

In the meantime, I recently discovered a way to get a live flight video feed on all my screen devices, which has enabled me to move forward quite comfortably towards new experiments, such as live streaming my flight footage on YouTube. This weekend I have been teaching myself how to run Convolutional Neural Networks (CNNs) against my home-built live flight stream.

They seem to be becoming this year's buzzwords: Neural Networks, Machine Learning, Artificial Intelligence, Deep Dreaming, Deep Learning, Training.

There are people out there desperate to champion themselves as experts or leaders of this emerging field, but it is quite contradictory, isn't it: in such a fast-evolving area of study, how can anyone be an expert? We're all finding our way through the dark; it's just that some are adding "I am finding my way through the dark way better than you" to their resumes. I have lots of respect for some inspiring people out there on the internet who do wonderful things, and they allow me to judge the others better.

So, for my forays into Convolutional Neural Networks (CNNs), I have thought about some things to get me up and running as best I can from the get-go. It helps if you can address the following:

1) Hardware - Is it a good idea to use something cheap and less powerful, or something mighty but more expensive? Does it have to be 32-bit or 64-bit? Is it possible to use cloud processing over an internet connection, or does it have to be processed on the hardware locally/offline? Do your chosen peripheral devices work with the hardware?

2) Software - Is it at a stage where it is functional, and also is it functional on your chosen hardware? Will it compile without a blizzard of errors?

3) Time - Are you making good use of it, or are you wasting it trying to force your way to a result? Is it worth abandoning one hardware/software combination as a lost cause in order to try another setup which might bring better success?

Two systems I decided to use are my Raspberry Pi 2 and my Nvidia Jetson TK1. Both have limitations, and a fair amount of compromise is required to get a CNN running on either device. I chose my Jetson as the first and obvious place to begin.

CNN on Jetson TK1

What am I doing?
- On my Nvidia Jetson TK1 Ubuntu board, I am running two shell scripts. The first one captures a photo image from the drone during its flight (by running the script manually; it is not automated or triggered yet).
- The second shell script takes that photo image and runs it through a convolutional neural network (CNN) trained to detect objects, the type of object, and the number of objects. For example, horses, and the number of horses in a field. (A rough Python equivalent of this second step is sketched below.)
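For reference, here is a rough Python sketch of the second step using pycaffe, rather than my actual shell script. The file names ('deploy.prototxt', 'model.caffemodel', 'capture.jpg') and the 'prob' output blob name are placeholders for whatever classification model is deployed.

```python
import caffe
import numpy as np

# Load the trained network in test mode
net = caffe.Net("deploy.prototxt", "model.caffemodel", caffe.TEST)

# Standard Caffe preprocessing: channels first, BGR order, mean subtraction
transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))
transformer.set_mean("data", np.array([104.0, 117.0, 123.0]))
transformer.set_raw_scale("data", 255)        # load_image gives [0,1] floats
transformer.set_channel_swap("data", (2, 1, 0))

# 'capture.jpg' stands in for the frame the first script grabbed from the drone
image = caffe.io.load_image("capture.jpg")
net.blobs["data"].data[...] = transformer.preprocess("data", image)
output = net.forward()

# 'prob' is the usual output blob name for these classification nets
probs = output["prob"][0]
print("top-5 class indices:", probs.argsort()[::-1][:5])
```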

For my experiments I knew that my TK1 board is considered an ageing device. But I believe you must "do what you can, with what you have, where you are", and for me, tackling the problems of using an ageing hardware device in a fast-evolving field is not impossible, but an exercise in self-discovery.

Using the Jetson TK1 board, I was somewhat aware of its limitations as a 32-bit board. Most CNNs are now developed for 64-bit platforms: an immediate hurdle, as one tends not to compile well with the other. Perseverance in code editing and file/folder renaming helped me solve that problem, and I got my CNN running on my Jetson.

Initially, I ran my first tests using only the TK1's CPU, due to not having configured the code properly. This meant that processing an image took up to 35 seconds.

(A drone image of horses I used with the CNN)

Setting the CNN up to run on the TK1's CUDA GPU (CUDA 6.5) is clearly much faster. Running the same image, it was processed in 0.8 seconds.
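For anyone hitting the same wall: in pycaffe, at least, the CPU/GPU switch is a one-liner, assuming your Caffe build has CUDA enabled.

```python
import caffe

caffe.set_device(0)   # the TK1 has a single CUDA device
caffe.set_mode_gpu()  # ~0.8s per image here, vs ~35s with caffe.set_mode_cpu()
```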

(CNN doing its thing)

I think I will use GPU processing exclusively.... Or so I thought.

(Killed! Gah!)
(other images used)

(Impressive detection rate: 22 objects detected; even a hidden drone detected as an aeroplane)

One of the problems I face using the TK1 with an offline CNN designed for 64-bit is the limited memory availability on the TK1 board. I constantly see 'Killed' when processing, and I believe this is due to memory limits.

My further experiments will be to try a cloud-based CNN or a 'lite' version. However, I can confirm that I am able to capture live flight images from my drone in the sky, receive them on my Jetson TK1 with around 0.1 seconds of latency, and run my CNN to process each image in under 20 seconds, giving me a set of objects identified by the CNN within that image, and it is usually correct.

My goals are:
- To capture live stream video and live process that footage, but without memory limit errors.
- To run a 'take photo' script using a particular visual object source, then process it using the CNN.
- To have the Jetson TK1 perform a command upon detection of a chosen object (for example, make LED lights flash, tweet a message, or make the drone do something); a sketch of this idea follows below.
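That third goal could be wired up something like this. It is only a hedged sketch: the detect() helper is hypothetical, standing in for whatever capture-and-classify pipeline is running, and the action assumes pymavlink talking to an ArduPilot drone over UDP.

```python
from pymavlink import mavutil

TARGET = "horse"  # the object class to react to

def detect(image_path):
    """Hypothetical stand-in: returns the list of labels the CNN found."""
    raise NotImplementedError

labels = detect("capture.jpg")
if TARGET in labels:
    # Example action: send a status message to the flight controller/GCS
    link = mavutil.mavlink_connection("udp:127.0.0.1:14550")
    link.wait_heartbeat()
    link.mav.statustext_send(mavutil.mavlink.MAV_SEVERITY_INFO,
                             b"Target object detected")
```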

Uses for this other than 'Military or Law Enforcement': talking with friends about this stuff, they laugh and call me a military facilitator. But the military probably already use CNNs to help them detect threats, make decisions without waking up the General, and detect civilians within target areas so as to avoid mistakes. But what other uses could this have in normal life?
- Mountain Rescue: Detecting people in Avalanche areas, or lost hill walkers.
- In London, counting the number of houses in an area that use coal fires (heat/smoke identification and counting)
- Counting the number of diesel vehicles in poor condition (billowing excessive smoke)
- Detecting White Rhino Numbers in Chitwan National Park
- The number of tents in a field (Festival safety perhaps)
- Bird Nesting types & numbers

CNN on Raspberry Pi 2

We're not there yet :D

Update:

I have a live video neural network running on the TK1


As a compromise of intelligence for performance, to allow me to get it running, it thinks horses are sheep:

It's now officially an idiot. But we are up and running, analysing live footage at 6.2fps.

Here is video footage of it thinking a banana is a carrot

Next up, live flight footage from the drone!

Monday, 24 April 2017

Part 1: Low-Latency Live Wireless Flight Video on all your devices (And YouTube Live Streaming it)



Today I discovered that I am able to have super low-latency wireless video on all my drone devices, and I'm super happy with that. What this means is:

I can have video on my laptop with Ardupilot's Mission Planner:


I can have video on my Nvidia Jetson:

Also, video on Android using Tower app:


I am particularly happy that I can capture fast wireless video on the Jetson board. I think this enables a host of opportunities to integrate with AI experiments from the ground, ensuring your code runs before mounting anything on the drone. I like it. I like my mini 210mm ArduPilot drone too. Such a good tiny thing to experiment with, without fear of breaking it.

I like that I can use both Tower and Mission Planner to provide both video and telemetry; my range tests have been around 1km. I can effectively launch, view and control using my tiny smartphone alone. Cool.

I can even capture on smartphone, laptop, and Jetson all at the same time, so I can launch and monitor using the smartphone or laptop, and have the Jetson AI experiments running separately so as not to affect flight control should my Jetson code crash. Good stuff! Now I'm fairly sure that DJI won't let you do all that... thanks to the open source community.


YouTube Live Streaming

So what now? One thought I had was to learn about live streaming my live flight footage. There are many options and many platforms for this - Livestream.com, Ustream.tv, and lots of others listed here, and many more I'm sure, as well as social media platforms which now allow live streaming. There are literally gazillions of services out there.

Sticking with just one service for my experiments, and making a snap choice, I went with YouTube Live to see if I could stream from my many devices. Here's how it worked out.

Android Smartphone

I used the YouTube Gaming app on my Android device to stream my live drone footage. It is probably the smoothest path to successful streaming of all my tested devices. Setting up is easy, streaming works, and the compactness of a smartphone means you're not lugging around heavy equipment with wires trailing behind you in the dirt. I suspect this is the way forward for most people.

Laptop

I chose OBSproject.com as my first-choice software for this test, mostly because it is free and has no nag screens or watermarks. I have tried other software in the past, such as Wirecast and VidBlaster. I found Wirecast to be the least resource-heavy software, that is, the one which made my laptop run cooler than the others. VidBlaster is feature-rich but resource-heavy.
OBS was a new one to me, so I tried it out. It works fine, and setup for YouTube live streaming on my laptop is easy. I approve of it. Maybe I will retest Wirecast and VidBlaster for live streaming my live drone footage in future to compare. There is other software listed here to try too.

Jetson TK1

Pushing the envelope a little, I decided to try live streaming to YouTube Live using my Nvidia Jetson TK1 board, which runs 32-bit Ubuntu 14.04.x. It wasn't successful. Primarily, this was because the go-to Linux streaming and encoding software, ffmpeg.org, does not come pre-installed on the Jetson, and when I tried to compile it, I was flooded with errors, so I gave up on it after a day. Fail. There is also a Linux version of OBSproject which I wanted to install, but alas, it depends on ffmpeg being installed first. Fuck it.

Raspberry Pi 2

I decided to give my Raspberry Pi 2 a go instead of one of my Pi Zero W boards, mostly because the Pi 2 is more powerful than the Pi Zero W. I was more successful with my installs than with the Jetson board: I compiled and installed easily, learned the appropriate commands, set them up as .sh scripts, and got the live footage to successfully stream/encode to YouTube's servers.
However, on my laptop, waiting for the stream to show up, I got the initial green dot of success and the good message 'Stream Starting', and this looked good... I waited. It didn't show up. I considered it a fail... but such a close success.
Reasons for failure? I don't know. I tried many different encoding and script variations for ffmpeg, but it seemed to fail either at the YouTube server end or (guessing) at my router blocking it. Why this is, I do not know. It seems to want to work, it gets YouTube Live to initialise the stream, but the video never appears. Fuck. I spent so many hours getting the Raspberry Pi and ffmpeg running that it caused my brain to melt.
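For reference, this is the general shape of the pipeline I was attempting, wrapped here in Python for convenience. The raspivid and ffmpeg flags are one plausible combination rather than my exact scripts, and STREAM_KEY is obviously a placeholder.

```python
import subprocess

STREAM_KEY = "xxxx-xxxx-xxxx-xxxx"  # placeholder: your key from YouTube Live
url = "rtmp://a.rtmp.youtube.com/live2/" + STREAM_KEY

# raspivid emits raw H.264 to stdout...
raspivid = subprocess.Popen(
    ["raspivid", "-o", "-", "-t", "0", "-w", "1280", "-h", "720",
     "-fps", "30", "-b", "2500000"],
    stdout=subprocess.PIPE)

# ...and ffmpeg muxes it to FLV and pushes it to YouTube's RTMP ingest.
# A silent audio track is added because YouTube expects audio; the video
# stream is copied without re-encoding to spare the Pi's CPU.
subprocess.run(
    ["ffmpeg", "-i", "-", "-f", "lavfi", "-i", "anullsrc",
     "-c:v", "copy", "-c:a", "aac", "-f", "flv", url],
    stdin=raspivid.stdout)
```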

So that is where I am at now. I recommend a laptop or smartphone for streaming your live flight footage; the Jetson and RPi are a bit of a struggle. Perhaps the Jetson TX2 will work better than the TK1, assuming the TX2 is 64-bit and running Ubuntu 16.04, which I assume will work better with ffmpeg than 14.04 LTS does on the TK1. Discovery is key.

I think the Raspberry Pi board is very close to being a successful streaming board. I might keep at it. There are still some questions I have to solve.


Thanks for reading.