Sunday 14 May 2017

Part 3: Low-Latency Live Wireless Flight Video on all your devices (And Live-Tracking Objects with OpenCV)


So, exploring a little further with live video streaming onto the Jetson TK1, I am now able to track objects seen by the drone camera.


This uses OpenCV, with the code written as a Python script, to track an orange fruit. The code can be changed depending on the colour of the object, for example a green apple.
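
For anyone curious, here is a minimal sketch of that kind of colour-threshold tracker. It is not my exact script - the capture source and the HSV bounds are assumptions you would tune for your own camera and target colour:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # 0 = local webcam; swap in your drone stream URL

# Rough HSV range for an orange fruit; change these bounds for another
# colour (a green apple would need green bounds, roughly H 35-85).
LOWER = np.array([5, 150, 150])
UPPER = np.array([15, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # [-2] keeps this working across OpenCV 2.x/3.x/4.x return signatures.
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if cnts:
        # Track the largest blob of the target colour.
        c = max(cnts, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 10:
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()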

Here is footage of the drone:


As time progresses, I hope to add options for dynamically selecting which object to track.

Things I might use this for include:
- Follow object (ground object or another vehicle in flight)
- Precision landing
- Landing onto moving object
- Payload deployment
- Camera Trigger
- Beacon Launch/Landing/Waypoint
- Drone racing gate targeting




OpenCV runs much more quickly on the Jetson board than a full Convolutional Neural Network, from which I can only get 7fps. With OpenCV I can get 30fps, again with 0.1s latency between the drone and the Linux computer.

OpenCV is kinda neat as an object tracking feature, but I think full-blown CNNs will take over and allow us to be more specific about which object we want to identify and track.



Thanks for reading.

Sunday 30 April 2017

Part 2: Low-Latency Live Wireless Flight Video on all your devices (And analysing the Live flight footage with a Convolutional Neural Network)

Live Drone Stream Capture & Detection

*Updates added at the bottom of the article

Well......

Where are we at today? Some years ago, I remember clumsily mashing together my first remote controlled drone using an Arduino and a lot of blind faith. Right now I want to say that I feel like it didn't really do much; but it did, and it does, because not only did it fly and stay in the air, it was able to follow GPS co-ordinates and balance itself against the wind, keeping itself in a single spot in the sky. I thought, and still do think, that is quite a spectacular and amazing thing for humans to have achieved.
Of course, as time progressed, I got used to it doing all of that, and I decided to walk along the path a little more and try to educate myself in autonomous aviation. I learned to build gimbals, and to photogrammetrically map routes that I had plotted for my flying Arduino. I experimented with sensors, writing code to teach me about many things, from sonar sensing to radiation activity levels. I even learned how to reverse engineer 32-bit microcontrollers. I discovered what does work well, and what does not work so well at-this-period-in-time. I feel happy knowing that there are little elves in the background of life who are working on all these things and refining it all to ensure that next Christmas' toys will work perfectly.
I keep going along the autonomous aviation path, but I find myself having to take more frequent breaks from it all to keep my mind (relatively) sane and healthy - I have worked on gardens, and deviated towards ebike building. But I have always felt happy knowing that I will return to autonomous aviation and pick it up again.
For a while I have found myself leaning towards wireless video communication. Trying to understand the complexities preventing and enabling us to have fast wireless video feeds with low latency and good connectivity is a big task. There are people here who each have differing ways of making this happen, with varying approaches, each with its own shortcomings and strengths. I think an open standard will arrive eventually, but not without some internet bloodshed and flamewars along the way.

In the meantime, I recently discovered a way to get a live flight video feed on all my screen devices, which has enabled me to move forward quite comfortably towards new experiments such as live streaming my flight footage on YouTube, and this weekend I have been teaching myself how to run Convolutional Neural Networks (or CNNs) with my homebuilt live flight stream.

These seem to be becoming this year's buzzwords - Neural Networks, Machine Learning, Artificial Intelligence, Deep Dreaming, Deep Learning, Training.

There are people out there desperate to champion themselves as an expert or leader of this emerging field, but it is quite contradictory, isn't it: in such a fast-evolving area of study, how can anyone be an expert? We're all finding our way through the dark, it's just that some are adding "I am finding my way through the dark way better than you" to their resume. I have lots of respect for some inspiring people out there on the internet who do wonderful things, and it gives me a better yardstick for judging the rest.

So, for my forays into Convolutional Neural Networks (CNNs), I have thought about some things to get me up and running as best I can from the get-go. It helps if you can address the following:

1) Hardware - Is it a good idea to use something cheap and less powerful, or something mighty but more expensive? Does it have to be 32-bit or 64-bit? Is it possible to use cloud processing over an internet connection, or does it have to be processed on the hardware locally/offline? Do your chosen peripheral devices work with the hardware?

2) Software - Is it at a stage where it is functional, and also is it functional on your chosen hardware? Will it compile without a blizzard of errors?

3) Time - Are you making good use of it, or are you wasting it trying to force your way to a result? Is it worth abandoning one hardware/software combination as a lost cause in order to try another setup which might bring better success?

Two systems I decided to use are my Raspberry Pi 2 and my Nvidia Jetson TK1. Both have limitations, and a fair amount of compromise is required to get a CNN running on either device. I chose my Jetson as the first and obvious choice to begin with.

CNN on Jetson TK1

What am I doing?
-On my Nvidia Jetson TK1 Ubuntu board, I am running two shell scripts. The first one captures a photo image from the drone during its flight (by running the script manually - not automated or triggered yet).
-The second shell script takes that photo image and runs it through a convolutional neural network (CNN) trained to detect objects, the type of object, and the number of objects - for example, horses, and the number of horses in a field. A rough sketch of the pipeline is below.
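
Translated into a single Python script for illustration, it looks something like this - the stream address and the detector command are assumptions, so substitute whatever CNN binary you have built on your own board:

import subprocess
import cv2

STREAM = "udp://0.0.0.0:5600"           # assumed address of the drone video feed
SNAPSHOT = "/tmp/drone_frame.jpg"
DETECT_CMD = ["./detector", SNAPSHOT]   # hypothetical CNN invocation

# Grab a single frame from the live stream and save it to disk.
cap = cv2.VideoCapture(STREAM)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite(SNAPSHOT, frame)
    # Run the CNN over the snapshot and print whatever it reports
    # (object classes and counts, e.g. "horse x 3").
    print(subprocess.check_output(DETECT_CMD).decode())
else:
    print("could not read a frame from the stream")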

For my experiments, I knew that my TK1 board is considered an ageing device. But I believe you must "do what you can, with what you have, where you are", and for me, tackling the problems of using an ageing hardware device in a fast-evolving field is not impossible, but an exercise in self-discovery.

Using the Jetson TK1 board, I was somewhat aware of its limitations as a 32-bit board. Most CNNs are now developed for 64-bit platforms - an immediate hurdle, as one tends not to compile well with the other. Perseverance in some code editing and file/folder renaming helped me to solve that problem, and I got my CNN running on my Jetson.

I ran my first tests using only the TK1's CPU, due to not having configured the code properly. This meant that processing an image took up to 35 seconds.

(A drone image of horses I used with CNN..)

Setting the CNN up to run on the TK1's CUDA GPU (CUDA 6.5) is clearly much faster: the same image was processed in 0.8 seconds.

(CNN doing its thing)

I think I will use GPU processing exclusively.... Or so I thought.

(Killed! Gah!)
(other images used)

(Impressive detection rate: 22 Objects detected, Even a hidden Drone detected as aeroplane)

One of the problems I face using the TK1 with an offline CNN designed for 64-bit is the limited memory availability on the TK1 board. I constantly see 'Killed' during processing, and I believe this is due to memory limits.

My further experiments will be to try a cloud-based CNN or a 'Lite' version. However, I can confirm that I am able to capture live flight images from my drone in the sky on my Jetson TK1 with less than 0.1 seconds latency, and run my CNN to process each image in under 20 seconds, giving me a set of objects identified by the CNN within that image - and it is usually correct.

My goals are:
- To capture live stream video and live process that footage, but without memory limit errors.
- To run a 'take photo' script using a particular visual object source, then process it using the CNN.
- To have the Jetson TK1 perform a command upon detection of a chosen object (for example, make LED lights flash, send a tweet, or make the drone do something) - a sketch of this follows below.
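
For that last goal, the logic is simple enough to sketch now. Again, the detector invocation and the triggered command are placeholders rather than my actual scripts - the action could be an LED script, a tweet, or a MAVLink command:

import subprocess

TARGET = "horse"                  # the object class we want to react to
ON_DETECT = ["./flash_leds.sh"]   # hypothetical action script

# Ask the (hypothetical) detector what it sees in the latest snapshot,
# and fire the action if our target label appears in its output.
labels = subprocess.check_output(["./detector", "/tmp/drone_frame.jpg"]).decode()
if TARGET in labels.lower():
    subprocess.call(ON_DETECT)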

Uses for this other than 'Military or Law Enforcement' - Talking with friends about this stuff, they laugh and call me a military facilitator. But militaries probably already use CNNs to help them detect threats, make decisions without waking up the General, and detect civilians within target areas so as to avoid mistakes. But what other uses could this have for normal life?
- Mountain Rescue: Detecting people in Avalanche areas, or lost hill walkers.
- In London it could be used to count the number of houses in an area that use coal fires (heat/smoke identification & counting).
- The number of Diesel vehicles which are in poor condition (billowing excessive smoke)
- Detecting one-horned rhino numbers in Chitwan National Park
- The number of tents in a field (Festival safety perhaps)
- Bird Nesting types & numbers

CNN on Raspberry Pi 2

We're not there yet :D

Update:

I now have a live-video neural network running on the TK1.


As a compromise between intelligence and performance, to get it running at all, it now thinks horses are sheep:

It's now officially an idiot. But we are up and running, analysing live footage at 6.2fps.

Here is video footage of it thinking a banana is a carrot:

Next up, live flight footage from the drone!

Monday 24 April 2017

Part 1: Low-Latency Live Wireless Flight Video on all your devices (And YouTube Live Streaming it)



Today I discovered that I am able to have super low-latency wireless video on all my drone devices, and I'm super-happy with that. What this means is:

I can have video on my laptop with Ardupilot's Mission Planner:


I can have video on my Nvidia Jetson:

Also, video on Android using Tower app:


I am particularly happy that I can capture fast wireless video on the Jetson board. I think this enables a host of opportunities for AI experiments, which can be tested from the ground to make sure your code runs before mounting anything on the drone. I like it. I like my mini 210mm-size Ardupilot drone too. Such a good tiny thing to experiment with, without fear of breaking it.
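
For anyone trying the same, here is a minimal sketch of one common way to view such a feed on a Linux box. It assumes the link arrives as RTP-wrapped H.264 over UDP - the port and payload type are assumptions, not necessarily what your own setup uses:

import subprocess

# Build a GStreamer pipeline that receives RTP/H.264 on UDP port 5600,
# depayloads and decodes it, and displays it with sync disabled to keep
# latency down. Adjust the port and caps to match your own video link.
subprocess.call([
    "gst-launch-1.0",
    "udpsrc", "port=5600",
    "!", "application/x-rtp,payload=96",
    "!", "rtph264depay",
    "!", "avdec_h264",
    "!", "autovideosink", "sync=false",
])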

I like that I can use both Tower and Mission Planner to provide video and telemetry; my range tests have been around 1km. I can effectively launch, view and control using my tiny smartphone alone. Cool.

I can even capture on smartphone, laptop, and Jetson at the same time, so I can launch and monitor using the smartphone or laptop, and have the Jetson AI experiments running separately so as not to affect flight control should my Jetson code crash. Good stuff! Now I'm fairly sure that DJI won't let you do all that.... Thanks to the open source community.


YouTube Live - Streaming.

So what now? One thought I had was to learn about live streaming my live flight footage. There are many options and many platforms to do this - Livestream.com, Ustream.tv, and lots of others listed here, and many more I'm sure, as well as the social media platforms which are now allowing live streaming. There are literally gazillions of services out there.

Sticking with just one service for my experiments, and making a snap choice, I went with YouTube Live to see if I could stream from my many devices. Here's how it worked out.

Android Smartphone

I used the YouTube Gaming app on my Android device to stream my live drone footage. It is probably the smoothest path to successful streaming of all my tested devices. Setting up is easy, streaming works, and the compactness of a smartphone means you're not lugging around heavy equipment with wires trailing behind you in the dirt. I suspect this is the way forward for most people.

Laptop

I chose to look at OBSproject.com as my first-choice software for this test, mostly because it is free and has no nag-screen or watermarks. I have tried other software in the past, such as Wirecast and VidBlaster. I found Wirecast to be the least resource-heavy software - that is, the one which made my laptop get less hot than the others. VidBlaster is feature-rich but resource-heavy.
OBS was a new one to me, so I tried it out. It works fine and setup is easy for YouTube live streaming on my laptop. I approve of it. Maybe I will retest Wirecast and VidBlaster for live streaming my live drone footage in future to compare. There is other software listed here to try too.

Jetson TK1

Pushing the envelope a little bit, I decided to try live streaming to YouTube Live using my Nvidia Jetson TK1 board, which runs Ubuntu 14.0.x and is 32-bit. It wasn't successful. Primarily, this was because the go-to Linux streaming & encoding software, ffmpeg.org, does not appear to come installed on the Jetson, and when I tried to compile it, I was flooded with errors, so I gave up on it after a day. Fail. There is also a version of OBSproject for Linux which I wanted to install, but alas, it is dependent upon ffmpeg being installed first. Fuck it.

Raspberry Pi 2

I decided to give my Raspberry Pi 2 a go, instead of one of my Pi Zero W boards, mostly because the Pi 2 is more powerful than the Pi Zero W. I was more successful with my installs than with the Jetson board - I compiled and installed easily, learned the appropriate scripts, set them up as .sh scripts to run, and got the live footage to successfully stream/cast/encode to YouTube's servers.
However, on my laptop, waiting for the stream to show up, I got the initial green dot of success and the good message 'Stream Starting', and this looked good.... I waited. It didn't show up. I considered it a fail.... but such a close success.
Reasons for failure? I don't know. I tried many different encoding and script variations for ffmpeg, but it seemed to fail either at the YouTube server end or (guessing) at my router blocking it. Why this is, I do not know. It seems to want to work - it gets YouTube Live to initialise the stream - but I never get the video appearing. Fuck. I spent so many hours getting the Raspberry Pi & ffmpeg running that it caused my brain to melt.
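
For reference, this is the general shape such an ffmpeg streaming script takes, wrapped in Python here for consistency - the camera device, bitrates, and stream key are placeholders, not my exact values. One known gotcha worth checking if your stream never appears: YouTube Live expects an audio track, so a silent null audio source is muxed in below:

import subprocess

STREAM_KEY = "xxxx-xxxx-xxxx-xxxx"  # your YouTube Live stream key

subprocess.call([
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",         # camera input (assumed device)
    "-f", "lavfi", "-i", "anullsrc",           # silent audio track for YouTube
    "-c:v", "libx264", "-preset", "veryfast",  # H.264 encode
    "-b:v", "1500k", "-g", "60",               # bitrate and keyframe interval
    "-c:a", "aac", "-b:a", "128k",             # encode the null audio as AAC
    "-f", "flv",
    "rtmp://a.rtmp.youtube.com/live2/" + STREAM_KEY,
])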

So that is where I am at now. I recommend a laptop or smartphone for streaming your live flight footage, but the Jetson and RPi are a bit of a struggle. Perhaps the Jetson TX2 will work better than the TK1, assuming its 64-bit Ubuntu 16.04 works better with ffmpeg than the TK1's 14.04 LTS. Discovery is key.

I think the Raspberry Pi board is very close to being a successful streaming board. I might keep at it. There are some questions I still have to solve.


Thanks for reading.

Monday 20 February 2017

Hololens

"There was also a shark floating around outside in the lobby, and I walked over to the window and looked out at the shark, and the shark swam in the air down the lobby hallway on the opposite side of the glass, and the frame of the window hid the shark as it would if a real person walked down the lobby hallway. I shot scorpions coming out of a hole in the wall, It genuinely feels like it's really there and happening. As you walk around the room everything adjusts to your position, point, location."


These are some of the ways I have been trying to describe hololens to my friends. "How was it?" they ask. And I begin trying to describe to them this experience which isn't really like anything else. It's a bit difficult: both explaining it and capturing worthy video footage are difficult. It only really makes sense once you wear a hololens headset yourself.

As can be seen in this video, placing a camera to view through the lens doesn't really capture the immersion that you feel when wearing the headset. People who don't really understand will simply dismiss hololens based on this type of footage.



I have been in London for a couple of months now, and everything is starting to settle. My multiple jobs are going fine, my life is under control, and I have started to realise that London is a good place - somewhat like a library of events, where you can google search for anything you wish to do at any given time and in all likelihood there will be an event to fit whatever it is you are hoping for. In my case, last week I found myself with some free time and I thought about filling it. Instead of searching for what is happening (What'sOn, Time Out etc), I soul-searched a little bit and thought about what I would really like to do. Space flight? Fight tigers in a cage? Hmmm. What have my technology experiences been missing? I know! Let's google search "Hololens" + "London" - and sure enough, I was excited to find that there is indeed a company who specialise in delivering hololens seminars and demos right here. So I jumped on the bus, wondered briefly if I was about to sign up for something akin to a Black Mirror episode, and headed to London's banking district.
(On my way to the privately owned enclave called: City of London)

It's interesting, the City of London. Most of us these days see it not only as the land of the rich and plenty, but more and more as a privately owned piece of land within London itself, with its own emblem, and even its own private police force. It gets weird. Mostly because the City of London doesn't answer people's questions as to how they pulled this whole thing off.
But anyway, back to hololens. Following a little bit of phone navigation, I arrived at the entrance to a really nice hotdesking office in Devonshire Square, greeted by two very knowledgeable guys from a company called Kazendi: James and Max. On first appearance they seem like every other happy, enthusiastic and knowledgeable technologist out there, still with plenty of fire, yet a bit too immersed and jaded by the technology they have been swimming deeply in for the past number of years. Technology development often gets deep like that. My first reaction is that they need to go on holiday.

Hololens.

Hololens is known as a mixed reality headset: it blends 3D graphics into what we see of the real world through the headset. It's a bit like how some people describe certain psychedelic experiences. Things appearing on walls, miniature sharks swimming down hallways, weightlifters bench-pressing on our conference tables. And indeed it is true. After experiencing a demo of hololens you find yourself questioning existence and reality deeply for a while. Out in the street, I found myself asking: "Is that pigeon in front of me real, or a graphic? How can I tell, if I can't catch it?" This is a possibility for our futures. It all gets a little bit 'Plato's Cave'. I wonder if anyone has lost their mind using hololens. The more real the games become, will the brain also have difficulty letting go once the headset is put down?

Here are some examples of hololens which, although impressive, don't give you the true immersive feeling that you experience when actually wearing the headset.


There are a couple more nice video examples here and here too.
well...

There are of course a number of reasons why I wanted to learn more about hololens. It's generally a bit vague as to what it is, and also what is under the hood. It's not that clear at first glance. You have to dig to find out. Of course I wanted to actually experience hololens, just like riding a theme park ride, that's nice and easy. But learning more about the hardware technology is more rewarding for me, and also learning about development. Where it is going and how it is done.

The technology


Hololens, it seems, is (in my eyes) a custom FPGA chip system. Microsoft is calling it an HPU. It's a Microsoft product, so they can of course call it what they like. They can make it exclusive to Microsoft Windows if they like, and they can make apps only available through their app store if they like. That's a closed environment. And that is their freedom to choose. But anyway, the FPGA/HPU thing is important. While developing drone hardware I learned some things about the value of FPGA chips over x86 or ARM chips. FPGA chips are able to handle multiple hardware peripherals much faster (think multiple web cameras, 3D depth cameras, gyroscopes, LIDAR, SONAR - all these things at once can quickly make smoke blow out of a good old x86 processor). FPGAs are seen as the solution. I see them being used more and more on DJI drones such as the Matrice and its camera sensing unit. Self-driving vehicles are using them too. DJI seem to be using Altera Cyclone V chips; Microsoft's hololens seems to be using a custom/disguised Tensilica chip, but I can't be entirely certain on that one - it's just based on observation.

*Update video above



But that's okay - open source hardware is great too (I'm an advocate). I like open source a lot, though it doesn't often put food in my belly, and earning a buck is a nice reward to have for your development work.

(Look at all that connected stuff)

Speaking to Max and James, they offered some details about the headset that I wrote down in my notebook. It has:
- 3D audio (multiple surround speakers)
- Multiple microphones (6, I believe, to help with noise reduction and isolation)
- A Bayer-filtered RGB(G) lens projector that runs at 240fps in total (60fps per colour)
- A SLAM mapping system using depth cameras and IR cameras (think Kinect/ZED/Intel RealSense camera)
Snazzy..

However, if you have any experience at all using these types of cameras, you will know and understand that there are constraints to SLAM imaging - for example, matt black surfaces seem to affect the imaging results, as does outdoor use (think of the Sun's infra-red light reducing the device's IR capability). But that is not to say these things aren't functional; it's just currently a problem when handing this type of thing to a novice consumer with high expectations - if they feel the device is not up to their requirements, they will rubbish it fairly quickly. See what is happening with drones when a novice quickly crashes/destroys their £1500 purchase! (Hint: they quickly hit the refund button and leave very sour critical reviews.) I think device technologists are learning this quickly. Developers have greater skills and acceptance of a new technology's limitations and don't really mind as such - they even enjoy it.

What all these sensors mean to us is that the hololens is able to react to world co-ordinates (GPS, gyroscope, accelerometer sensors), gaze input (our sight direction), voice input, and gesture input. I know from my time with hololens that developers want to expand the current gesture availability, and to have the choice to develop hardware peripherals. Imagine a cricket bat with haptic feedback that reacts to a hololens cricket bowler.
(Touch Gestures)

During my Demo Experience

I experienced all the great demos that are currently available. There was this weightlifter on the conference table telling me how I should press weights (far more than my body type could ever deliver). He was about 12 inches tall. I could resize him to life-size or miniature using gesture control, adjusting resize handles just like you do in Photoshop.

Drop and gimme 10 Pussy!!!! (Or I think he said that to me)

I shot some robots (RoboRaid) and scorpions which scuttled along the walls. I had a flashback moment to the horror-show spider head in Black Mirror. It might be for the best right now that Microsoft limits its developers from doing this. I have some concerns over the potential abuse of hololens technology in the future, which I won't go into too much, but imagine peddling immersive snuff movies, or interactive child pornography. I'm not wanting to be a party-pooper for hololens and its wondrous technology, but I'm fairly certain there are some nut-jobs already conceiving things like this. General, high-quality, wholesome pornography, however, is going to be quite outstanding. Imagine your all-time favorite porn star wandering around your room with a twinkle in her eye. I swear right now, in ten years' time you're going to wish you were 16 again.
(Zap!!!)

Developing for Hololens


How might I, as a non-hololens owner, develop software for it? That's a decent question to ask. Microsoft has, in all fairness, released its SDK (Software Development Kit) freely to anyone who wishes to begin developing for hololens, regardless of whether or not you own a unit. You can download the SDK here. The SDK also has pre-requisite minimum requirements for a usable developer system, which can be read here. You will also need to consider using the Unity game engine to help you make nice graphics, though I am wondering if the new Qt 3D Studio will do the job too.

There is also a nice developers page linking to forums, samples, codes, and tools here.

If you are not in London, you can find hololens demo locations here: International Demo
However, if you are in London and wish to give this experience a try (which I think you should, just for an understanding of where the future is leading), I suggest you give these guys a call, or take a look here for more details on the London Demo. It makes a change from the Opera, and you will learn much more about software development too.

Don't say I didn't warn you!

 Replacing James the demo lead with Saturn via the pinch-and-move gesture

How to gesture, not catching invisible grapefruit.
Thanks for reading.

Monday 23 January 2017

Grand Theft Deliveroo


The more I ride my ebike on my Deliveroo shifts, the more it feels like I am riding the BMX from Grand Theft Auto: San Andreas, just like back in the day.

Deliveroo and GTA share similar goals: you ride from point to point across the city, picking up and dropping off goods, following the little map in the corner of the screen. Last night I officially started to merge the two in my dreams, waking up to each new day with thoughts that I am Carl 'CJ' Johnson. Is there such a thing as Deliveroo addiction? The challenge of the game? Levelling up, scoring points, getting there.

One major difference between me and the other Deliveroo riders in my area is that I am electric. I see others pushing their bikes up hills with their big pizza bags on their backs while I am freewheeling up at 18mph. Whereas the push-bike riders are exhausted and averaging barely two deliveries an hour in my area, I am averaging four. If everything from A to B goes smoothly and without delay, I think I can average five - but this delay thing is starting to gripe me. You see, I'm not getting exhausted at all on my shifts, but I am getting cold. It is just like riding a petrol scooter (a bit colder due to the lack of wind protection) but without the added costs of tax, annual MOT, insurance, petrol refills, a driving test, or a licence - and with zero emissions too, so my carbon footprint is tiny. I don't feel very much difference from when I set off to begin a period of work to when I finish; I just feel colder and hungrier. But I like it this way. I like that I could begin another shift if I wished to without feeling totally exhausted.

And then there are the annoying delays - ones which I have no control over. One or two restaurants I arrive at to collect from always, always make me wait 10-15 minutes before I get handed the order, and this is starting to annoy me. It is every time. And they do not care. They do not care that I am being paid per delivery and that each delay they serve me effectively loses me a payment. All I get is barely a weak waiter's apology. I am learning which places are fast and which are slow to hand over, but the app I use does not officially allow me to decline a restaurant order. The app presents an order to me, and I have to swipe to accept it. If I sit and wait for three minutes, the app (in a roundabout way) tells me I am slow and my performance is slipping, and then it cancels that order and makes me wait ten minutes as punishment. Either way, I lose 10-15 minutes waiting. I wish I could do something about that. I am also noticing how restaurants and waitresses look at me like I am lower in the social hierarchy. I get given commands, whereas as a customer I get treated royally.

I do enjoy seeing such a wide array of classy restaurants though. I could probably write a guide book. I personally like the Thai restaurants in Streatham and Crystal Palace - the waitresses are really beautiful. I also like some others for their nice atmosphere, but I'm not promoting them right now. Why should I?

So how does one become a Deliveroo rider?

Well, you apply online first of all. You submit an application, then you get a phone call from them briefly discussing what bike you ride and which areas you cover, and setting a time for you to visit one of their offices (I was told there are three in London at the moment, but they are expanding).

At a later date, you ride your bike to their office, lock it up, and go inside. In the office nobody is over 23 years old. You will feel like a pensioner. They will look at you with confusion. You will then hand over your passport to be copied, and a utility bill showing your home address as proof of eligibility to work. Once they have copied these, you will meet a 21-year-old proficiency test rider who will take you on a short proficiency test. While the proficiency test is basically impossible to fail, presenting your ghetto ebike will make the 21-year-old nervous, almost failing you out of his own fear of assigned obligations. I also feel that you might meet the Deliveroo mafia rider crew, who don't want ebikes taking their orders away from them. You are all effectively fighting for orders, and the greater the number of riders, the fewer orders you will get. Put it this way: if you had the keys and were hired to pass or fail new riders, wouldn't you also be smart about who you allow to pass? It is surprising how easy it is to create a fault in somebody if you want to.

But I passed, after a little bit of hassle and questioning about my ebike.

What caught my eye a little about my assessor was that he explicitly told me that I could not ride on pay-per-delivery terms. When I asked why, he (somewhat desperately) claimed "It is just too popular" and "there are too many doing it". My bullshit-o-meter wasn't falling for it, and indeed, when I asked at the office later on, there was no problem doing this in my area. The staff showed me a London borough zone map with green-highlighted areas where they allow pay-per-delivery, and areas where they allow only hourly pay. I attempted to take a photo of it on my phone, but was told off and the map was snatched from me. But basically, in zone 1 boroughs there is no pay-per-delivery, and the outer surrounding areas (Brixton, Clapham, Tottenham etc) are green-highlighted. So, if you're thinking about earning megabucks in Chelsea and Kensington, then think twice, as it is pay-per-hour only according to the map I was shown. Maybe they have different maps? Maybe they tricked me into agreeing to certain areas? It seems there is a lot of sleight of hand to get you onto the path they want you on. Anyway. Onwards.

After passing this mafia in-the-club test, you then go back to the office and sit at a computer to complete a very large number of safety and awareness (think liability) videos and click-screen tests. How to stop at a junction, how to handle food, how to manage problems. It goes on for hours, but you're alone and unsupervised, so you can eat a sandwich too. You can't really fail these tests, as they allow you to redo them until you get the required score. Sometimes I didn't even watch the videos; I just clicked with common sense to get through them more quickly.

Once these tests are done, almost four or five hours have passed, and you are led to the gear room where you are provided with a selection of Deliveroo-branded goods for your ride. You can choose either a backpack or a pannier rack box. I would choose the rack box every time, based on my own happy personal experience bike touring with panniers. Not having to carry the actual bag on your back is much freer and more comfortable to ride with. The gear rep commended me on my wise choice; he is baffled as to why most kids choose the backpack. There is a video of the gear here.




Other gear you get includes a jacket, a couple of shirts, a phone holder, this Antec powerbank (which is quite good - you need a powerbank just like you need one when playing Pokemon Go!, as the app saps your battery and phone data), LED lights, and thermal food bags. The company also makes you (flexibly) pay for these items by deducting an amount from your bi-weekly pay until you have paid them £150 in cover. This cover is refundable when you return the items, should you hang up your boots. Items are also freely replaceable if they get damaged, or if upgrades are offered. It's really not that bad, to be fair. They say they will not take large amounts of your pay, so that you still have some take-home pay at the end of the week.

Once you have all the gear, you are texted a link to download the official Deliveroo rider app, which is not available on any app store. The app is called 'Driveroo' (an .apk file), and you can't log in until your first shift a few days later. They have the power to control your login. The app looks like it does in this video here.

Deliveroo also makes you do two paid 'trial' shifts - two four-hour shifts, the first one designated by them, the other of your own choosing. After this, you just log in when you want to, and at the end of each shift you are emailed a summary of your day's order tally.

Out on your bike, you wait in a designated area of town (close to you) until the app offers up a job. You swipe to accept it, then use Google navigation to guide you there. I use a bluetooth headset to provide voice directions, which I really like - I barely look at the map screen mounted on my handlebars.

Busy days provide lots of jobs; quiet days, few jobs. And that's right - common sense tells us that weekends are busy and Monday to Thursday is quiet. At least in my zone. I expect Brixton or Soho is busy all week round.

Out on my ebike, it feels easy. I arrive at and leave locations without fearing hills or exhaustion. I am easily faster than every other cyclist on the road. I especially enjoy racing past the rich city guy on the carbon Pinarello Dogma fixed-wheel; sometimes they try to keep up, but they eventually fail. My gripes out on the road are the winter cold - I basically have to wear ski gear - and the car fumes/pollution: some days it really hurts your lungs, making them feel raw the next day. I think London's pollution levels are worse than Delhi's or Kathmandu's.

My other gripe is Deliveroo's network error outages. Almost every night between 8 and 9:30pm there is a server outage in London. Deliveroo then sends you an automated text message confirming this and telling you to sit and wait. They then send you another message to tell you that they will 'compensate' you with two gift deliveries per hour of outage. But that is still not good for me, as I am aiming for five an hour, so I effectively lose out because of their problem and there is nothing I can do. If I have just collected a multi-order at the moment of a server outage, I am stuck with those orders in my box getting cold until the system comes back online, because I am unable to get the delivery location. Once, I had to phone the customer (once back online) and apologise that I was unable to deliver their order: it had gone cold, and they should contact Deliveroo. I also tried to call Deliveroo rider support, but no answer, of course. I gave the cold food order to a homeless man and called it quits on my shift that night. I am still waiting to see if Deliveroo are going to fine me for the impossible situation they set me. I have never been given a set of instructions for what to do when the server is down; I had to improvise (and also foot the phone call costs). I wonder what causes their server outages. Is it a DDoS attack? Or just not paying for the server upgrade package to handle peak times? I guess only they know, but the outages are almost to-the-minute accurate from one day to the next, which might suggest an automated DDoS attack.
And so, here we are, a few weeks in, and I'm starting to have dreams that I am a walking, living, in-game player of Grand Theft Deliveroo.

Sunday 15 January 2017

Ghetto London Life


Here I am, still around, looking at and reading about all the wonderful things that are happening in the drone and robotics world. It's great. But for me, life is a little hectic, and I have moved from the north of England to the big city that is London - to a world of gastro pubs, city working bastards, and Tinder dating. Everything that I hold dear to me (SOC boards, soldering irons, my development desktop setup) has had to be put away in a box until I have found space and a little bit of settled life here in the city. Career comes first, and I am just not getting the time to study or fly my drones. Also, I can't just go into the park here and fly - many parks have no-flying notices. Bummer. So, why don't I just find a career in tech in London? (Clue: hardware tech companies do not exist in London or Europe; it is all Shenzhen and San Fran.) And also because it is elite: I won't be offered the positions that usually go to Cambridge graduates, or internal offerings, regardless of my abilities. I'm just not in the pecking order of things. *sniff-sniff*. Some say the only careers that matter in London are banking, oil and property. The rest of us can go eat a shit sandwich :D The UK does not fare too well as a country of innovation according to Bloomberg.

But! I have adapted. I have created a trading company in London which offers SOC boards and parts to the area, and it allows me to still have a little bit of a connection to dev boards and parts. I have also taken my ebike with me to the city, and it's really awesome to have here. In a month, I have ridden the bus only once, and I have not even ridden the tube. I am faster than the bus, and I can cover enough mileage to comfortably ride across the whole of Greater London without breaking a sweat. I make myself visible and ride with confidence, and I also like that I have an ebike throttle to give me a little bit of power if I need to move away from a threatening vehicle to a safer area. London now has the Cycle Superhighways network, which is really great. It separates bicycles from motor vehicles, and can be seen in the city as a network of blue pathways and mini roads. It's great, and I think it is going to get greater as time progresses.







London would be mad not to invest in ebike technology and push the motor vehicle out - and quickly. But I can't help but suspect that backhand payments and bungs are going on between Boris and the Saudis to maintain dependency upon the motor car for as long as possible. I wish I was a fly on the wall there! But on the whole I am enjoying not relying on TFL services, and I am saving a whole lot of money to boot. I reckon I am saving up to £10/$12 per day, so that's saving me, ahhh, £300/$360 per month.

You know what is not so cool about London? Primark on Oxford Street. That place is pure Sodom and Gomorrah bedlam.

So, how to utilise my ebike in London more effectively by making a little money from it? Well.....


Deliveroo!


I signed up to be a Deliveroo courier a couple of weeks ago because I learned that I can use my ebike with them, earn per delivery, and choose my own hours too - still being my own boss, in effect. I will write more about this idea, the application, and the process soon, in more detail, as I would like to share this information with everybody so they can get an understanding of the A-Z process, the perks, the pitfalls, and whether I think it is worth it. So please hang in there for my Super-Deliveroo-Ghetto-Ebike-Guide. I think it will be worth it, as this whole Deliveroo thing seems to be quite a secret world, shrouded in myth and web-truths. I will give my experience in complete truth; you will just have to wait a little while.




I do get some nice kit from them though.

If you happen to be in Brixton/Herne Hill at all and want to say hello to me, you might see me zipping by like this.

Thanks for reading.