As many of you know, or perhaps not (due to geographic reasons), parts of the world witnessed a solar eclipse – a rare phenomenon in which the sun is completely obscured by the moon. It's important to note that in this region of the world (Scotland) we witnessed approximately 95%-98% coverage.
As seen from the Earth, a solar eclipse is a type of eclipse that occurs when the Moon passes between the Sun and Earth, and the Moon fully or partially blocks (“occults”) the Sun. This can happen only at new moon, when the Sun and the Moon are in conjunction as seen from Earth in an alignment referred to as syzygy. In a total eclipse, the disk of the Sun is fully obscured by the Moon. In partial and annular eclipses, only part of the Sun is obscured.
So, I read a couple of blogs on how to take photos of this great astronomical event, and spoke with some friends (Murray and Gleb) who have great photography knowledge.
Then I bought 4 filters: 3 ND (neutral density) and a UV one. For those wondering about ND: a neutral density filter or ND filter is a filter that reduces or modifies the intensity of all wavelengths or colors of light equally, giving no change in hue or color rendition. It can be a colorless (clear) or grey filter.
So, at 8:30am I started walking towards Salisbury Crags in Holyrood Park, Edinburgh, to try to get some nice shots… The results are below, please enjoy. (Don’t forget to watch the video at the end…)
This is about some experiments I’m doing using Python and OpenCV and, of course, one of my drones… In this case the drone used for taking the videos was my hexacopter (AlduxHexa).
The point is to explore different techniques and algorithms to track colours and objects, and then follow them using a multicopter.
In the next video you will see my hexacopter flying with a GoPro always pointing down (towards the ground). I want to be able to identify colours on the ground.
OpenCV is a library of programming functions mainly aimed at real-time computer vision.
I’m using an HSV (hue, saturation, value) range of “whites” and then I use the function “inRange” to get a region of the possible colour; if this region is large enough, I mark it and can say the colour has been found. The explanation could be more complex than what I just described, but it’s better to keep things simple.
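To make that concrete, here is a minimal sketch of the idea (not the exact script from my GitHub): the HSV range for “white”, the minimum blob area and the video filename below are just illustrative values.

[python title="Sketch: finding a white region in a frame with inRange"]
import cv2
import numpy as np

# Illustrative "white" range in HSV (low saturation, high value) and a
# minimum blob area in pixels; tune these for the actual footage.
LOWER_WHITE = np.array([0, 0, 200], dtype="uint8")
UPPER_WHITE = np.array([180, 40, 255], dtype="uint8")
MIN_AREA = 500

cap = cv2.VideoCapture("gopro_flight.mp4")  # hypothetical GoPro video file

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Convert to HSV and keep only the pixels inside the "white" range
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_WHITE, UPPER_WHITE)

    # Get the candidate regions and keep the largest one
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)[-2:]
    if contours:
        largest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(largest) > MIN_AREA:
            # Region is large enough: mark it and report the colour as found
            x, y, w, h = cv2.boundingRect(largest)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            print("colour found at (%d, %d)" % (x + w // 2, y + h // 2))

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
[/python]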
In the code on my GitHub you can see other examples, like face tracking, motion detection and even object detection; be sure to check them out!
AlduxHexa with a GoPro
Thanks to Murray for the above awesome photo of my hexacopter.
Quick video showing how I teach an object to the Pixy cam…
Pixy is the current CMUcam, version 5… I bought one a long time ago, maybe version 3… but I never had the time to use it. This one is easy to use (once you understand it) and has very nice computer vision stuff already embedded…
This version of the CMUcam has an NXP LPC4330 dual-core processor running at 204 MHz, actually very meaty 😛 and an Omnivision OV9715 1/4″ 1280×800 image sensor.
The most important part is that they say it processes an image every 20 milliseconds… so Pixy processes an entire 640×400 image frame every 1/50th of a second: 50 frames per second. This could work perfectly onboard one of my vehicles, right?? Keep posted 😉
This was the object being taught to the Pixy:
iPhone plastic speaker
And this was the result:
Actually very, very fast… it’s important to notice how it keeps tracking the object regardless of its orientation.
Tania asked me this Sunday morning if I was able to make a “better” sound out of a .wav file for a project in a course… So, I started googling and found different software packages that do this kind of stuff, but you need to pay for them and they are more the “professional” kind of software. Not really what we wanted here.
So, I carried on googling, adding the magic word “python” 😉 until I found a great music synthesizer made in Python by a guy called Martin C. Doege; the sauce is here.
It’s important to note that this wav file comes from a 3D model… so, .stl file to .wav (via importing the raw data in Audacity).
This is the original sound file (beware: it sounds horrible!!)
When I tested the examples, I was able to hear the classic synthesizer sound, which I really love. I remembered my padrisinimo Juan, who introduced me to the music of Wendy Carlos, which is awesome both in sound and in complexity. For example, the album Switched-On Bach was Carlos’ breakthrough album, one of the first to draw attention to the synthesizer as a genuine musical instrument, and it took 4 years to complete!!!! Listen to this mix.
Soooo… the basic idea was to read the wav file and get the data, which is an array of numbers with lots of precision. In that particular wav file (the one Tania gave me), the values range from -1 to 1, and it was 1,519,616 samples long. So, the idea was to map those values, change them to notes, and then create a synthesizer sound file from it.
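Just to make the mapping concrete, here is a tiny sketch (the numbers are illustrative; the full script below does the same thing on the whole array at once):

[python title="Sketch: mapping a wav sample onto a piano key"]
# Map a wav sample in [-1, 1] onto a piano key number in [1, 88],
# using the same range-mapping formula as the full script below.
def map_range(x, in_min, in_max, out_min, out_max):
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

print(map_range(-1.0, -1, 1, 1, 88))  # 1.0  -> lowest piano key
print(map_range(0.0, -1, 1, 1, 88))   # 44.0 -> around the middle of the keyboard
print(map_range(1.0, -1, 1, 1, 88))   # 88.0 -> highest piano key
[/python]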
Of course there is more music science behind it, but, not being a music scientist, I fixed the octaves, and with that the timing changed… the final version sounds like this:
And that’s how a 3D part sounds!!!!
Making sounds with objects.
The code is here:
[python title=”Code to transform a wav file to synthesizer sound”]
#!/usr/bin/env python
"""wav-to-synth.py: Transform a wav file to synthesizer sound

- Using the PySynth library from Martin C. Doege
- http://mdoege.github.io/PySynth/
- The sound is similar to a DX7 e-piano
- Using Audiolab to read the wav file (maybe there is an easier way of doing this...)
- https://pypi.python.org/pypi/scikits.audiolab/
"""
__author__ = "Aldo Vargas"
__copyright__ = "Copyright 2015 Aldux.net"

import numpy as np
import pysynth
from scikits.audiolab import wavread

# Assumed mapping from piano key number (1-88) to PySynth note names (a0 ... c8)
note_names = ['a', 'a#', 'b', 'c', 'c#', 'd', 'd#', 'e', 'f', 'f#', 'g', 'g#']
notes = dict((key, "%s%d" % (note_names[(key - 1) % 12], (key + 8) // 12))
             for key in range(1, 89))

# Re-maps a number (or a numpy array, element-wise) from one range to another.
def map(x, in_min, in_max, out_min, out_max):
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

# Round an array and convert it to integers.
def roundandint(a, decimals=1):
    b = np.around(a - 10**(-(decimals + 5)), decimals=decimals)
    return b.astype(int)

# Change the numbers to piano key notes for the 2 channels
def toNotes(data):
    c1 = ()
    c2 = ()
    for k in range(100):
        c1 = ((notes[data[k][0]], 2),) + c1
        c2 = ((notes[data[k][1]], 4),) + c2
    return (c1, c2)

if __name__ == "__main__":
    # Read the wav file
    filename = "test2.wav"
    data, sample_frequency, encoding = wavread(filename)
    # Transform it to notes (map the -1..1 samples onto piano keys 1..88)
    c1, c2 = toNotes(roundandint(map(data, -1, 1, 1, 88), 0))
    # Save channel 1 and 2 and then mix them
    pysynth.make_wav(c1, fn="c1.wav")
    pysynth.make_wav(c2, fn="c2.wav")
    pysynth.mix_files("c1.wav", "c2.wav", "final.wav")
[/python]
A journal publication of research I did last year was recently accepted and published in the International Journal of Unmanned Systems Engineering (ISSN: 2052-112X).
The title of my research is:
Swing-Free Manoeuvre Controller for Rotorcraft Unmanned Aerial Vehicle Slung-Load System Using Echo State Networks
Abstract:
There is a growing interest in developing Rotorcraft Unmanned Aerial Systems (RUAS) with advanced onboard autonomous capabilities. RUAS is a very versatile vehicle and its unique flying characteristics enable it to carry loads, hanging in wires underneath it. This suspended load alters the flight characteristics of the vehicle. In this paper, an anti-swing manoeuvre controller for a rotorcraft unmanned aerial system with an attached suspended load (slung-load) is proposed. The presented architecture is powered by Echo State Networks (ESN) that enables simple modeling of the controller and outperforms linear techniques in terms of robustness to unmodelled dynamics and disturbances. The external load behaves like a pendulum; this can change the natural frequencies and mode shapes of the low frequency modes of the RUAS. The technique chosen to solve the problem is to achieve both robust performance and computational efficiency. Reservoir Computing (RC) is an alternative to gradient descent methods for training Recurrent Neural Networks (RNN), which represent a very powerful generic tool, integrating both large dynamical memory and highly adaptable computational capabilities. ESN is a type of reservoir computing; the advantage lies in the ability to overcome the difficulties in RNN training, it is conceptually simple and computationally inexpensive. It has been demonstrated that a model and controller design using ESN may be developed. ESN performs well to control unknown nonlinear systems.
AlduxHexa, my hexacopter build, was created (at the beginning) with the sole purpose of carrying this 2.8 kilogram infrared camera.
Flying FLIR B620
My business partner Ifeanyi managed to get this B series camera so that we could do some tests and take some shots inside the West Quadrangle at the University of Glasgow.
In order to be able to fly carrying this massive payload, this hexacopter was built with 740 kV, 500 W BLDC motors, 13 in carbon fibre propellers, a carbon frame and lots of extra fancy stuff. The real deal.
First fitting on AlduxHexa
Fixing the camera in the field
Ifeanyi setting the temperature range
Business partners
The camera specifications are as follows:
Sensitivity: 40 mK @ +30 °C
IFOV: 0.65 mrad
FOV: 24° × 18° / 0.3 m
Temperature range: –40 °C to +120 °C
Accuracy: ±2 °C or ±2% of reading
The FLIR B620 is a high performance infrared inspection system specially developed for building applications, including automatic humidity and insulation defect alarms. With its state of the art technology, including a 640×480 detector, it produces sharp, detailed images. Its unique ergonomic design makes it convenient to work with during inspections. The camera is equipped with the standard 24° lens.
The results:
FLIR shot of people
FLIR University of Glasgow Tower
Main building
Research and partnership is still an ongoing process. Ask us if you have any questions.
When Tania and I travelled to Berlin this February for a conference called Transmediale, we had some time to do tourist stuff, so we went to the Jüdisches Museum.
Overall the museum is great, but what really caught my attention was a KUKA robotic arm writing the entire Torah…
The idea behind this installation was to make an industrial robot write the Torah at human speed. The robot is equipped with a pen nib and ink, and it wrote the entire five books of Moses on 80 metres of paper, using 304,805 Hebrew letters.
The name of the installation was cool… BIOS [torah].
and of course it refers to the basic input/output system on which all of a computer’s programs are built.
As an elementary component of computer technology, the BIOS is fundamental to the development of machines – just as Scripture is fundamental to the cultural history of human beings.
While I was waiting for some parts for my hexacopter to arrive (GPS post and similar stuff), I dusted off my 500 mm quadcopter, AlduxQuad.
I used all my batteries, so lots of flight time. For AlduxQuad I have:
4 x 2200 mAh 3S 30-40C
3300 mAh 3S 60-70C
4500 mAh 3S 20-40C (the least powerful I have)
I removed all the heavy stuff, making AlduxQuad super agile and fast. I also added some nice LED strips so I can tell the direction of the quadcopter while flying very high and always keep a reference.
So, in this video, I put the GoPro in my mouth, using my teeth 😛
You can see the quadcopter doing a dance by just yawing a lot while trying to keep its altitude. Easy and fun stuff!