So the blog has been pretty quiet for the past couple of years. I've had a job doing software development for Macro4 in Manchester, though I have since left and started a PhD working on Image Restoration, more specifically Image Deblurring.
The project is quite exciting because of all the potential applications, from medical scans (which the research will be aimed at) to astronomy (a subject I find fascinating). The aim of the research is to produce a robust algorithm for general deblurring of both images and video, even when the blurring is not uniform across the image.
I'll hopefully keep this blog updated as I go through my PhD, but for now I'm just reading up on the areas relevant to my research. Part of the project will involve working in Singapore for 18 months, which is slightly terrifying but exciting nonetheless!
This week Rob and I have been working on getting the model to load pixels from the camera inputs. My model was loaded into Rob's camera code and we received initial results from two sources.
These results are shown below (click to view larger for a clearer image):
The next steps are to improve the pixel data retrieval to use Gaussian selection and also to increase the size of the pixels in the output image to make them more visible.
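As a rough sketch of what Gaussian selection could look like (the function name, parameters and defaults below are my own placeholders, not our actual code), each pinpoint's value would become a Gaussian-weighted average of the camera pixels around it rather than a single pixel read:

```python
import numpy as np

def gaussian_sample(image, cx, cy, sigma=2.0, radius=5):
    """Gaussian-weighted average of the pixels around (cx, cy).

    image is an (H, W) or (H, W, C) array; cx, cy are integer pixel
    coordinates of the pinpoint. Pixels outside the image are ignored.
    sigma and radius are illustrative defaults.
    """
    h, w = image.shape[:2]
    x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
    y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    # 2D Gaussian weights centred on the pinpoint
    weights = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    patch = image[y0:y1, x0:x1]
    if patch.ndim == 3:
        weights = weights[..., None]  # broadcast over colour channels
    return (patch * weights).sum(axis=(0, 1)) / weights.sum(axis=(0, 1))
```

The weighting means nearby pixels dominate while more distant ones still contribute a little, which should look closer to how an ommatidium pools light than a single-pixel lookup does.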
Hopefully I'll have more next week!
I made the results slightly easier to see by also colouring the pixels around the pinpoints. That looked like this:
I realised yesterday that there may be issues with the first model I generated. While I mentioned in my previous post that it was too regular, I also realised that, by rendering in the X direction first, the model was more uniform in that direction, which differs from what measurements of the bee eye seem to suggest.
When swapping these around I got the following graph:
This started to look closer to what I expected from reading other reports. However it still had the same issues with regularity.
To deal with this issue, I decided the model might mimic the hexagonal pattern more closely if I varied points along the initial line by half the interommatidial angle for that location, on either side of the initial point.
This resulted in the following graph:
This looks very much like many of the previous models I have made, so it is likely the first model we will take forward. We must decide how to implement it in C++, however. As I see it there are two options. Firstly, we could generate the model every time the code starts; this would take a short amount of time at the beginning of the program, but would make the model easier to alter if need be.
The other option would be to generate the model using Python, output it as a file, then have the C++ program read that file in. This would likely make it easier to generate a completely new model if necessary, speed up the program's start-up time, and allow several models to be stored so it is easier to switch between them.
This will have to be discussed with Rob, Chelsea and Alex to see what they would prefer, though a generation program in Python seems the best option to me.
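If we do go down the Python route, the generator could be as simple as a script that writes the angle pairs out as CSV for the C++ side to load at start-up. The sketch below (the function name, default angles and step sizes are placeholders, not measured values) includes the half-step offset on alternate rows described above:

```python
import numpy as np

def generate_model(x_min=-90.0, x_max=90.0, y_min=-60.0, y_max=60.0,
                   dx=2.0, dy=2.0, path="ommatidia.csv"):
    """Write (x_angle, y_angle) pairs to a CSV file, one row per ommatidium.

    Alternate rows are shifted by half the horizontal step, giving the
    hexagon-like packing rather than a perfectly square grid. All the
    default angles here are illustrative, not real bee-eye values.
    """
    points = []
    y, row_idx = y_min, 0
    while y <= y_max:
        # offset every other row by half the interommatidial angle
        offset = dx / 2 if row_idx % 2 else 0.0
        x = x_min + offset
        while x <= x_max:
            points.append((x, y))
            x += dx
        y += dy
        row_idx += 1
    np.savetxt(path, np.array(points), delimiter=",", header="x_angle,y_angle")
    return len(points)
```

The C++ program would then only need a trivial CSV reader, and swapping models is just a matter of pointing it at a different file.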
Tonight (after a couple of bugs that ate all my RAM and forced me to reboot) I managed to generate my first full ommatidial model for a single eye. While it still needs some work, it is a good first step.
The model is shown below:
The main problem with this model is that it appears too regular. The bee's ommatidia form somewhat of a mesh pattern, whereas here they lie in straight lines. This is because, as a first model, I decided to try generating the points directly from starting x and y angles.
I have a couple of ideas on how this can be made less regular, and thus more realistic, and will try these soon. I hope to have an update on this within the next few days.
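For reference, the straight lines come from the fact that every row shares exactly the same set of x angles. A simplified sketch of that first-attempt generation (the angle ranges are placeholders, not real measurements):

```python
def naive_model(x_angles, y_angles):
    """First-attempt generation: one point per (x, y) angle pair.

    Because every row reuses the same x angles, the points fall on a
    perfectly regular grid -- hence the straight lines in the plot,
    rather than the hexagonal mesh a real bee eye shows.
    """
    return [(x, y) for y in y_angles for x in x_angles]

# e.g. naive_model(range(-90, 91, 2), range(-60, 61, 2))
```

Breaking this regularity essentially means perturbing or offsetting the x positions from row to row, which is what I plan to try next.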
Over the past couple of days I have been trying to think of a way to model the vertical interommatidial angle with an off-centre minimum. After investigating this for a while, I decided the first method I would try would be to take two curves and connect them at the minimum. As such I calculated the formulae for either side and then joined the curves together.
As such this model is:
y = 0.00069x^2 - 0.08333x + 4.6 for x ≤ 60
y = 0.00037x^2 - 0.04462x + 3.438 for x > 60
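As a quick sanity check, the piecewise model is straightforward to evaluate in Python (the function name is mine; note the two pieces only meet approximately at x = 60, about 2.084 on the left against 2.093 on the right, so the join may want tidying later):

```python
def vertical_angle(x):
    """Vertical interommatidial angle (degrees) at position x (degrees),
    using the two fitted quadratics joined at x = 60."""
    if x <= 60:
        return 0.00069 * x**2 - 0.08333 * x + 4.6
    return 0.00037 * x**2 - 0.04462 * x + 3.438
```

Evaluating it confirms the behaviour I was after: the angle starts at 4.6 at x = 0, falls to a minimum of roughly 2.08 near x = 60, then rises again on the far side.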
I graphed the model and it can be seen below:
This seems fine for an initial model, though some of the numbers may change. The next step should be forming an algorithm to combine the vertical and horizontal interommatidial angles into the same model, and then forming a Gaussian to combine the pixel data over the area that is needed. These two steps may not be done in that order, as the latter is relatively trivial while the former may take some thinking about.