My daily routine starts with checking the performance of the AI models that I am training. A model is a way of transforming one piece of information (for example, an image) into another (for example, asset data). I make decisions based on their performance, check the programme for the current week and prepare data for my next tasks.
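Conceptually, a model is just a learned function from one kind of data to another. A hypothetical minimal sketch of that idea (real models learn millions of parameters from examples rather than using a hand-written rule, and the feature and label names here are illustrative only):

```python
# Hypothetical sketch: a "model" as a function from image data to asset data.
# A real road-survey model would be a deep neural network, not this rule.

def model(image_pixels):
    """Map an image (here, a list of brightness values) to asset data."""
    brightness = sum(image_pixels) / len(image_pixels)
    return {"marking_present": brightness > 0.5}

print(model([0.9, 0.8, 0.7]))  # {'marking_present': True}
```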
Logical problem solving has always been exciting to me. My older brother introduced me to computers, and I started learning how to solve algorithmic problems from his books.
I set my sights on a university degree in computer science and was lucky enough to secure a government grant to achieve that ambition.
At university in Russia I learned the Pascal and C++ programming languages, but I gained a real passion for coding from a group of older students who were solving Olympiad programming problems.
We spent months applying knowledge from all parts of mathematics: discrete maths, algebra, geometry, optimisation and so on. I learned how to apply abstract concepts to solve real-life problems.
Yes, I grew up in a nice warm city called Hodaidah. It is the fourth-largest city in Yemen and its principal port on the Red Sea.
I did my bachelor’s and master’s degrees in Russia and my PhD in Spain. Having lived in different countries for so many years (seven in Russia, five in Spain), I enjoyed learning languages as well as maths and AI. I believe there is a creative relationship between all these disciplines.
For my master’s I had a strong supervisor who pushed me towards AI, and after that I met a good friend who suggested applying AI to computer vision. Those were the two best pieces of advice I have received in my life.
We recently supported the Government in a nationwide audit of ‘white-lines’.
We created what we call ‘training data’ to teach our AI model how to carry out this specific task of recognising road markings, lines and symbols.
Our (human) inspectors carry out a process known as annotation, where they look at images and mark them up. They did this on 15,000 different images for the training data.
These images and annotations are fed into the AI model, and through ‘inference’ the model then produces condition data analysis.
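The annotate–train–infer workflow described above can be sketched as a toy program. Everything here is hypothetical: a real road-marking system trains a deep neural network on image pixels, whereas this sketch stands in a simple nearest-neighbour "model" over hand-made feature vectors purely to show the flow of data from inspectors' labels to predictions:

```python
# Toy illustration of the annotate -> train -> infer workflow.
# Feature vectors, labels and the 1-NN "model" are all hypothetical.

def train(annotated_examples):
    """'Training data': pairs of (feature vector, inspector's label)."""
    return list(annotated_examples)  # a 1-NN model simply memorises examples

def infer(model, features):
    """'Inference': predict the label of an unseen image's features."""
    def distance(example):
        ex_features, _ = example
        return sum((a - b) ** 2 for a, b in zip(ex_features, features))
    _, label = min(model, key=distance)  # label of the closest training example
    return label

# Inspectors' annotations: crude image features -> road-marking label.
training_data = [
    ((0.9, 0.1), "white_line"),
    ((0.2, 0.8), "no_marking"),
]
model = train(training_data)
print(infer(model, (0.85, 0.15)))  # close to the 'white_line' example
```

The point of the sketch is the division of labour: humans supply the labelled examples, training captures them in a model, and inference applies that model to new, unlabelled images.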
We found that the ‘ground truth’ from inspectors was often less detailed than the output produced by our deep-learning algorithm.
Often people are concerned about the loss of jobs, but we must remember that AI is not infallible and has to be taught by human experts in the first place – this is why at Gaist we always have humans checking the output of the AI.
Converting an intuition that you have in mind into something that works in the real world is a super exciting process. It pushes me to try more solutions. Fortunately, at Gaist, I have the scope to do this. As a researcher, I have access to powerful deep-learning hardware, where I can solve problems and experiment at the same time.
When you know that what you are doing will have a direct impact on everyday life, it is very exciting.
Even when I am not ‘working’ I cannot stop watching the road surface and thinking about the best way to improve the performance of my models. I am addicted!
To a very great extent, yes. Given the amount of data that needs to be processed, only machines can do it in a reasonable time and at a reasonable cost.
I feel privileged to be at Gaist, at the forefront of the work to achieve this in our sector.
I don’t support the idea of an AI that takes over the world – that is fiction. But some jobs genuinely are at risk because of AI: telemarketers, book-keeping clerks and so on.
In the highways sector, because of the extensive research into self-driving cars, drivers’ jobs will be at risk within a few years.
Yes, it has. It helps us discover complex patterns in data of different natures – structured data such as medical records, and unstructured data such as video, photos or language.
It also has the power to extend teams’ knowledge and capabilities. A good example of this is emerging through virtual assistants and visual augmented reality.
Technologies like deep learning applied to natural language processing and image processing perform tasks that previously could be done only by humans – for example, reading, seeing, answering questions and understanding the human environment.