I finally managed to attend the Shadow ML - for Machine Learning - meetup yesterday evening in Berlin, hosted by Amazon's Computer Vision division. Two talks were scheduled: one about images and a second about words. Let me explain.
Before pizza time
Here we learned about soft shadow removal. I liked this talk because it combined computer vision (CV) and machine learning (ML), and it's a problem I'm aware of, as I regularly face it when post-processing my spherical panorama pictures taken under the sun - you can see my shadow in the picture.
What I remember from the shadow removal problem description is that a big part of the solution is being able to isolate the shadow areas in the picture. What is a shadow area, you may wonder? It's a part - or parts - of an image where the brightness has been drastically reduced, such that it appears almost grey, but where some colour information is still available. Put that way, the problem is almost solved: we need to find the colour information in the shadow area and adjust its brightness to match the non-shadowed neighbouring area. You may have to operate in a colour space other than RGB in order to keep the chromatic information undamaged and change only the pixel brightness/luminance. Good image segmentation is an unavoidable step.
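The "change only the luminance" idea can be sketched with Python's standard colorsys module. This is a minimal toy, not the speaker's method: it assumes the image is a flat list of 8-bit RGB tuples, the shadow mask is a parallel list of booleans, and the brightening factor is given by hand rather than estimated.

```python
import colorsys

def brighten_shadow(pixels, mask, factor):
    """Scale the lightness of masked (shadowed) pixels in HLS space,
    leaving hue and saturation - the chromatic information - untouched."""
    out = []
    for rgb, in_shadow in zip(pixels, mask):
        if not in_shadow:
            out.append(rgb)
            continue
        r, g, b = (c / 255.0 for c in rgb)
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        l = min(1.0, l * factor)  # raise luminance only, clamped at white
        r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
        out.append(tuple(round(c * 255) for c in (r2, g2, b2)))
    return out

# a dark red shadow pixel brightened 2x keeps its reddish hue
print(brighten_shadow([(60, 20, 20)], [True], 2.0))  # → [(120, 40, 40)]
```

Working in HLS (or Lab, in a real pipeline) is exactly what makes this possible: in RGB you would have to scale all three channels and risk shifting the hue.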
For hard shadows the segmentation is an "easy" task, as the transition between shadow and non-shadow areas is fast/abrupt - in other words, not soft. The problem with soft transitions is that they require a lot of human input to mask the image - in the sense of creating a mask that isolates the shadow areas from the rest - and we want to automate this task.
A solution proposed yesterday was to use machine learning to make the system learn the difference between images with and without shadows. The speaker talked about the problem of getting data - a recurrent part of modelling any machine learning problem, and of any other scientific problem - and how he created his data-set: computer-generated images rendered in Maya, from which he could get two sets for the same scene, one with shadows and one without.
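The speaker rendered his pairs in Maya; as a toy stand-in, the same pairing idea can be sketched on a grayscale grid, where a "shadow" is just a darkened rectangle. Everything here (region coordinates, attenuation value) is an invented example, not his setup:

```python
import random

def add_synthetic_shadow(image, top, left, height, width, attenuation=0.4):
    """Darken a rectangular region of a clean grayscale image and return a
    (shadowed, clean) training pair. `attenuation` is the fraction of
    brightness kept inside the shadow."""
    shadowed = [row[:] for row in image]  # copy so the clean image survives
    for y in range(top, top + height):
        for x in range(left, left + width):
            shadowed[y][x] = round(shadowed[y][x] * attenuation)
    return shadowed, image

random.seed(0)
clean = [[random.randint(100, 200) for _ in range(8)] for _ in range(8)]
pair = add_synthetic_shadow(clean, 2, 2, 4, 4)
```

The point is the same as with the rendered scenes: since both halves of the pair come from the same underlying image, the model can be trained directly on the shadow/no-shadow difference.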
After that I got a bit lost on what the author does once he has found where the shadow areas are. But assuming the areas have been well discriminated, you still need to adjust the brightness level. From there, at least two solutions: if the area is homogeneous, a simple scaling factor/function should do something; treating the background - or the area - as a texture can be helpful too, especially if your plan is to use in-painting techniques. But the chosen solution is of course linked to what you want to do: preserving the information in the image - then I would say no in-painting - or tricking the eye/human brain so that the image appears nice without shadows - then go for in-painting.
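For the homogeneous case, the scaling factor can simply be estimated from the data itself: the ratio between the mean intensity of the lit pixels and the mean intensity of the shadowed ones. A minimal sketch on grayscale intensities, assuming the mask is already known and the surface really is uniform:

```python
def shadow_scale_factor(pixels, mask):
    """Ratio of mean lit intensity to mean shadow intensity.
    Only meaningful if the underlying surface is homogeneous."""
    lit = [p for p, m in zip(pixels, mask) if not m]
    shadow = [p for p, m in zip(pixels, mask) if m]
    return (sum(lit) / len(lit)) / (sum(shadow) / len(shadow))

def remove_shadow(pixels, mask):
    """Rescale every shadowed pixel by the estimated factor."""
    k = shadow_scale_factor(pixels, mask)
    return [min(255, round(p * k)) if m else p for p, m in zip(pixels, mask)]

# a flat grey surface, half of it in shadow at 40% brightness
print(remove_shadow([150, 150, 60, 60], [False, False, True, True]))
# → [150, 150, 150, 150]
```

On textured areas this single global factor fails, which is exactly where the texture/in-painting alternatives mentioned above come in.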
After pizza time
A completely different topic to follow, but no less interesting. It was about text and word analysis. For an introduction you can check WordNet to get a glimpse of what that field is about. But back to the second speaker: his problem was to see whether we can predict affiliation to a political party based on text analysis.
As the speaker mentioned, this is/was a work in progress, where the first task was to build a usable data-set for training the classifier. The text of each party's manifesto was used for that purpose.
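I don't know which model the speaker actually used, but the manifesto-as-training-data idea can be sketched with a tiny multinomial Naive Bayes classifier in plain Python. The party names and manifesto snippets below are entirely made up:

```python
import math
from collections import Counter

def train(manifestos):
    """manifestos: dict mapping party name -> manifesto text.
    The 'model' is just a per-party bag-of-words count."""
    return {party: Counter(text.lower().split())
            for party, text in manifestos.items()}

def classify(model, sentence, alpha=1.0):
    """Multinomial Naive Bayes with add-alpha smoothing and a
    uniform prior over parties; returns the best-scoring party."""
    vocab = set().union(*model.values())
    scores = {}
    for party, counts in model.items():
        total = sum(counts.values())
        score = 0.0
        for w in sentence.lower().split():
            # Counter returns 0 for unseen words, so smoothing handles them
            score += math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
        scores[party] = score
    return max(scores, key=scores.get)

model = train({
    "party_a": "lower taxes free markets enterprise growth",
    "party_b": "public services social welfare workers rights",
})
print(classify(model, "we will cut taxes for enterprise"))  # → party_a
```

Real manifestos would of course need proper tokenisation, stop-word handling and far more text, but the pipeline - fit word statistics per party, then score unseen sentences - is the same.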
Once you have your classifier, you want to evaluate it. All the interventions and talks given by government members and members of parliament are perfect data sources for that, and articles from different newspapers could be fed to the system as well.
This work also goes in the direction of sentiment analysis, and a temporal parameter is something you want to have in such a problem. Depending on who is running the country and who has the majority in parliament, the roles and the words played/used by the people's representatives evolve. It might be obvious, but this kind of tool can tell us how we perceive the words and talks given by our politicians, and how much they or we interpret/dream/hallucinate about different situations.
Building such a system wasn't too complicated - if I got it right from the speaker(s) - and the main challenge was/is getting clean data. As in all machine learning - in every kind of basic or applied research, actually - you need clean data.