Our paper A^2-Nets: Double Attention Networks (PDF coming soon) was accepted as a poster at NIPS 2018. Joint work with awesome collaborators from the National University of Singapore.
Our work on visual similarity search over the whole Flickr corpus just launched! Try it yourself by clicking on the magnifying glass icon at the top right corner of any photo page! Story covered in The Verge, Engadget, PetaPixel, Digital Trends and VentureBeat.
After two amazing years at Yahoo Research, joined the Computer Vision Group at Facebook Research in Menlo Park.
Our paper "Tag Prediction in Flickr: A view from the darkroom" on large scale image classification with noisy training data received the best paper award at the 1st Workshop on Large Scale Computer Vision Systems at NIPS 2016.
Our paper "Multimodal Classification of Moderated Online Pro-Eating Disorder Content" was accepted at the ACM CHI 2017 conference (25% acceptance rate).
Our demo paper "Visual MemoryQA: your personal photo and video search agent" on personal media search using natural language questions was accepted at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI 2017). You can also watch a video of the demo.
Will be a guest lecturer at Fei-Fei's and Juan Carlos' CS 131 Computer Vision: Foundations and Applications course at Stanford during the 2016-2017 Fall Semester.
Our paper "Delving Deep into Personal Photo and Video Search" was accepted for publication at WSDM 2017 (16% acceptance rate).
Our paper "Visual Congruent Ads for Image Search" will be presented by Ricardo Baeza-Yates during his invited lecture at the International Conference on Pattern Recognition (ICPR) in December 2016.
Grew up and lived in Greece until 2015, with brief breaks in Sweden, Spain and the United States. Lived in San Francisco from January 2015 until November 2017; currently in Oakland.
The large-scale visual similarity search work of my PhD came to a nice closure while I was a researcher at Yahoo Research, as it was applied at truly web scale, powering the visual search feature on Flickr. At the same time, my interests expanded toward modeling of vision and language when we collaborated with Stanford on the Visual Genome project.
Currently conducting research and development on video understanding, temporal segmentation, learning image and video representations, multi-modal classification, and large-scale vision and language.
Full list at my Google Scholar profile.