Program



  • Guidelines for oral sessions

  • Duration: 20 minutes (15 minutes of presentation and 5 minutes for questions)
    Allowed file formats: pdf, ppt, pptx
    If you want to use your own laptop, please make sure it has a VGA or HDMI connector.

  • Monday, September 2

  • @ University of Salerno (Tutorials only)
  • Registration for tutorials 08:00 - 08:50
  • Tutorial Contemporary Deep Learning Models and their Applications
    09:00 - 13:00
    • In this tutorial we would like to start with an introduction to the major deep learning architectures, such as FC, CNN and RNN.

      Following this, the primary content of the tutorial will cover various contemporary applications using variants of Siamese networks, autoencoders and generative adversarial networks, all built upon these basic architectures.

      We shall cover a selection of works from other research groups as well as from our own. A specially designed hands-on session worksheet (given to each participant) suitably augments the tutorial's learning outcomes.

      Web Site
  • Lunch break 13:00 - 14:00
  • Tutorial Active Object Recognition: a survey of a (re-)emerging domain
    14:00 - 17:00
    • Back in 1988, Aloimonos et al. introduced the first general framework for active vision, proving that an active observer can solve basic vision problems more efficiently than a passive one. They defined an active observer as an agent that can engage in some kind of activity whose purpose is to improve the quality of the perceptual results. Active vision has been demonstrated to be particularly effective in coping with problems such as occlusions, a limited field of view and limited camera resolution. Active control of the camera viewpoint also helps focus computational resources on the relevant elements of the scene. Researchers have also suggested that visual attention and the selective aspect of active camera control can help in tasks such as learning more robust models of objects and environments with few labeled samples or autonomously. Active vision was a very popular topic in the computer vision community during the late '80s and '90s, with considerable effort spent on it and numerous publications in related fields such as 3D reconstruction, autonomous robots and video surveillance.

      During the 2000s, for several reasons including the growing amount of data provided by the internet and the advent of depth sensors, active vision approaches went through a phase of low popularity. Nevertheless, since 2010, mostly because of re-emerging interest in robotic vision, the higher availability of low-cost robots and the increasing computational power provided by GPUs, active vision has been experiencing a second wave of popularity, in particular when used in conjunction with reinforcement learning techniques and applied to tasks such as environment exploration and object categorization.

      In this tutorial the speaker will first present a historical overview of the field, from foundational works published back in the '90s to the most recent approaches relying on deep neural networks and reinforcement learning. Then, the main open problems and challenges will be analyzed in more detail, trying to isolate the most promising research directions and connections with other topics of interest to the CAIP community.

      Web Site
  • @ Grand Hotel Salerno
  • Registration desk 15:30 - 20:00
  • Social Event Welcome Cocktail
    19:30 - 21:00



  • Thursday, September 5

  • @ Grand Hotel Salerno
  • Registration 08:00 - 08:50
  • Oral Session 6 Machine learning for image and pattern analysis 2
    09:00 - 11:00
    • Learning Discriminatory Deep Clustering Models
      Ali Alqahtani, Xianghua Xie and Mark W. Jones
    • Multi-stream Convolutional Autoencoder and 2D Generative Adversarial Network for Glioma Classification.
      Muhaddisa Barat Ali, Irene Yu-Hua Gu and Asgeir Store Jakola
    • Object Contour and Edge Detection with RefineContourNet
      Andre Kelm, Vijesh Rao and Udo Zölzer
    • LYTNet: A Convolutional Neural Network for Real-Time Pedestrian Traffic Lights and Zebra Crossing Recognition for the Visually Impaired
      Samuel Yu, Heon Lee and John Kim
    • A Sequential CNN Approach for Foreign Object Detection in Hyperspectral Images
      Mahmoud Al-Sarayreh, Marlon M. Reis, Wei Qi Yan and Reinhard Klette
    • Transfer Learning For Improving Lifelog Image Retrieval
      Fatma Ben Abdallah, Ghada Feki, Anis Ben Ammar and Chokri Ben Amar
    Coffee break / Poster Session 3 11:00 - 11:40
    Oral Session 7 Data sets and benchmarks
    11:40 - 13:00
    • Which is Which? Evaluation of local descriptors for image matching in real-world scenarios
      Fabio Bellavia and Carlo Colombo
    • How well current saliency prediction models perform on UAVs videos?
      Anne-Flore Perrin, Olivier Le Meur and Lu Zhang
    • Place recognition in gardens by learning visual representations: data set and benchmark analysis
      María Leyva Vallina, Nicola Strisciuglio and Nicolai Petkov
    • 500,000 images closer to eyelid and pupil segmentation
      Wolfgang Fuhl, Wolfgang Rosenstiel and Enkelejda Kasneci
    Lunch break 13:00 - 14:10
  • Casaro dish composed of mozzarella cheese, ricotta cheese, smoked mozzarella cheese and raw ham

    Corteccia pasta with asparagus, pecorino cheese and Italian bacon

    Babà cake with cream and black cherry

    Chocolate cake

    Water, wine and coffee

  • Oral Session 8 Structural and computational pattern recognition
    14:10 - 15:10
    • Blur invariant template matching using projection onto convex sets
      Matej Lebl, Filip Sroubek, Jaroslav Kautsky and Jan Flusser
    • Partitioning 2D Images into Prototypes of Slope Region
      Darshan Batavia, Jiri Hladuvka and Walter Kropatsch
    • Homological Region Adjacency Tree for a 3D binary digital image via HSF model
      Pedro Real, Helena Molina-Abril, Fernando Díaz-del-Río and Sergio Blanco-Trejo
    Coffee break / Poster Session 3 15:10 - 16:10
  • Closing Remarks 16:10 - 17:10
  • Social Event Guided tour to Salerno historical city center
    17:10 - 19:00

  • Friday, September 6

  • @ University of Salerno
  • Registration for contest and workshops 08:00 - 08:50
  • Workshop Visual Computing and Machine Learning for Biomedical Applications (ViMaBi 2019)
    09:00 - 17:00
    • In recent years, there have been significant new developments in the use of computational image techniques in medical and biomedical applications. These have been mainly driven by the proliferation of data-driven methods, spearheaded by deep learning methodologies, as well as by the development of health monitoring and self-assessment systems.

      With the gradual acceptance of these methods in clinical practice, we are on the cusp of possibly very significant changes to the way medical care is delivered, with “intelligent machines” being more and more involved in the process. Their role is changing from merely being tools operated by medical practitioners to being autonomous agents with which practitioners collaborate. However, because of this rapid progress, the current state of the art is not always adequately communicated to medical and health professionals, reducing its possible impact. Furthermore, a significant part of this research is being done not at universities but in the commercial sector.

      Therefore, the workshop aims to be a meeting place for academics, practitioners and representatives of the commercial sector. The workshop organization reflects this aim: the Programme Committee includes researchers, medical doctors and members from industry.

      ViMaBi is intended for academics, researchers working in the commercial sector, and clinical practitioners interested in biomedical image computing and machine learning. The format of the workshop is devised to encourage interaction between experts and colleagues (including research students) who may have just started working in this fast-paced field.

      Web Site
    Workshop Deep-learning based computer vision for UAV
    09:00 - 13:00
    • Unmanned aerial vehicles (UAVs) or unmanned aerial systems (UASs) offer an exciting and affordable means of capturing aerial imagery for various application domains.

      Algorithms based on deep learning will undoubtedly play a crucial role in empowering application domains and services in fields such as agriculture, remote sensing, urban and forest terrain modelling, construction, public safety, and crowd management.

      The focus of this workshop is on topics related to deep learning, image processing and pattern recognition techniques for UAV applications. The main scope of this workshop is to identify and promote innovative deep-learning based methods capable of performing computer vision analysis unique to UAV imagery.

      Web Site
    Contest Which is Which?
    09:00 - 13:00
    • This contest is devoted to image matching using local image descriptors. In order to test descriptor effectiveness in real-world scenarios, we have built an image-pair dataset including non-trivial transformations induced by significant viewpoint changes for both planar and non-planar scenes. The evaluation metrics will be based on the exact overlap error for the planar case, and on a patch-wise approximation to it for the non-planar case (a rough sketch of an overlap-error computation is given at the end of this section).

      Participants will be able to download the local image patches from which to extract descriptors. The matched descriptor pairs, formatted according to the submission instructions, will be sent for evaluation to the official e-mail of the contest. Results will be published online within two weeks after the deadline. The most interesting descriptors will be selected for possible publication in the forthcoming Special Issue "Local Image Descriptors in Computer Vision" of the international journal IET Computer Vision.

      Web Site

      Results
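
      A rough, unofficial sketch of the overlap-error idea mentioned above, assuming the error is defined as 1 minus the intersection-over-union of corresponding regions; the toy box masks and all function names are illustrative assumptions, not the contest's evaluation code (see the contest Web Site for the official definition):

      ```python
      # Toy illustration of an overlap-error computation, assuming the metric is
      # 1 - IoU of corresponding region masks (an assumption; the official contest
      # evaluation may differ).
      import numpy as np

      def overlap_error(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
          """Return 1 - IoU of two boolean region masks of the same shape."""
          inter = np.logical_and(mask_a, mask_b).sum()
          union = np.logical_or(mask_a, mask_b).sum()
          if union == 0:
              return 1.0  # no region at all: treat as a complete mismatch
          return 1.0 - inter / union

      def box_mask(shape, x0, y0, x1, y1):
          """Rasterize an axis-aligned box into a boolean mask (toy stand-in
          for a reprojected patch footprint in the planar case)."""
          m = np.zeros(shape, dtype=bool)
          m[y0:y1, x0:x1] = True
          return m

      if __name__ == "__main__":
          a = box_mask((100, 100), 20, 20, 60, 60)  # reference patch footprint
          b = box_mask((100, 100), 30, 25, 70, 65)  # matched patch, slightly off
          print(f"overlap error: {overlap_error(a, b):.3f}")
      ```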