Thursday, 23rd November 2023
Computer Vision for Games and Games for Computer Vision (CVG)
Thursday, 23rd November 2023
Website: https://cvg2023.institutedigitalgames.com/
Organisers: Chintan Trivedi (University of Malta), Matthew Guzdial (University of Alberta), Konstantinos Makantasis (University of Malta), Tim Pearce (Microsoft Research), Roberta Raileanu (Meta AI), Marguerite deCourcelle (Blockade Games), Nicu Sebe (University of Trento), Julian Togelius (New York University; modl.ai), Georgios N. Yannakakis (University of Malta; modl.ai)
Contact: [email protected]
Venue: P&J (Main Conference Centre)
This workshop comprises two main tracks. The first track focuses on novel techniques within computer vision research that can advance the field of digital games. The second track focuses on leveraging game technologies to advance state-of-the-art techniques in computer vision. The list of topics below is not exhaustive of the research directions that will be represented in this workshop.
1) Computer Vision for Games
- CV for game-playing, game testing and player modeling.
- Data-driven CV to improve game graphics, animations, level design, etc.
- HCI through visual interfaces (gestures, posture, gaze, etc.).
- Extended reality games.
- Synthetic data and media generation based on users' emotions, behavior, etc.
- Improving real-time applicability of vision models integrated within games and game engines.
- Computer vision for procedural content generation.
2) Games for Computer Vision
- Rich game-based labeled datasets for tasks such as object detection, segmentation, or depth and flow estimation.
- Generalization and robustness in vision models leveraging a plethora of existing commercial games.
- Game worlds that aid data augmentation techniques.
- Unsupervised pre-training of image/video representations and world transition models from gameplay data.
- Forward modeling in and for games.
- Ethics of game-based data collection and inference.
The Third Workshop on Computational Aspects of Deep Learning (CADL 2023)
Thursday, 23rd November 2023
Website: https://ailb-web.ing.unimore.it/cadl2023/
Organisers: Giuseppe Fiameni (NVIDIA), Iuri Frosio (NVIDIA), Claudio Baecchi (Small Pixels; University of Florence), Frederic Pariente (NVIDIA), Lorenzo Baraldi (University of Modena and Reggio Emilia)
Contact: [email protected]
Venue: P&J (Main Conference Centre)
Over the past decade, deep learning has revolutionized computer science by enabling remarkable advances in prediction models across various research fields and applications, such as computer vision, natural language processing, pattern recognition, and generative models. This transformative shift has made AI a computational science in which massive models with millions of parameters are trained on large-scale computing infrastructures, accelerating scientific discovery and leading to more accurate results. However, harnessing such computational power requires careful optimization and design of neural architectures as well as of their training procedures, which play an increasingly crucial role in shaping research pace, model effectiveness, applicability at scale, and energy consumption.
The advent of transformer models and large language models, such as BERT or GPT, has further transformed the field of natural language processing, enabling unprecedented levels of language understanding and generation. These models have significantly advanced applications like chatbots, virtual assistants, and machine translation. At the same time, the development, training, and inference costs of such models, both monetary and environmental, are far from negligible. Furthermore, transformer-based models have introduced a new paradigm for deep learning, leading to a range of state-of-the-art models in various domains.
The workshop on "Computational Aspects of Deep Learning" aims to bring together experts in deep learning to exchange ideas, discuss current challenges, and identify solutions that can advance the field in a computationally efficient and energy-saving way. The workshop will focus on the development of deep neural network architectures in high-density data fields, such as video processing, action recognition, and high-resolution image understanding. It will also cover research fields that involve sequential predictions, like reinforcement learning, embodied AI, and natural language understanding and generation.
The workshop will encourage submissions that address computationally intensive scenarios from multiple perspectives, such as architectural design, data preparation and processing, operator design, training strategies, and distributed and large-scale training. It will provide attendees with theoretical and practical tools for deploying computationally effective AI solutions. Submissions will therefore also be evaluated on their ability to reduce energy consumption through novel and effective model architectures and training procedures. The workshop will also promote constructive criticism of current data-intensive trends in machine learning and encourage new perspectives and solutions. In summary, the advent of transformer models and large language models has further propelled the field of deep learning, and this workshop aims to facilitate its continued advancement in a computationally efficient and energy-saving way.
Topics of interest for the workshop include, but are not limited to:
- Developing optimization strategies for reducing energy consumption in deep learning
- Design of novel architectures and operators that are suitable for data-intensive scenarios
- Developing distributed, efficient reinforcement learning algorithms
- Implementing large-scale pre-training techniques for real-world applications
- Developing distributed training approaches and architectures
- Utilizing HPC and massively parallel architectures for deep learning
- Exploring frameworks and optimization algorithms for training deep networks
- Utilizing model pruning, gradient compression techniques, and quantization to reduce computational complexity (see the sketch after this list)
- Developing methods to reduce the memory/data transmission footprint
- Developing methods and differentiable metrics to estimate computational costs, energy consumption and power consumption of models
- Designing, implementing and using hardware accelerators for deep learning
- Developing efficient and cost-saving models and methods that promote diversity and inclusivity in the field of deep learning
- Speeding up training and inference of GPT and other generative models
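To make the computational focus above concrete, here is a minimal, illustrative PyTorch sketch of one of the listed techniques: post-training dynamic quantization of a small model's linear layers to int8. The architecture and layer sizes are hypothetical placeholders rather than a recommended recipe; the same call pattern applies to larger networks.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a larger network whose inference cost we want to cut.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the selected layer types are
# stored in int8 and dequantized on the fly, shrinking the memory footprint
# and typically speeding up CPU inference without retraining.
quantized_model = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized_model(x).shape)  # torch.Size([1, 10])
```

The quantized model can be measured against the float baseline for both accuracy and latency, which is the kind of cost/quality trade-off the workshop invites submissions to report.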
The workshop is organized in close collaboration with the NVIDIA AI Technology Center. Diversity and inclusion are critical to NVIDIA's mission and are key priorities for directors and management as well as for every employee. To address this point, we will ask proposers to add a paragraph to their proposals explaining how their research can improve AI adoption and scientific discovery in countries where computational resources are scarce or limited, reduce energy consumption, favor inclusion, and leverage portable devices in remote areas where energy availability is poor.
Friday, 24th November 2023
Project Aria for All-Day Egocentric Research
Friday, 24th November 2023
Website: https://www.projectaria.com/events/bmvc2023/
Organisers: Edward Miller (Meta), Richard Newcombe (Meta), Vasileios Balntas (Meta), Xiaqing Pan (Meta), Zhaoyang Lv (Meta), Pierre Moulon (Meta)
Contact: [email protected]
Venue: Robert Gordon University, Sir Ian Wood Building, Garthdee Campus
In this workshop, we will cover a broad range of research topics related to the challenges of all-day wearable egocentric devices (such as AR glasses), including visual and non-visual localization and mapping, static and dynamic object detection and spatialization, human pose estimation, and building geometry estimation. Exploration of these research areas will be facilitated by the use of Project Aria, a research device in an all-day wearable glasses form factor. Specifically, we will share research approaches to three challenges related to object detection and building geometry estimation that were launched at CVPR earlier in June. At BMVC 2023, we will announce the winners of these challenges and invite participants to present their methods to the research community. We will also announce new challenges related to object detection at this year's BMVC workshop.
The 1st Workshop in Video Understanding and its Applications (VUA)
Friday, 24th November 2023
Website: https://vua-bmvc.github.io/
Organisers: Faegheh Sardari (University of Surrey), Armin Mustafa (University of Surrey), Asmar Nadeem (University of Surrey), Robert Dawes (BBC), Adrian Hilton (University of Surrey)
Contact: [email protected]
Venue: Robert Gordon University, Sir Ian Wood Building, Garthdee Campus
Video understanding is a popular field in computer vision and AI in which we aim to learn about and assess the world around us from video footage. It can benefit many real-world applications, such as training and education, patient monitoring, sports assessment, and security systems. By automating these applications through video analysis, we can not only save money and time for their users but also reduce human error. Despite recent advances in other areas of computer vision, e.g. image analysis, video understanding remains an unsolved problem and is considered a very challenging task. The proposed workshop on video understanding aims to address the challenges in this field by making the following contributions:
- Bringing together leading experts in the field of video understanding to help propel the field forward. This includes junior and senior researchers, with equal representation and contribution from academia and industry
- The workshop also aims to stimulate and accelerate research progress in the field of video understanding to match the requirements of real-world applications by identifying the challenges and ways to address them through a panel discussion between experts, presenters and attendees
Topics of interest include, but are not limited to:
- Application of video understanding to healthcare and media production
- View-invariant and 3D video understanding (e.g. 3D action recognition)
- Transformers for video understanding
- Generating synthetic data for video understanding tasks
- Self-supervised learning for video understanding
- Multi-modal video understanding
- Action/event detection
- Video captioning
- Video editing and summarization
- Videography/virtual cinematography
- Video search and retrieval
Workshop on Machine Vision for Earth Observation and Environment Monitoring (MVEO)
Friday, 24th November 2023
Website: https://mveo.github.io
Organisers: Chunbo Luo (University of Exeter), Diego Marcos (Inria Université Côte d’Azur), Fabiana Di Ciaccio (University of Florence), Huiyu Zhou (University of Leicester), Jan Boehm (University College London), Jefersson A. dos Santos (University of Stirling), Keiller Nogueira (University of Stirling), Paolo Russo (Sapienza University of Rome), Ribana Roscher (Research Center Jülich; University of Bonn), Ronny Hänsch (German Aerospace Center)
Contact: [email protected], [email protected]
Venue: National Subsea Centre, 3 International Ave, Dyce
The primary goal of this workshop is to foster collaboration and idea exchange among the Computer Vision, Remote Sensing and Environmental Monitoring communities, both nationally and internationally. We aim to bring together researchers and experts from the three fields to promote interdisciplinary research, encourage innovative computer vision approaches for automated interpretation of Earth observation and other correlated data, and enhance knowledge within the vision community for this rapidly evolving and highly impactful area of research. The implications of this research are far-reaching, affecting human society, economy, industry, and the environment.
More precisely, a non-exhaustive list of topics of interest includes the following:
We are following the relatively recent trend in the field of machine learning that shifts the focus from improving machine learning models, also known as model-centric approaches, to optimizing the data used to train these models, including how this data is presented during the learning process. This shift has opened up a research direction known as data-centric machine learning.
To this end, we are combining this workshop with a data-centric machine learning challenge. Participants will be given an Earth observation data set, a task (such as segmentation), and a machine learning model. Their goal is to propose approaches that improve the quality of the training data to increase the overall performance on the test data. The best solutions will be selected based on the increase in performance on the test data. Creative and innovative approaches are especially encouraged.
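As a minimal illustration of what a data-centric entry might look like, the sketch below keeps the provided model and task fixed and only curates the training data, here by keeping the samples whose labels agree best with the baseline model's predictions before retraining. All names (curate_training_set, load_challenge_data, keep_fraction) are hypothetical placeholders; the actual data, task, and baseline model are those provided by the organisers.

```python
import numpy as np

def curate_training_set(images, masks, model, keep_fraction=0.9):
    """Rank training samples by agreement between the provided segmentation
    labels and the baseline model's predictions, and keep the most consistent
    ones. All inputs are hypothetical placeholders for the challenge data."""
    scores = []
    for image, mask in zip(images, masks):
        pred = model.predict(image)        # baseline segmentation output
        agreement = np.mean(pred == mask)  # fraction of matching pixels
        scores.append(agreement)
    order = np.argsort(scores)[::-1]       # most consistent samples first
    keep = order[: int(len(order) * keep_fraction)]
    return [images[i] for i in keep], [masks[i] for i in keep]

# Hypothetical usage: retrain the fixed baseline on the curated data and
# evaluate on the untouched test split, which is how entries are scored.
# images, masks, model = load_challenge_data()            # placeholder loader
# clean_images, clean_masks = curate_training_set(images, masks, model)
# model.fit(clean_images, clean_masks)
# print(model.evaluate(test_images, test_masks))
```

Filtering is only one option; relabeling, reweighting, augmenting, or reordering the training data are equally valid data-centric strategies under the same fixed-model constraint.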
All the details on the challenge can be found on the official MVEO website.
Artificial Intelligence and Computer Vision for Neurodegenerative Diseases Assessment: Advancing Computer Science in Dementia and Neurodegenerative Disorders (AI CV for NDS)
Friday, 24th November 2023
Website: https://sites.google.com/view/ai-cv-for-nds
Organisers: Donato Impedovo (University of Bari), Lerina Aversano (University of Sannio), Vincenzo Dentamaro (University of Bari Aldo Moro)
Contact: [email protected]
Venue: King's College, University of Aberdeen
We are delighted to host a pioneering workshop for the British Machine Vision Conference (BMVC) 2023, focusing on the latest advances in computer vision and behavioral biometrics applied to the assessment and understanding of neurodegenerative diseases. Neurodegenerative disorders, including Alzheimer's, Parkinson's, and Huntington's diseases, pose significant challenges to healthcare systems worldwide due to their progressive nature and the lack of effective treatment options. In recent years, AI-based approaches, particularly computer vision and behavioral biometrics, have shown promising results in early detection, diagnosis, and monitoring of these diseases, revolutionizing the way we approach neurodegenerative disorders.
This workshop aims to bring together researchers, clinicians, and industry professionals to discuss state-of-the-art techniques, share novel ideas, and establish new collaborations in the field of computer vision and behavioral biometrics applied to neurodegenerative diseases. The primary goal is to foster the development and integration of innovative AI solutions to improve the assessment and management of these debilitating conditions.
The workshop will cover a range of topics, including but not limited to:
1. Advanced computer vision techniques for analyzing medical imaging data, such as MRI, PET, and OCT, in neurodegenerative disorders.
2. Behavioral biometrics and gait analysis for early detection and diagnosis of neurodegenerative diseases.
3. Machine learning and deep learning algorithms for facial expression and emotion recognition in cognitive assessment.
4. Eye-tracking and pupillometry in monitoring disease progression and response to treatment.
5. Computer vision-based assessment of speech, voice, and language impairments in neurodegenerative diseases.
6. Integration of AI-driven computer vision and behavioral biometrics with wearable devices, IoT, and telemedicine for remote monitoring and management of patients.
7. Challenges and opportunities in data collection, annotation, and sharing for AI research in computer vision and behavioral biometrics applied to neurodegenerative diseases.
The workshop will feature invited keynote speakers, oral presentations, and poster sessions, providing ample opportunities for participants to engage in stimulating discussions and exchange ideas. We also plan to organize a panel discussion to address the challenges and future directions in the field of computer vision and behavioral biometrics applied to neurodegenerative diseases.
We firmly believe that this workshop will contribute to the advancement of knowledge and foster innovation in the application of computer vision and behavioral biometrics techniques to address the pressing challenges posed by neurodegenerative diseases, ultimately improving the quality of life for millions of patients and their families worldwide.