Program

/ Agenda

Day 1: Thursday / 7 November

Copernicus Science Centre

Wybrzeże Kościuszkowskie 20, 00-390 Warsaw

11:00 - 12:00

Registration

(Also open after 12:00)
12:00 - 12:15 / Main Hall

Opening remarks

12:00 - 12:15 / Halls A & B

Opening remarks

13:15 - 14:45

Lunch

15:45 - 16:15

Coffee

16:15 - 17:45 / Main Hall

ElevenLabs AI Audio Challenge Final

  • Talk: AI Audio: New Research and Applications Frontier for Generative AI Models
    by Georgy Marchuk (ElevenLabs)
  • Presentations of Finalists
    in Front of Expert Jury
  • Estimator Quiz
    with Prizes for the Audience
  • Winners Announcement
19:00 - 24:00

Conference Party

Bolek Pub & Restaurant, al. Niepodległości 211, 02-086 Warszawa
Remember to bring your badges with you; they are required to enter.
10:00 - 18:00

AI Art Exhibition

Academy of Fine Arts in Warsaw (floors -1 and 2), Wybrzeże Kościuszkowskie 37, 00-379 Warsaw

Day 2: Friday / 8 November

Copernicus Science Centre

Wybrzeże Kościuszkowskie 20, 00-390 Warsaw

08:30 - 09:30

Registration

(Also open after 09:30)
12:00 - 12:20

Coffee

13:20 - 14:30

Lunch

10:00 - 18:00

AI Art Exhibition

Academy of Fine Arts in Warsaw (floors -1 and 2), Wybrzeże Kościuszkowskie 37, 00-379 Warsaw

Day 3: Saturday / 9 November

Copernicus Science Centre

Wybrzeże Kościuszkowskie 20, 00-390 Warsaw

08:30 - 09:30

Registration

(Also open after 09:30)
13:30 - 15:00

Lunch

17:15 - 17:45 / Main Hall

Closing ceremony

10:00 - 18:00

AI Art Exhibition

Academy of Fine Arts in Warsaw (floors -1 and 2), Wybrzeże Kościuszkowskie 37, 00-379 Warsaw

Day 4: Sunday / 10 November

Faculty of Mathematics, Informatics and Mechanics, University of Warsaw

Stefana Banacha 2, 02-097 Warsaw

/ Invited talks

Iryna Gurevych

Technical University of Darmstadt

Invited talk 1: Towards Real-World Fact-Checking with Large Language Models

Thursday / 7 November 12:15 - 13:15 Main Hall

Abstract:

Misinformation poses a growing threat to our society. It has a severe impact on public health by promoting fake cures or vaccine hesitancy, and it is used as a weapon during military conflicts to spread fear and distrust. Current research on natural language processing (NLP) for fact-checking focuses on identifying evidence and predicting the veracity of a claim. People’s beliefs, however, often do not depend on the claim and rational reasoning but on credible content that makes the claim seem more reliable, such as scientific publications or visual content that was manipulated or stems from unrelated contexts. To combat misinformation, we need to show (1) “Why was the claim believed to be true?”, (2) “Why is the claim false?”, and (3) “Why is the alternative explanation correct?”. In this talk, I will zoom in on two critical aspects of such misinformation supported by credible though misleading content. Firstly, I will present our efforts to dismantle misleading narratives based on fallacious interpretations of scientific publications. Secondly, I will show how we can use multimodal large language models to (1) detect misinformation based on visual content and (2) provide strong alternative explanations for the visual content.

Biography:

Iryna Gurevych is Professor of Ubiquitous Knowledge Processing in the Department of Computer Science at the Technical University of Darmstadt in Germany. She is also an adjunct professor at MBZUAI in Abu Dhabi, UAE, and an affiliated professor at INSAIT in Sofia, Bulgaria. She is widely known for fundamental contributions to and innovative applications of natural language processing and machine learning. Professor Gurevych is a past president of the Association for Computational Linguistics (ACL), the leading professional society in the field of natural language processing. Her many accolades include being a Fellow of the ACL, an ELLIS Fellow, and the recipient of an ERC Advanced Grant.

Bernardino Romera Paredes

Google DeepMind

Invited talk 2: Evolving programs with LLMs

Thursday / 7 November 12:15 - 13:15 Halls A & B

Abstract:

In this talk I will present FunSearch, a method to search for new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM, whose goal is to provide creative solutions in the form of computer code, with an automated “evaluator”, which guards against hallucinations and incorrect ideas. By iterating back-and-forth between these two components, initial solutions “evolve” into new knowledge. I will present the application of FunSearch to a central problem in extremal combinatorics — the cap set problem — where we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. Then, I will present the application of FunSearch to an algorithmic problem, online bin packing, which showcases the generality of the method. In this use case, FunSearch finds new heuristics that improve upon widely used baselines. I will conclude the talk by discussing the implications of searching in the space of code.

Biography:

Bernardino is a researcher at Google DeepMind, where he has been a core team member of AlphaFold2 for protein folding and AlphaTensor for matrix multiplication algorithms. More recently, he initiated FunSearch, a system which uses Large Language Models for program search and has discovered new mathematical knowledge. Long before that, in 2009, Bernardino started his AI journey by studying for the MSc in Computational Statistics and Machine Learning at UCL. In 2010 he started a PhD, also at UCL, supervised by Prof. Massimiliano Pontil and Prof. Nadia Berthouze, and in 2013 he did an internship at Microsoft Research. After finishing his PhD in 2014, he joined the Torr Vision Group as a postdoc at the University of Oxford, researching semantic segmentation and zero-shot learning. He has several papers published in Nature, as well as in machine learning conferences such as NeurIPS and ICML. His main motivation is to leverage the power of AI to bring light to important scientific problems.

Wojciech Samek

Technical University of Berlin

Invited talk 3: Explainable AI for LLMs

Thursday / 7 November 14:45 - 15:45 Halls A & B

Abstract:

Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process. However, achieving faithful explanations for the entirety of a black-box transformer model while maintaining computational efficiency is difficult. This talk will present a recent extension of the Layer-wise Relevance Propagation (LRP) attribution method that handles attention layers, addressing this challenge effectively. Our method is the first to faithfully and holistically attribute not only input but also latent representations of transformer models, with computational efficiency similar to a single backward pass. Since LRP is a model-aware XAI method, it not only identifies the relevant features in input space (e.g., pixels or words) but also provides deep insights into the model’s representations and reasoning process. Through extensive evaluations against existing methods on Llama 2, Flan-T5 and the Vision Transformer architecture, we demonstrate that our proposed approach surpasses alternative methods in terms of faithfulness and enables the understanding of latent representations, opening the door for concept-based explanations.

Biography:

Wojciech Samek is a Professor in the EECS Department at TU Berlin and is jointly heading the AI Department at Fraunhofer HHI. He is a Fellow at BIFOLD - Berlin Institute for the Foundation of Learning and Data, the ELLIS Unit Berlin, and the DFG Research Unit DeSBi. Furthermore, he is a Senior Editor for IEEE TNNLS, an Associate Editor for Pattern Recognition, and an elected member of the IEEE MLSP Technical Committee and Germany's Platform for AI. He also serves as a member of the scientific advisory board of IDEAS NCBR - Polish Centre of Innovation in the Field of Artificial Intelligence. Wojciech has co-authored more than 200 papers and was the leading editor of the Springer book "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning" (2019), and co-editor of the open access Springer book “xxAI – Beyond explainable AI” (2022). He has served as Program Co-Chair for IEEE MLSP'23, and as Area Chair for NAACL'21 and NeurIPS'23, and is a recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award and the 2022 Digital Signal Processing Best Paper Prize.

Roberto Calandra

Technical University of Dresden

Invited talk 4: Perceiving, Understanding, and Interacting through Touch

Friday / 8 November 09:30 - 10:30 Main Hall

Abstract:

Touch is a crucial sensor modality in both humans and robots. Recent advances in tactile sensing hardware have resulted, for the first time, in the availability of mass-produced, high-resolution, inexpensive, and reliable tactile sensors. In this talk, I will argue for the importance of creating a new computational field of "Touch processing" dedicated to the processing and understanding of touch through the use of Artificial Intelligence. This new field will present significant challenges both in terms of research and engineering, but also significant opportunities in digitizing a new sensing modality. To conclude, I will present some applications of touch in robotics and discuss other future applications.

Biography:

Roberto Calandra is a Full (W3) Professor at the Technische Universität Dresden, where he leads the Learning, Adaptive Systems and Robotics (LASR) lab. Previously, he founded the Robotics Lab at Meta AI (formerly Facebook AI Research) in Menlo Park. Prior to that, he was a Postdoctoral Scholar at the University of California, Berkeley (US) in the Berkeley Artificial Intelligence Research (BAIR) Lab. His education includes a Ph.D. from TU Darmstadt (Germany), an M.Sc. in Machine Learning and Data Mining from Aalto University (Finland), and a B.Sc. in Computer Science from the Università degli Studi di Palermo (Italy). His scientific interests are broadly at the conjunction of Robotics and Machine Learning, with the goal of making robots more intelligent and useful in the real world. Among his contributions is the design and commercialization of DIGIT, the first commercial high-resolution compact tactile sensor, which is currently the most widely used tactile sensor in robotics. Roberto served as Program Chair for AISTATS 2020 and as Guest Editor for the JMLR Special Issue on Bayesian Optimization, and has co-organized over 16 international workshops (including at NeurIPS, ICML, ICLR, ICRA, IROS, RSS). In 2024, he received the IEEE Early Academic Career Award in Robotics and Automation.

Stanisław Jastrzębski

Molecule.one

Invited talk 5: molecule.one: Candid Stories and Hard Lessons from our Journey Building an AI for Science Startup

Friday / 8 November 09:30 - 10:30 Hall A

Abstract:

For the first time, we’re sharing the story behind Molecule.One. Founded in 2018 to make making medicines faster, we built the first generative deep learning product for chemistry. As early as 2019, we battled problems that we now understand as hallucination and alignment. We then almost went under. We bounced back by building the most comprehensive reaction dataset in our highly automated laboratory, only to face another almost-fatal adversity. In the process, we found ourselves immersed in a broader, sweeping change in the AI landscape. We offer a candid look at the ups and downs, distilling insights and giving broader predictions about the future of AI-first companies.

Biography:

Stanislaw Jastrzebski serves as the CTO and Chief Scientist at Molecule.one, a biotech startup in the drug discovery space. He is passionate about improving the fundamental aspects of deep learning and applying it to automate scientific discovery. He completed his postdoctoral training at New York University in deep learning. His PhD thesis was based on work on foundations of deep learning done during research visits at MILA (with Yoshua Bengio) and the University of Edinburgh (with Amos Storkey). He received his PhD from Jagiellonian University, advised by Jacek Tabor. Beyond academia, he gained industrial experience at Google, Microsoft and Palantir. In his scientific work, he has published at leading machine learning venues (NeurIPS, ICLR, ICML, JMLR, Nature SR). He is also actively contributing to the machine learning community as an Area Chair (most recently NeurIPS '23) and as an Action Editor for TMLR. At Molecule.one, he leads technical teams working on software for synthesis planning based on deep learning, public data sources, and experiments from a highly automated laboratory.

Jan Peters

Technical University of Darmstadt

Invited talk 6: Inductive Biases for Robot Reinforcement Learning

Friday / 8 November 16:10 - 17:10 Hall A

Abstract:

Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. To accomplish robot reinforcement learning from just a few trials, the learning system can no longer explore all learnable solutions but has to prioritize one solution over others, independent of the observed data. Such prioritization requires explicit or implicit assumptions, often called ‘inductive biases’ in machine learning. Extrapolation to new robot learning tasks requires inductive biases deeply rooted in general principles and domain knowledge from robotics, physics and control. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis, juggling and manipulation of various objects.

Biography:

Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt. He has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society’s Early Career Award, as well as numerous best paper awards. In 2015, he received an ERC Starting Grant; he was appointed an IEEE Fellow in 2019 and an ELLIS Fellow in 2020.

Tomek Korbak

UK AI Safety Institute

Invited talk 7: RLHF as conditioning on human preferences

Friday / 8 November 16:10 - 17:10 Hall B

Abstract:

The dominant approach to aligning large language models with human preferences is reinforcement learning from human feedback (RLHF): finetuning an LLM to maximise a reward function representing human preferences. In this talk, I will try to present a complementary perspective: that we can also think about LLM alignment not in terms of reward maximisation but in terms of conditioning LMs on evidence about human preferences. First, I will explain how minimising the classic RLHF objective is equivalent to approximate Bayesian inference. Then, I will go on to argue that the conditioning view also inspires other approaches to LLM alignment. I will discuss three: minimising different f-divergences from a target distribution, learning from feedback expressed in natural language, and aligning LMs already during pretraining by directly learning a distribution conditional on alignment score. I will end the talk by discussing how these approaches correspond to conditioning successive priors on successive pieces of evidence about human preferences.

Biography:

Tomek Korbak is a Senior Research Scientist at the UK AI Safety Institute working on safety cases for frontier models. Previously, he was a Member of Technical Staff at Anthropic working on honesty. Before that, he did a PhD at the University of Sussex focusing on RL from human feedback (RLHF) and spent time as a visiting researcher at NYU working with Ethan Perez, Sam Bowman and Kyunghyun Cho. He studied cognitive science, philosophy and physics at the University of Warsaw.

Alejandro Frangi

University of Manchester

Invited talk 8: Unveiling the Potential of AI-enabled In-silico Trials in Medical Innovation

Saturday / 9 November 09:30 - 10:30 Main Hall

Abstract:

The rapid introduction of novel medical technologies necessitates swift, reliable scientific validation of their safety and efficacy to protect patient welfare. Traditional clinical trials, while essential, face challenges such as detecting low-frequency side effects, high costs, and practical limitations, especially with paediatric patients, rare diseases, and underrepresented ethnic groups. In-silico trials (IST), powered by Computational Medicine, offer a promising solution by using computer simulations to test medical products on virtual patient populations. This approach allows for the a-priori optimisation of clinical outcomes, thorough risk assessment, and failure mode analysis before human trials. Although in-silico evidence is still emerging, it has the potential to revolutionise health and life sciences R&D and regulatory processes.

Biography:

Prof. Alejandro Frangi FREng holds the Bicentennial Turing Chair in Computational Medicine at the University of Manchester, UK, with joint appointments in the Schools of Computer Science and Health Science. Additionally, he is the Royal Academy of Engineering Chair in Emerging Technologies, specialising in Precision Computational Medicine for in silico trials of medical devices. He serves as the Director of the Christabel Pankhurst Institute for Health Technology Research and Innovation and is a Fellow at the Alan Turing Institute. Recently, his research vision was recognised with an ERC Advanced Grant from the European Research Council. He leads the InSilicoUK Pro-Innovation Regulations Network (www.insilicouk.org). Professor Frangi's main research interests lie at the crossroads of medical image analysis and modelling, with an emphasis on machine learning (phenomenological models) and computational physiology (mechanistic models). His work has had a profound impact on the field, particularly in the areas of cardiovascular, musculoskeletal and neurosciences. He is particularly interested in statistical methods applied to population imaging and in silico clinical trials. Prof. Frangi's contributions to the field have been widely recognised. He has received numerous accolades, including the IEEE Engineering in Medicine and Biology Technical Achievement Award (2021) and Early Career Award (2006). In 2011, he was honoured with the UPF Medal for his service as Dean of the Escuela Politècnica Superior. He also received the ICREA-Academia Prize from the Institució Catalana de Recerca i Estudis Avançats (ICREA) in 2008, and a President's International Initiative Award from the Chinese Academy of Sciences in 2019. Prof. Frangi has also edited a textbook on Medical Image Analysis, published in the MICCAI-Elsevier Book Series by Academic Press.

Tom Rainforth

University of Oxford

Invited talk 9: Modern Bayesian Experimental Design

Saturday / 9 November 09:30 - 10:30 Hall A

Abstract:

Bayesian experimental design (BED) provides a powerful and general framework for optimizing the design of experiments. However, its deployment often poses substantial computational challenges that can undermine its practical use. In this talk, I will outline how recent advances have transformed our ability to overcome these challenges and thus utilize BED effectively, before discussing some key areas for future development in the field.

Biography:

Tom is a Senior Researcher in Machine Learning and leader of the RainML Research Lab at the Department of Statistics in the University of Oxford. He is the principal investigator for the ERC Starting Grant Data-Driven Algorithms for Data Acquisition. His research covers a wide range of topics in and around machine learning and experimental design, with areas of particular interest including Bayesian experimental design, deep learning, representation learning, generative models, Monte Carlo methods, active learning, probabilistic programming, and approximate inference.

Lucas Beyer

Google DeepMind

Invited talk 10: Computer Vision in the age of LLMs

Saturday / 9 November 16:10 - 17:10 Hall A

Abstract:

I will discuss how computer vision has changed with the integration of language and the advent of LLMs, with a focus on the recent work of our group. Depending on the audience's familiarity, I may spend most of the time covering the way these modalities are integrated via SigLIP, CapPa, and PaLI, as well as touch on some fairness and cultural diversity aspects (the "No Filter" paper), or spend more time on the advanced ways in which classically "typical vision" tasks, such as detection, segmentation, and monocular depth, are finding their way into VLMs and the standard language-modeling approach, covering UViM, RL-tuning, GIVT, and more recent approaches towards fully end-to-end multimodal learning.

Biography:

Lucas grew up in Belgium wanting to make video games and their AI. He went on to study mechanical engineering at RWTH Aachen in Germany, then did a PhD in robotic perception and computer vision there too. Now, he is a staff research scientist at Google DeepMind (formerly Brain) in Zürich, leading multimodal vision-language research.

Yuki Asano

University of Technology Nuremberg

Invited talk 11: Improving Foundation Models (with academic compute)

Saturday / 9 November 16:10 - 17:10 Hall B

Abstract:

I will talk about how we can build on top of pretrained Foundation Models to achieve better models for vision, language, audio and multi-modal tasks. First, I will show that, despite their strong performance, DINOv2 and other vision backbones often lack spatial understanding of images. To counteract this, we use NeCo, a new post-pretraining approach based on patch-nearest neighbors, which significantly improves the dense performance of this and any other model despite using only 16 GPU hours. Next, I will introduce two parameter-efficient finetuning methods, for LLMs and for VLMs, that significantly reduce the number of parameters required to successfully tune these models. Finally, I will present our latest work, where we show that gradients from self-supervised losses can be successfully used as features for improved retrieval performance across vision, audio and text.

Biography:

Yuki Asano is the head of the Fundamental AI (FunAI) Lab and a full Professor at the University of Technology Nuremberg. Prior to this, Yuki led the QUVA lab at the University of Amsterdam, where he closely collaborated with Qualcomm AI Research. He did his PhD at the Visual Geometry Group (VGG) at the University of Oxford, where he worked with Andrea Vedaldi and Christian Rupprecht. Also, he loves running, the mountains, and their combination.

/ Discussion Panels

Discussion Panel 1: AI in Law

Thursday / 7 November 14:45 - 15:45 Main Hall

Join us for the "AI in Law" panel, where we will explore the influence of artificial intelligence on the legal sector. During the discussion, we will delve into the potential role of AI-driven technologies and their promise to increase access to justice, as well as the challenges of responsible AI adoption.

Organized in cooperation with OCTO Legal. Moderators: Marek Ballaun, Emilia Wiśnios

Gabriela Bar

Founder of the Gabriela Bar Law & AI firm, an experienced expert in the field of new technology law and the law and ethics of Artificial Intelligence (AI), researcher, member of Women in AI, recognised in Forbes’ list of the 25 Best Business Lawyers (2022) and the TOP100 Women in AI in Poland (2022). Member of the IEEE Legal Committee within the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the Association for New Technology Law, and the FBE New Technologies Commission. Teaches at several universities, actively participates in scientific conferences and industry seminars. Author of numerous publications in the field of AI, digital services, and personal data protection. Lawyer in EU projects: Smart Human Oriented Platform for Connected Factories – SHOP4CF, Multi-Agent Systems for Pervasive Artificial Intelligence for assisting Humans in Modular Production Environments – MAS4AI, and an independent AI ethics expert in the EXTRA-BRAIN project.

Michał Jackowski

Prof. Michał Jackowski - Executive MBA at ESCP Europe, International Cooperation Leader of the AI Working Group at the Ministry of Digital Affairs, member of the Plenary Group drafting the LLM Code of Practice for the EU AI Office, law professor (SWPS University), attorney and tax advisor with over 20 years of experience as an entrepreneur. Advisor and representative of the technology industry in difficult legislative processes. Co-founder of DSK Law Firm, a law and tax firm specializing in advising the IT sector, and co-founder of the innovative startups AnyLawyer and LexDigital. Arbitrator of the arbitration court at the Polish Chamber of Information Technology and Telecommunications. Author of several books and numerous scientific publications in the field of law. A promoter of knowledge at the intersection of law and AI, he hosts the Monday Bagel podcast on YouTube, where he regularly publishes interviews with scientists, legal practitioners and AI experts. Privately, he is a marathon runner and skier, and enjoys reading books, especially about digital transformation, medieval history and Asian culture.

Alkan Dogan

Alkan Dogan is the European Lead for Legal Engineering at Simmons & Simmons Wavelength. Based in Frankfurt, he spearheads the firm’s legal engineering activities and projects across Europe. Alkan specializes in initiating and creating tech-driven solutions for clients to ensure streamlined delivery of legal services. This includes the use of technology, data analytics, AI, design thinking and process optimization techniques to increase the usability and efficiency of client processes. In addition to his role as European Lead for Legal Engineering, Alkan is currently pursuing a doctorate (Dr. rer. pol.) at the Friedrich Schiller University in Jena. His research focuses on the impact of behavioural biases on effective innovation management and the measures organizations can take to mitigate these effects.

Discussion Panel 2: Career Paths in ML

Friday / 8 November 12:20 - 13:20 Main Hall

Join us for the "Career Paths in Machine Learning" panel, where we will explore the critical decisions and turning points in the careers of machine learning professionals. This discussion will delve into the important choice between pursuing a career in industry or academia.

Moderators: Alicja Grochocka-Dorocińska, Maciej Chrabąszcz

Bartłomiej Twardowski

IDEAS NCBR, Computer Vision Center UAB

Bartłomiej Twardowski is a Research Team Leader at IDEAS NCBR and a researcher at the Computer Vision Center, Universitat Autònoma de Barcelona. His research interests focus on computer vision and continual learning of neural networks. He earned his Ph.D. in 2018, focusing on recommender systems and neural networks. Following his doctoral studies, he served as an assistant professor at Warsaw University of Technology in the AI group for 1.5 years before joining the Computer Vision Center, UAB, for a post-doctoral program. He has been actively involved in various research projects related to DL/NLP/ML (ranging from €40k to €1.4M). He is a Ramón y Cajal fellow. He has wide industry experience (more than 12 years), including at international companies, e.g., Zalando, Adform, Huawei, and Naspers Group (Allegro), as well as helping startups with research projects (Sotrender, Scattered). Throughout his career, he has published papers at prestigious conferences such as CVPR (2020, 2024; 3 papers), NeurIPS (2020, 2023), ICCV (2021, 2023), ICLR (2023, 2024), and ECIR (2021, 2023). Additionally, he has served as a reviewer for multiple AI/ML conferences, e.g., AAAI, CVPR, ECCV, ICCV, ICML, WACV and NeurIPS. Currently, his research primarily focuses on lifelong machine learning in computer vision, efficient neural network training, transferability and domain adaptation, as well as information retrieval and recommender systems.

Anna Dawid

Leiden University

Ania is an assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) at Leiden University in the Netherlands, happily playing with interpretable machine learning for science, ultracold platforms for quantum simulations, and the theory of machine learning. Before joining LIACS, she was a research fellow at the Center for Computational Quantum Physics of the Flatiron Institute in New York. In 2022, she defended her joint Ph.D. in physics and photonics under the supervision of Prof. Michał Tomza (Faculty of Physics, University of Warsaw, Poland) and Prof. Maciej Lewenstein (ICFO – The Institute of Photonic Sciences, Spain). Before that, she did an MSc in quantum chemistry and a BSc in biotechnology at the University of Warsaw. Ania is the first author of the book "Machine Learning in Quantum Sciences" by Cambridge University Press (in press). She is also a 2022 FNP START laureate, awardee of two NCN grants, and one of the selected participants in the Lindau Nobel Laureate Meeting in 2024.

Tomek Korbak

UK AI Safety Institute

Tomek Korbak is a Senior Research Scientist at the UK AI Safety Institute working on safety cases for frontier models. Previously, he was a Member of Technical Staff at Anthropic working on honesty. Before that, he did a PhD at the University of Sussex focusing on RL from human feedback (RLHF) and spent time as a visiting researcher at NYU working with Ethan Perez, Sam Bowman and Kyunghyun Cho. He studied cognitive science, philosophy and physics at the University of Warsaw.

Discussion Panel 3: AI in Medicine

Saturday / 9 November 16:10 - 17:10 Main Hall

Join us for a panel discussion on "AI in Medicine", where we will delve into the influence of artificial intelligence on the healthcare sector. The focus areas are practical applications of AI in clinical settings and the role of AI in expediting research breakthroughs across medicine and biology.

Moderators: Błażej Dolicki, Aleksandra Daniluk

Anna Gambin

University of Warsaw

Professor Anna Gambin is deputy dean for research and international cooperation at the Faculty of Mathematics, Computer Science and Mechanics at the University of Warsaw (term 2016-2024). In her scientific work she deals with mathematical modeling of molecular processes and efficient algorithms for the analysis of biomedical data. Recently, her research has focused on computational methods supporting medical diagnostics based on genomic and proteomic data. She is the author of over 100 scientific publications and, to date, has supervised 13 PhDs in computational biology.

Wouter Bulten

Wouter Bulten

Aiosyn

Wouter is the Chief Operating and Product Officer (COO & CPO) of Aiosyn, where he works on precision pathology for cancer and kidney diseases using AI. Wouter studied Artificial Intelligence and worked as a software engineer and data scientist. He holds a Ph.D. in computational pathology with a focus on using artificial intelligence for clinical diagnostics. Wouter’s research showed that AI algorithms could grade prostate cancer at the level of experienced pathologists and actively assist pathologists in making better diagnoses. Wouter was also one of the main organizers of the PANDA challenge, collaborating with the Karolinska Institute and Google Health. Wouter’s research was published in top journals such as The Lancet Oncology and Nature Medicine.

Piotr Wygocki

Piotr Wygocki

MIM Fertility

Piotr Wygocki, PhD, is the Co-Founder and CEO of MIM Fertility, a Polish deep tech company developing AI solutions tailored to the needs of IVF clinics. He is also the Co-Founder of MIM Solutions, a software house specializing in MedTech innovations. Piotr Wygocki holds a Ph.D. in Informatics and dual master's degrees in Informatics and Mathematics from the University of Warsaw, where he serves as an Assistant Professor. He is a winner of KaggleDays Warsaw 2019, a member of the AIFS, and a member of the Business Advisory Group to EC President Ursula von der Leyen for the Global Gateway Programme. He has also earned recognition in the Deloitte Technology Fast 50 Central Europe 2023.

Discussion Panel 4: AI Safety

Friday / 8 November 16:10 - 17:10 Main Hall

Join us for the “AI Safety” panel, powered by ElevenLabs, where we will discuss the real-world challenges of building and assessing models for safety while maintaining top performance. We will explore vectors of AI manipulation, as well as automated and human-in-the-loop systems for preventing misuse.

Moderator: Aleksandra Pedraszewska

Aleksandra Pedraszewska

Aleksandra Pedraszewska

ElevenLabs

Aleksandra leads AI Safety Operations at ElevenLabs – the most realistic AI audio tool for generating speech, voices and dubbing of content in 32 languages, with over 1 million users (including TIME, HarperCollins, and NVIDIA) and a $1.1bn valuation. She is responsible for ensuring that ElevenLabs’ products are developed, deployed, and used in a safe way, maximising audio AI's transformative potential. Aleksandra has extensive operational experience, having directed a venture-backed deep tech company for 7 years, and holds an MPhil in Technology Policy from Cambridge Judge Business School and a BA from the University of Cambridge. She supports IP-driven companies as an Entrepreneur-in-Residence at Cambridge and a mentor at Conception X.

Anna Bialas

Anna Bialas

Cohere

Anna is a Machine Learning Engineer at Cohere, a provider of foundational LLM models for enterprise customers. Her work focuses on post-training techniques to customize models and ensure safety, consistency, and accuracy, while also designing evaluation frameworks to assess model performance. Previously, she worked as a Quant at Goldman Sachs and as a Natural Language Data Scientist at Harvard Business School. Anna holds a Bachelor's degree in Computer Science from Oxford University and a Master's in Data Science from Harvard University. Her research on adversarial attacks on large language models was featured at ICML 2023.

Julia Bazinska

Julia Bazinska

Lakera AI

Julia is a Machine Learning Engineer at Lakera AI, developing their core product. Lakera AI is a frontrunner in AI security, known for empowering developers to build secure AI applications with Lakera Guard, which protects against prompt injections, data leaks, and other risks. Her career trajectory includes internships at Google, DeepMind, and IBM Research. Julia earned her Bachelor's degree in Computer Science from the University of Warsaw and completed her Master's at ETH Zurich in 2023. Her professional interests are in Machine Learning for Natural Language Processing, AI security, and performance optimization of ML systems.

Matija Franklin

Matija Franklin

Human In the Loop / UCL

Matija is an AI Safety Researcher who has worked with OpenAI, DeepMind, the AI Objectives Institute, ContextualAI, and Mercor on advancing methods for collecting human data for post-training and developing evals and benchmarks. His work on AI Manipulation and General Purpose AI Systems has impacted the EU AI Act, and he is currently involved in crafting the Codes of Practice for the EU AI Office. He holds a BA/MSc in Psychological Sciences and Experimental Psychology from the University of Cambridge, and completed his PhD in Cognitive Science at the Causal Cognition Laboratory at University College London.

/ Sponsor talks

Alicja Rączkowska photo

Alicja Rączkowska

Allegro

Sponsor talk 1: AlleNoise - large-scale text classification benchmark dataset with real-world label noise

Friday / 8 November 12:20 - 13:20 Hall A

Abstract:

Label noise remains a challenge for training robust classification models, as it might negatively impact their classification performance. To help with the development of new algorithms, we've published AlleNoise, a curated text classification dataset with real-world instance-dependent label noise. In this presentation, we will show how we've evaluated existing robust classification methods and argue why they are ill-equipped for handling such realistic noise patterns.

Biography:

Alicja Rączkowska is a Senior Research Engineer in the Machine Learning Research team at Allegro, where she works on applying and advancing NLP methods in the e-commerce domain. She obtained her PhD from the University of Warsaw, where she focused on machine learning methods for histopathology.

Charles Martinez photo

Charles Martinez

G-Research

Scarlett Bailey photo

Scarlett Bailey

G-Research

Sponsor talk 2: Careers in Quant Finance

Friday / 8 November 12:20 - 13:20 Hall B

Abstract:

An introduction to G-Research Quantitative Finance activities, what we look for in potential quants and how our recruitment process works.

Biography:

Dr Charles Martinez is the Academic Relations Manager at G-Research. Charles completed a PhD on phonon interactions in gallium nitride nanostructures at the University of Nottingham. His previous role was as Elsevier’s Key Account Manager, managing sales and renewals for UK Russell Group institutions and Government and Funding body accounts, including being one of the negotiators of the UK ScienceDirect Read and Publish agreement. Since leaving Elsevier, Charles has been dedicated to forming beneficial partnerships between G-Research and Europe’s top institutions; he lives in Cambridge, UK.

Scarlett Bailey is currently an intern in the Talent Acquisition department at G-Research, completing her placement year while studying for a BSc in Management at the University of Bath. As part of the team, Scarlett supports recruitment for the Summer Research Programme and other internship opportunities, with a focus on engaging emerging talent in quantitative research and technology. She also contributes to G-Research’s global attraction events, connecting students with career opportunities across the company.

Long Cheng photo

Long Cheng

Google

Sponsor talk 3: LLM Serving

Saturday / 9 November 15:00 - 15:30 Hall A

Abstract:

The presentation Scaling and Optimizing LLM Serving focuses on techniques for efficiently deploying Large Language Models (LLMs) to meet performance demands. It covers key fundamentals like prefill, decoding, and memory optimization, and explains the importance of metrics such as latency, throughput, and cost. The talk compares popular model-serving frameworks (vLLM, Hugging Face Text Generation Inference, and Nvidia TensorRT-LLM), highlighting their advantages and limitations. It also introduces optimization techniques, including quantization, batching, and memory management strategies like PagedAttention, and concludes with recommendations for selecting the right hardware and server configurations to maximize LLM efficiency and scalability.
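The memory-management idea behind PagedAttention, mentioned in the abstract, can be illustrated with a toy sketch of block-based KV-cache bookkeeping. This is not vLLM's implementation; the block size and counts are invented for illustration, and real systems manage GPU tensors rather than bare block IDs:

```python
# Toy sketch of block-based KV-cache bookkeeping (PagedAttention-style).
# A sequence's cache grows in fixed-size blocks that need not be contiguous,
# which avoids reserving one large contiguous region per sequence.

BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative value)

class BlockAllocator:
    """Hands out physical block IDs from a shared free pool."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))

    def alloc(self):
        return self.free.pop()

    def release(self, blocks):
        self.free.extend(blocks)

class Sequence:
    """Maps a growing token sequence onto non-contiguous cache blocks."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.blocks = []      # logical-to-physical block table
        self.num_tokens = 0

    def append_token(self):
        # Allocate a new block only when the last one is full.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.blocks.append(self.allocator.alloc())
        self.num_tokens += 1

allocator = BlockAllocator(num_blocks=64)
seq = Sequence(allocator)
for _ in range(40):           # decode 40 tokens
    seq.append_token()
print(len(seq.blocks))        # 3 blocks: ceil(40 / 16)
```

The point of the design is that memory waste is bounded by at most one partially filled block per sequence, instead of a whole over-provisioned contiguous buffer.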

Biography:

Long Cheng is an Engineering Leader at Google Cloud Vertex AI - a unified machine learning (ML) platform that aims to streamline the entire ML workflow, making it easier for data scientists and ML engineers to build, deploy, and scale AI models. Vertex AI offers a comprehensive suite of tools to manage every stage of the ML development lifecycle, from data preparation and model training to deployment, monitoring, and maintenance. As the site lead for Google Cloud Vertex AI in Poland, Long spearheads the development of MLOps infrastructure (model training pipelines, Model Garden, fine-tuning, etc.) and AI serving infrastructure for both online and batch predictions of Gemini and other pretrained AI models, leveraging GPU and TPU acceleration. Long brings a wealth of experience in product strategy and cross-functional leadership, effectively guiding engineering and product management teams to achieve evolving business goals. Prior to that, he gained over 12 years of experience in the U.S. as a lead engineer with top tech companies such as Google, Oracle and Microsoft.

Przemysław Spurek photo

Przemysław Spurek

IDEAS NCBR

Sponsor talk 4: NeRF based generative models

Saturday / 9 November 15:00 - 15:30 Hall B

Abstract:

Recently, generative models for 3D objects have gained much popularity in virtual reality (VR) and augmented reality (AR) applications. Training such models using standard 3D representations, like voxels or point clouds, is challenging and requires complex tools for proper color rendering. To overcome this limitation, Neural Radiance Fields (NeRFs) offer state-of-the-art quality in synthesizing novel views of complex 3D scenes from a small subset of 2D images. In the presentation, I describe generative models which use the hypernetwork paradigm to produce 3D objects represented by NeRFs. The advantage of these models over existing approaches is that they produce a dedicated NeRF representation for each object without sharing global parameters of the rendering component.

Biography:

Przemysław Spurek is the leader of the Neural Rendering research team at IDEAS NCBR and a researcher in the GMUM group operating at the Jagiellonian University in Krakow. In 2014, he defended his PhD in machine learning and information theory. In 2023, he obtained his habilitation degree and became a university professor. He has published articles at prestigious international conferences such as NeurIPS, ICML, IROS, AISTATS, and ECML. He co-authored the book Głębokie uczenie. Wprowadzenie [Deep Learning. An Introduction] – a compendium of knowledge about the basics of AI. He has been the principal investigator of PRELUDIUM, SONATA, OPUS, and SONATA BIS NCN grants. Currently, his research focuses mainly on neural rendering, in particular NeRF and Gaussian Splatting models.

Tomasz Sapiński photo

Tomasz Sapiński

Comarch

Sponsor talk 5: Revolutionizing IT Operations with AI: The Comarch Experience

Saturday / 9 November 15:30 - 16:00 Hall A

Abstract:

In today's digital landscape, IT operations must continuously evolve to meet increasing demands for efficiency, scalability, and resilience. Artificial Intelligence (AI) is transforming how IT teams manage systems, predict issues, and automate routine tasks, allowing organizations to move from reactive to proactive operational strategies. This presentation explores the impact of AI on IT operations, from AI-powered monitoring and predictive analytics to automated incident response and daily operations support systems. Gain insights into real-world AI applications in Comarch, the benefits of AI-empowerment, and how AI enhances and optimizes everyday tasks. Discover how integrating AI can revolutionize IT operations, driving greater innovation, agility, and cost-effectiveness in modern enterprises.

Biography:

Tomasz Sapiński is a highly experienced IT professional with a strong background in software development and image processing using AI. He holds a degree from Lodz University of Technology and has spent over 15 years in the industry, gaining extensive experience in project management and IT system implementations around the world. Tomasz is also a skilled Data Scientist with a keen interest in the development of AI and neural networks, and their practical applications.

Daniel Śliwiński photo

Daniel Śliwiński

LOT Polish Airlines

Patryk Radoń photo

Patryk Radoń

LOT Polish Airlines

Sponsor talk 6: Leveraging Feature Store for high-sparsity recommendations in LOT Polish Airlines

Saturday / 9 November 15:30 - 16:00 Hall B

Abstract:

Recommendation systems are an essential part of most e-commerce industries, often responsible for a significant portion of revenue. However, every branch of this industry has its own set of exceptions and challenges that affect how recommender systems have to be designed. In airlines, these exceptions become extreme as returning visitors become sparse, many purchases are anonymous, and items, such as flight tickets, can be sold at different prices depending on the circumstances. To overcome these challenges, we propose a simple method that utilizes information collected about users and items, omitting the need for extracting user/item embeddings with matrix factorization. Additionally, we will talk about how we used a Feature Store as a foundation for this project and why it could be beneficial to implement it in your Data Science team as well.

Biography:

Daniel Śliwiński holds a master’s degree in Data Science and Business Analytics from the University of Warsaw, where he focused on exploring the intersection of airline e-commerce and machine learning. He also has a bachelor’s degree in Japanese Studies. Currently, Daniel is a Junior Data Scientist at LOT Polish Airlines, where he has contributed to various projects for over two years, working primarily on personalization, segmentation, forecasting, and big data initiatives. Outside of work, Daniel trains in Olympic weightlifting and competes in triathlon.

Patryk Radoń holds bachelor's and master's degrees from Cracow's AGH and the Cracow University of Technology, and he has honed his skills in data science over five years of practical experience. Currently working at LOT Polish Airlines, he specializes in modeling customer behavior, with a statistical background in causal inference and customer personalization, focused primarily on areas related to CRM and e-commerce. His expertise lies in leveraging data to drive business growth, optimize customer interactions, and enhance marketing strategies. In his free time he strives to balance his time between rock climbing, traveling, and practicing martial arts.

/ Contributed talks

Klaudia Balcer photo

Klaudia Balcer

Computational Intelligence Research Group, University of Wrocław

Co-authors:

Piotr Lipinski

Contributed talk 1: Exceeding historical exposure in session-based recommender systems

Friday / 8 November 10:35 - 11:00 Hall A (CfC Session 1)

Abstract:

Recommender systems (RS) are invisible artificial intelligence tools accompanying us in our daily lives online: on streaming platforms, social media, banner ads, or online shops, providing us with personalised content, offers, training programs, etc. There are different scenarios of recommendation. Sometimes we track user activity over the years, gathering explicit opinions about the items. In session-based RS (SBRS), we track only short, anonymous sessions of user actions (like clicks) without direct feedback. Methods applied for modelling RS rely heavily on the nature of the data. In SBRS, surprisingly good accuracy can be achieved with the bigram model, which recommends the most common successors of the last item in the session prefix. This suggests how strongly users are biased by exposure (by what the previously used RS showed them). When training on biased data, it is hard to obtain unbiased results. Also, if we change the exposure (present new recommendations to the user), their preferences may also change (as they will be conditioned on different exposure), which breaks the tacit assumption in Machine Learning about the identical distribution of the data. To handle the issues caused by exposure, we propose to incorporate the uncertainty of the collected data into the training process. In our recent work, we proposed to treat sessions as realizations of a stochastic process and to train the model with random realizations of the underlying process in each epoch. As recent research suggests the supremacy of spherical embeddings, we decided to use the von Mises-Fisher distribution (a Gaussian conditioned on a sphere). It allowed us to directly include the uncertainty of user behaviour and to model dense user interest (directed to items similar to what the user was looking for, not to a specific one). Additionally, we used disrupted targets (session suffixes of length 1) during training.
Instead of optimizing the model to focus on a unique target, we used the true target and a number of fake targets (sampled with consideration of their similarity to the true one) in the loss function. This allowed us to simulate the change in exposure. We also provide a broad evaluation of the proposed approach, including datasets with various levels of popularity bias. We used several metrics scoring recommendation relevance (recall, distance in embedding space to the target), recommendation quality (average recommendation popularity, coverage), and embedding quality (radial basis function). We also performed the evaluation in groups defined by item popularity. The results showed that our approach improves the overall recommendation quality. The exact consequences of using stochastic augmentations in SBRS depend on the strength of the popularity bias in the data. For less biased data, we obtained an improved recommendation hit-rate. For more biased data, we obtained improvements in coverage and reduced the propagation of popularity into recommendations, while keeping the hit-rate stable. In the talk, we will give a short introduction to session-based recommendations and present our findings. We will also go into the details of the evaluation, to present various aspects of recommendation quality.
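The bigram baseline referred to in the abstract (recommend the most common successors of the last item in the session prefix) can be sketched in a few lines; the sessions below are invented for illustration:

```python
# Minimal sketch of a bigram session-based recommender: count item->next-item
# transitions over training sessions, then recommend the most frequent
# successors of the last item in the current session prefix.
from collections import Counter, defaultdict

sessions = [            # toy click sessions
    ["a", "b", "c"],
    ["a", "b", "d"],
    ["b", "c"],
    ["a", "c"],
]

successors = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        successors[prev][nxt] += 1

def recommend(session_prefix, k=2):
    last = session_prefix[-1]
    return [item for item, _ in successors[last].most_common(k)]

print(recommend(["x", "a"]))  # ['b', 'c']: 'b' follows 'a' twice, 'c' once
```

Such a counting model inherits exactly the exposure bias the abstract describes: it can only ever re-recommend transitions that the previously deployed system made possible.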

Biography:

Klaudia Balcer is a PhD Student and Research Assistant in the Computational Intelligence Research Group at the University of Wrocław. Using her mathematical background, she bridges between a stochastic interpretation of data uncertainty and deep learning models for recommender systems. She focuses on reducing exposure and popularity bias.

Tudor Coman photo

Tudor Coman

Adobe

Contributed talk 2: Leveraging Multi-Armed Bandit Algorithms for Dynamic Decision Making

Friday / 8 November 11:05 - 11:30 Hall A (CfC Session 1)

Abstract:

Consider the challenge of allocating resources efficiently across multiple options, where each choice's potential benefit is initially unknown. Multi-armed bandit algorithms provide a robust solution by dynamically adjusting decisions based on real-time feedback, maximizing outcomes across various sectors. From enhancing user engagement through smart A/B testing in web development to optimizing investment strategies in finance and personalizing treatment plans in healthcare, these algorithms are pivotal. Multi-armed bandit (MAB) algorithms have become a cornerstone in various fields due to their ability to balance exploration and exploitation effectively. This approach is used in contexts where decision-making under uncertainty is crucial, such as finance, healthcare, marketing, and more. The presentation will explore the broader applications of MAB algorithms, demonstrating their versatility and effectiveness in dynamic environments. Bandits are considered a typical Reinforcement Learning problem, but they are not currently as popular as other AI algorithms (such as Neural Networks, GPT etc.). However, due to the large number of applications that require informed decision-making, this is a topic of interest to the industry. At Adobe, we are using multi-armed bandits in Adobe Target for allocating traffic in A/B tests dynamically and automatically, and we are currently working on implementing this feature in Adobe Experience Platform for a similar use case. This talk will explore how multi-armed bandit algorithms use advanced statistical methods to revolutionize decision-making processes, making them more data-driven and results-oriented. The demo will be oriented towards A/B testing, but no prior background is necessary to be able to understand the concepts, as they are easily applicable to other fields as well. Join us to learn how integrating these algorithms into your strategies can lead to significant improvements in performance and resource utilization.
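As a concrete illustration of the exploration/exploitation trade-off described above, here is a minimal epsilon-greedy bandit. This is a generic textbook sketch, not Adobe Target's algorithm; the Bernoulli arm probabilities are invented:

```python
# Epsilon-greedy multi-armed bandit: with probability EPSILON pick a random
# arm (explore), otherwise pick the arm with the best estimated mean reward
# (exploit). Reward estimates are updated as incremental means.
import random

random.seed(0)
true_probs = [0.2, 0.5, 0.8]   # hypothetical Bernoulli arms (unknown to the agent)
counts = [0, 0, 0]             # pulls per arm
values = [0.0, 0.0, 0.0]       # running mean reward per arm
EPSILON = 0.1

for _ in range(5000):
    if random.random() < EPSILON:
        arm = random.randrange(len(true_probs))          # explore
    else:
        arm = max(range(len(values)), key=lambda a: values[a])  # exploit
    reward = 1 if random.random() < true_probs[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)  # estimates converge toward the true probabilities
```

In an A/B-testing setting, each "arm" is a variant and the reward is a conversion event; the same loop then shifts traffic toward the better variant while the test is still running.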

Biography:

Tudor Coman is a software engineer who has worked at Adobe for the past 6 years, since he was at the young age of 15. At first, he tackled and helped develop complex fraud prevention systems for online video platforms. His past experience also includes working with ranking/recommendation algorithms and MLOps. The work he does now enables customers to get the most out of Adobe products. This includes using Generative AI and Large Language Models to answer user questions about feature documentation and usage insights. He is also involved in the development of a large-scale experimentation platform that instruments A/B testing. Tudor has been recognized by Forbes Romania, which featured him in its 30 Under 30 list in 2020.

Patryk Wielopolski photo

Patryk Wielopolski

DataWalk

Contributed talk 3: From Theory to Practice: A Practitioner's Journey with Knowledge Graphs

Friday / 8 November 11:35 - 12:00 Hall A (CfC Session 1)

Abstract:

Large Language Models (LLMs) and Knowledge Graphs (KGs) are emerging as significant technological trends, as highlighted in recent industry reports. These technologies offer promising avenues for enhancing data-driven decision-making and operational efficiency across various sectors. While LLMs have recently gained significant attention, KGs remain less widely understood. This presentation draws on six years of experience in designing and implementing solutions with DataWalk, a Knowledge Graph platform, to offer a clear and practical introduction to Knowledge Graphs. It will explore real-world applications across diverse industries, illustrating how Knowledge Graphs can independently provide substantial value to organizations. Additionally, the talk will demonstrate the synergy between Knowledge Graphs and AI techniques, showcasing their combined potential to address complex challenges.

Biography:

Patryk Wielopolski is an R&D leader at DataWalk and a Ph.D. candidate in Artificial Intelligence at Wrocław University of Science and Technology. His work focuses on Knowledge Graphs and AI, where he has contributed to designing and implementing solutions in the finance, insurance, and law enforcement sectors. Patryk’s efforts connect academic research with practical applications, advancing innovation in data-driven technologies.

Adriana Borowa photo

Adriana Borowa

Ardigen SA

Contributed talk 4: Deep Learning for effective analysis of High Content Screening

Friday / 8 November 10:35 - 11:00 Hall B (CfC Session 2)

Abstract:

High Content Screening (HCS) is a powerful technique that facilitates complex cellular analysis by integrating fluorescence microscopy with automated high-throughput image acquisition. This approach enables the detailed examination and comparison of various cell phenotypes, generating extensive image datasets that can reveal subtle biological effects. However, these datasets often suffer from challenges such as sparse and imbalanced labeling, where the underlying chemical or biological effects are not fully annotated or are unevenly distributed. Recent advancements in Deep Learning have shown great promise in overcoming these challenges. By leveraging sophisticated algorithms, Deep Learning can extract rich, high-dimensional representations from HCS images, enabling more accurate and efficient analysis even in the face of limited or imbalanced labels. These methods can enhance our understanding of the complex interactions captured in HCS datasets, providing insights that were previously difficult to achieve with traditional analysis techniques. This talk will focus on the transformative potential of Deep Learning in High Content Screening, highlighting its ability to address the limitations of traditional analysis methods. We will explore the latest developments in Deep Learning techniques tailored for high-dimensional image data, emphasizing their applications in overcoming challenges such as sparse labeling and class imbalance.

Biography:

Adriana is a Senior Data Scientist at Ardigen, responsible for the development of the Ardigen phenAID platform, which enables the identification of small-molecule candidates. Her diverse experience includes work in digital pathology and neuron imaging, with a commitment to making meaningful contributions in various domains of the life sciences. Adriana is also pursuing a PhD at Jagiellonian University, and her research interests are focused on advancing the field of biomedical imaging by leveraging AI models. Her scientific work on cutting-edge deep learning algorithms aims to automate the analysis of High Content Screening images as well as microscopy images of bacteria.

Maciej Chrabaszcz photo

Maciej Chrabaszcz

NASK - National Research Institute / Warsaw University of Technology

Co-authors:

Hubert Baniecki, Piotr Komorowski, Szymon Płotka, Przemysław Biecek

Contributed talk 5: Aggregated Attributions for Explanatory Analysis of 3D Segmentation Models

Friday / 8 November 11:05 - 11:30 Hall B (CfC Session 2)

Abstract:

Analysis of 3D segmentation models, especially in the context of medical imaging, is often limited to segmentation performance metrics that overlook the crucial aspect of explainability and bias. Currently, effectively explaining these models with saliency maps is challenging due to the high dimensions of input images multiplied by the ever-growing number of segmented class labels. To this end, we introduce Agg²Exp, a methodology for aggregating fine-grained voxel attributions of the segmentation model's predictions. Unlike classical explanation methods that primarily focus on the local feature attribution, Agg²Exp enables a more comprehensive global view on the importance of predicted segments in 3D images. Our benchmarking experiments show that gradient-based voxel attributions are more faithful to the model's predictions than perturbation-based explanations. As a concrete use-case, we apply Agg²Exp to discover knowledge acquired by the Swin UNEt TRansformer model trained on the TotalSegmentator v2 dataset for segmenting anatomical structures in computed tomography medical images. Agg²Exp facilitates the explanatory analysis of large segmentation models beyond their predictive performance.
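The aggregation step at the heart of the method can be illustrated with a toy example (not the authors' code): pooling a voxel-level attribution map into one importance score per predicted segment. Both arrays below are random stand-ins for a real saliency map and segmentation mask:

```python
# Illustrative sketch of aggregating voxel attributions per predicted segment:
# mean absolute attribution over each segment's voxels gives a global
# importance score for that anatomical class.
import numpy as np

rng = np.random.default_rng(0)
attributions = rng.random((4, 4, 4))          # fake voxel saliency volume
segmentation = rng.integers(0, 3, (4, 4, 4))  # fake predicted class labels 0..2

scores = {int(c): float(np.abs(attributions[segmentation == c]).mean())
          for c in np.unique(segmentation)}
print(scores)  # one importance score per predicted segment
```

This reduces a volume of per-voxel values to a handful of per-class numbers, which is what makes a global, cross-scan comparison of segment importance tractable.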

Biography:

Maciej Chrabąszcz is a dedicated researcher in the field of Artificial Intelligence, with a particular focus on AI model behavior analysis, alignment, and efficient computing. As a PhD student in Computer Science, his work contributes to the critical areas of AI development and understanding. Having completed his Master's in Mathematical Statistics at the Warsaw University of Technology (WUT), Maciej is now pursuing his doctoral studies at the same institution. Concurrently, he contributes his expertise to NASK - National Research Institute.

Barbara Klaudel photo

Barbara Klaudel

TheLion.AI

Co-authors:

Piotr Frąckowski, Andrzej Komor, Aleksander Obuchowski, Wasyl Badyra, Kacper Bober, Kacper Rogala, Kacper Knitter, Mikołaj Badocha, Sebastian Cygert

Contributed talk 6: Towards Medical Foundation Model -- a Unified Dataset for Pretraining Medical Imaging Models

Friday / 8 November 11:35 - 12:00 Hall B (CfC Session 2)

Abstract:

We present UMIE datasets, the largest publicly available collection of annotated medical imaging data to date. This resource combines over 1 million images from 20 open-source datasets, spanning X-ray, CT, and MRI modalities. The dataset includes images for both classification and segmentation tasks, with 40+ standardized labels and 15 annotation masks. A key contribution is the unified preprocessing pipeline that standardizes the heterogeneous source datasets into a common format, addressing challenges such as diverse file types, annotation styles, and labeling ontologies. We mapped all labels and masks to the RadLex ontology, ensuring consistency across datasets. The preprocessing scripts are modular and extensible, allowing researchers to easily incorporate new datasets. By providing this large-scale, standardized medical imaging resource, UMIE datasets aim to facilitate the development of more robust and generalizable medical foundation models akin to those in general-purpose computer vision. The associated code enabling exact replication of the dataset is publicly available, with select portions to be released on HuggingFace to comply with redistribution restrictions on some source datasets.

Biography:

Co-founder of TheLion.AI, a research group devoted to creating AI-based open source solutions for healthcare. Worked on projects such as the Universal Medical Image Encoder and the Polish medical language model Esculap. Creates educational materials, like "Computer Vision Worksheets" with video tutorials on YouTube. Awarded Forbes 25 under 25.

Marek Justyna photo

Marek Justyna

Poznan University of Technology

Contributed talk 7: RNAgrail: GRAph neural network and diffusIon modeL for RNA 3D structure prediction

Friday / 8 November 14:30 - 14:55 Main Hall (CfC Session 3)

Abstract:

Accurate prediction of RNA 3D structures is crucial for understanding its diverse biological functions, yet current methods face significant challenges due to the limited availability of high-resolution RNA structures and the inherent data imbalance. Traditional approaches, such as those inspired by AlphaFold, rely heavily on multiple sequence alignment and template-based strategies, which are hindered by the scarcity of RNA data. To address these limitations, we propose a novel method combining Graph Neural Networks (GNNs) with generative diffusion models for RNA 3D structure prediction. Unlike existing methods that attempt to predict entire RNA structures, our approach focuses on predicting local RNA descriptors, which allows for more precise modeling of RNA’s complex secondary and tertiary interactions. By leveraging the structural and relational properties encoded in these local descriptors, our model can generate high-quality RNA structures even in the absence of extensive sequence data or templates. This method represents a significant departure from the traditional template-based models and has demonstrated superior performance in preliminary evaluations, particularly in cases where data is sparse. Our results suggest that this innovative approach not only addresses the current limitations in RNA 3D structure prediction but also opens new avenues for the application of machine learning in structural biology. We believe that this methodology could serve as a robust alternative to current state-of-the-art techniques, providing more reliable predictions and advancing our understanding of RNA structure and function.

Biography:

He is a PhD student supported by the prestigious PRELUDIUM BIS grant funded by the Polish National Science Center. His main interests lie in applying AI techniques to solve complex biological problems, with a particular focus on structural biology. His current research under this grant is dedicated to advancing the use of generative models in RNA 3D structure prediction.

Krzysztof Maziarz photo

Krzysztof Maziarz

Microsoft Research

Contributed talk 8: Fake it till you make it: planning chemical syntheses for drug discovery

Friday / 8 November 15:00 - 15:25 Main Hall (CfC Session 3)

Abstract:

Recent advances in Deep Learning are powering increasingly sophisticated generative models for the design of novel drugs, but these imagined molecules are only useful if we can synthesize them. In this talk, I will dive into our recent results on retrosynthesis, which is the task of coming up with “recipes” describing how a given molecule can be made in the lab. This requires first building a bespoke model to predict “single-step” chemical reactions, where we utilize a mix of symbolic and learned components, including graph rewriting transformations, Transformers, and Graph Neural Networks. The model is then combined with a planning algorithm akin to A* search, in order to find plausible trees of reactions describing how to synthesize a drug of interest from simple molecules that are commercially available. Finally, I will also connect this to our larger effort in Drug Discovery, and more broadly AI for Science, fuelled by a five-year collaboration between Microsoft Research and Novartis.
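
The planning component described above can be pictured with a toy sketch: here the learned single-step reaction model is replaced by a hand-coded lookup table, and the A*-like planner by a plain uniform-cost best-first search (all molecule names and costs below are invented for illustration, not taken from the talk).

```python
import heapq
import itertools

# Toy "single-step" reaction table: product -> [(precursors, cost), ...].
# In the real system this comes from a learned reaction model; all names
# and costs here are invented for illustration.
SINGLE_STEP = {
    "drug": [(("intermediate", "reagentA"), 1.0)],
    "intermediate": [(("buildingBlock1", "buildingBlock2"), 1.0),
                     (("rare_compound",), 0.5)],
}
PURCHASABLE = {"reagentA", "buildingBlock1", "buildingBlock2"}

def plan(target):
    """Uniform-cost best-first search over sets of still-open molecules."""
    tie = itertools.count()  # tiebreaker so the heap never compares sets
    heap = [(0.0, next(tie), frozenset({target}), [])]
    seen = set()
    while heap:
        cost, _, open_mols, route = heapq.heappop(heap)
        open_mols = open_mols - PURCHASABLE     # purchasable => solved
        if not open_mols:
            return cost, route                  # complete route found
        if open_mols in seen:
            continue
        seen.add(open_mols)
        mol = next(iter(open_mols))             # expand one open molecule
        for precursors, step_cost in SINGLE_STEP.get(mol, []):
            new_open = frozenset((open_mols - {mol}) | set(precursors))
            heapq.heappush(heap, (cost + step_cost, next(tie), new_open,
                                  route + [(mol, precursors)]))
    return None                                 # no synthesis route

cost, route = plan("drug")
```

Note how the cheap-looking route through "rare_compound" dead-ends (no template and not purchasable), so the search backs off to the more expensive but completable route.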

Biography:

Krzysztof is a Principal Researcher in the AI for Science team at Microsoft Research, where he works on applying Deep Learning to problems in Drug Discovery. Among other things, he has developed generative models of molecular graphs, few-shot molecular property prediction methods, and planning algorithms for sequences of chemical reactions. These works not only led to 4 publications at top ML conferences but also to applications in a pharma company, with 150+ molecules proposed by his generative models successfully synthesised and tested in a lab. Before joining Microsoft nearly five years ago, Krzysztof studied Theoretical Computer Science, and was a serial intern (including three research internships at Google). Before settling on Deep Learning, he also had decent success in competitive programming, advancing to the finals of ACM ICPC, Facebook Hacker Cup, and Distributed Google Code Jam.

France Rose photo

France Rose

University of Cologne

Co-authors:

Monika Michaluk, Timon Blindauer, Bogna M. Ignatowska-Jankowska, Liam O’Shaughnessy, Greg J. Stephens, Talmo D. Pereira, Marylka Y. Uusisaari, Katarzyna Bozek

Contributed talk 9: Uncertainty-aware self-supervised learning on multi-dimensional time series for animal behavior

Friday / 8 November 15:30 - 15:55 Main Hall (CfC Session 3)

Abstract:

Studying freely moving animals is essential to understand how animals behave and make decisions -- e.g. when they escape predators, find mates, or raise their young -- in an undisturbed manner. Although animal behavior has been studied for decades, animal movements can only now be recorded at high throughput thanks to recent technical progress. On one hand, videos from synchronized cameras can be coupled with deep learning pose estimation methods, automatically tracking the trajectories of a few keypoints. On the other hand, motion capture systems directly output the 3D trajectories of physical reflectors affixed to the body (reflectors on a suit for humans, reflecting piercings for rodents). However, these methods are not perfect and produce missing data. Since animal behavior cannot be easily scripted and additional recordings are not always possible due to constraints in experimental design, missing data is a more pressing problem in animal than in human behavior analysis. So far, few works have effectively addressed these issues in animal recordings, with most relying on linear interpolation and smoothing (e.g. Kalman filtering) only suitable for short gaps, or lacking large-scale testing. We hypothesized that recent advances in deep learning architectures and self-supervised learning (SSL) can help recover missing data by learning dynamics within and between keypoints. Specifically, masked modeling has proven successful in recent large language models and computer vision transformers. Mimicking the missing data during training via masked modeling, we tested several neural network architectures: Gated Recurrent Unit (GRU), Temporal Convolutional Network (TCN), Spatio-Temporal Graph Convolutional Network (ST-GCN), Space-Time-Separable Graph Convolutional Network (STS-GCN), and a custom transformer encoder named DISK (Deep Imputation for Skeleton data).
For testing, we gathered seven datasets, covering five species (human, fly, mouse, rat, fish), in 2D and 3D, with one to two animals, and a variety of numbers of keypoints (from 3 to 38 per animal). Furthermore, we adapted a probabilistic head, initially proposed for probabilistic forecasting of time series, to assess the reliability of the imputed data at inference time. We found that DISK outperformed the other architectures and the linear-interpolation baseline (42% to 89% root mean square error improvement over linear interpolation, calculated between true and imputed coordinates on a held-out test set - one value per dataset). DISK's probabilistic head outputs an estimated error linearly correlated with the real error (Pearson correlation coefficient: 0.746 to 0.890 - one value per dataset). This estimated error makes it possible to filter out less reliable predictions and to control the amount of noise in the imputed dataset. As SSL methods are known to learn general properties of input data, we further explored the latent space of DISK and showed that motion sequences cluster by behavior category (e.g. attack, mount, investigation). While animal behavior experiments are expensive and complex, tracking errors sometimes make large portions of the experimental data unusable. DISK fills in the missing information and allows taking full advantage of the rich behavioral data. Available as a stand-alone imputation package (github.com/bozeklab/DISK.git), DISK is applicable to the results of any tracking method (cameras or motion capture) and supports any type of downstream analysis.
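
As a minimal illustration of the evaluation setup described above (not of DISK itself), the sketch below simulates a tracking dropout on a synthetic keypoint trajectory and scores the linear-interpolation baseline with RMSE on the hidden frames; the trajectory, gap position, and gap length are invented.

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)
true_traj = np.sin(t)                 # synthetic keypoint coordinate over time

# Mimic a tracking dropout, as masked modeling does during training:
# hide a contiguous block of frames.
mask = np.ones_like(t, dtype=bool)
mask[80:120] = False                  # 40 missing frames

# Linear-interpolation baseline (the comparison point quoted above).
imputed = np.interp(t, t[mask], true_traj[mask])

# RMSE between true and imputed coordinates on the hidden frames only.
rmse = np.sqrt(np.mean((imputed[~mask] - true_traj[~mask]) ** 2))
```

A learned imputation model would replace the `np.interp` call; the masking-then-scoring loop around it stays the same.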

Biography:

France Rose is a post-doctoral researcher at the University Hospital of Cologne. Her research topics cover biomedical image and time-series analysis. At a time of exploding data generation in biology and the medical sciences, she finds it exciting to meet the needs of image analysis and to challenge current scientific knowledge.

Natasha Al-Khatib photo

Natasha Al-Khatib

Symbio

Contributed talk 10: How LLMs are Revolutionizing the cybersecurity field

Friday / 8 November 14:30 - 14:55 Hall A (CfC Session 4)

Abstract:

The ever-evolving threat landscape demands constant adaptation. Traditional methods struggle. Large Language Models (LLMs) emerge, wielding the power of language. This talk explores LLMs' revolution in cybersecurity. LLMs are AI models trained on massive text and code datasets. This grants them an understanding of complex linguistic patterns, invaluable in cybersecurity. Firstly, LLMs excel at advanced threat detection. Analyzing vast amounts of data, they identify subtle anomalies indicating brewing attacks. Traditional methods rely on pre-defined rules, vulnerable to novel attack vectors. LLMs, with their ability to learn and adapt, identify unseen threats, providing a crucial early warning system. Secondly, LLMs offer proactive threat analysis. By ingesting vast quantities of threat intelligence data, including past attack methods and attacker motivations, LLMs uncover patterns and predict future attack vectors. This allows security teams to take a pre-emptive approach, focusing resources on fortifying potential weaknesses before attackers exploit them. Imagine an LLM analyzing a hacker forum, identifying discussions about targeting a specific software vulnerability. This foresight empowers security professionals to patch the vulnerability before a widespread breach. Furthermore, LLMs can revolutionize vulnerability research. Traditionally, identifying vulnerabilities is time-consuming and laborious. LLMs, with their ability to analyze vast code repositories, pinpoint potential vulnerabilities through code patterns and language constructs associated with known weaknesses. This streamlines the vulnerability discovery process, allowing security teams to address critical issues before attackers identify them. While LLMs offer a powerful new frontier, challenges remain. Issues surrounding explainability, bias in training data, and potential misuse require careful consideration. However, the potential benefits are undeniable.
As these models continue to evolve and integrate with existing security solutions, they hold the promise of a more secure and resilient digital landscape.

Biography:

Dr. Natasha Al-Khatib is a researcher and engineer with expertise in cybersecurity and artificial intelligence (AI) for the automotive industry. Her passion for securing vehicles against cyber threats led her to pursue a Ph.D. thesis at the prestigious Institut Polytechnique de Paris. Her doctoral research focused on leveraging AI to develop robust solutions against cyberattacks in connected and autonomous vehicles. Currently, Dr. Natasha Al-Khatib applies her expertise at ETAS Bosch, a leading provider of embedded systems for the automotive industry. In this role, she is instrumental in developing AI-based solutions that enhance the cybersecurity of automotive products. She plays a key role in ensuring the safety and security of future generations of vehicles.

Klaudia Bałazy photo

Klaudia Bałazy

NVIDIA / Jagiellonian University

Co-authors:

Mohammadreza Banaei, Karl Aberer, Jacek Tabor

Contributed talk 11: Efficient Fine-Tuning of LLMs: Exploring PEFT Methods and LoRA-XS Insights

Friday / 8 November 15:00 - 15:25 Hall A (CfC Session 4)

Abstract:

The rapid scaling of large language models (LLMs) has underscored the need for parameter-efficient fine-tuning (PEFT) methods to manage increasing computational and storage demands. Among these methods, Low-Rank Adaptation (LoRA) has emerged as a prominent solution, often matching or exceeding the performance of full fine-tuning with significantly fewer parameters. Despite its success, LoRA faces challenges related to the storage of numerous task-specific or user-specific modules on top of a base model. In this talk, I will discuss the importance of parameter-efficient fine-tuning in natural language processing (NLP) and provide an overview of various PEFT approaches for large language models. I will introduce our latest research, LoRA-XS (Low-Rank Adaptation with eXtremely Small number of parameters), which leverages Singular Value Decomposition (SVD) to further enhance parameter efficiency. I will also highlight emerging trends and future possibilities in efficient fine-tuning.
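
A minimal numpy sketch of the parameter-count contrast, following the LoRA-XS idea as summarized above (the dimensions and rank are arbitrary choices for illustration; in the method, the frozen low-rank factors come from the truncated SVD of the pretrained weight, and only the small r x r matrix between them is trained):

```python
import numpy as np

d_out, d_in, r = 768, 768, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # stands in for a pretrained weight

# LoRA: Delta W = B @ A with both low-rank factors trained.
lora_params = r * (d_in + d_out)

# LoRA-XS: freeze factors taken from the truncated SVD of W and train
# only a tiny r x r matrix R sitting between them.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]                     # frozen, shape (d_out, r)
B = Vt[:r, :]                            # frozen, shape (r, d_in)
R = np.zeros((r, r))                     # the only trainable parameters
lora_xs_params = r * r

delta_W = A @ R @ B                      # adapter update added to W
```

For these sizes the trainable footprint drops from 12,288 parameters (LoRA) to 64 (LoRA-XS) per adapted weight matrix, which is what makes storing many per-task or per-user modules cheap.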

Biography:

Klaudia Bałazy is a Senior Deep Learning Engineer at NVIDIA and a PhD student at the Jagiellonian University. She is also an active member of the Group of Machine Learning Research (GMUM). Her research primarily focuses on enhancing the efficiency of deep learning solutions, with particular emphasis on model compression, dynamic neural networks, and the parameter efficiency of large language models. Klaudia holds both a Master's and an Engineer's degree in Computer Science from the AGH University of Science and Technology. Throughout her career, she has led and participated in various AI-based projects across several tech startups, contributing to the development of practical AI applications.

Adam Dziedzic photo

Adam Dziedzic

CISPA Helmholtz Center for Information Security

Co-authors:

Franziska Boenisch

Contributed talk 12: Open LLMs are Necessary for Private Adaptations and Outperform their Closed Alternatives

Friday / 8 November 15:30 - 15:55 Hall A (CfC Session 4)

Abstract:

While open Large Language Models (LLMs) have made significant progress, they still fall short of matching the performance of their closed, proprietary counterparts, making the latter attractive even for the use on highly private data. Recently, various new methods have been proposed to adapt closed LLMs to private data without leaking private information to third parties and/or the LLM provider. In this talk, we will analyze the privacy protection and performance of the four most recent methods for private adaptation of closed LLMs. By examining their threat models and thoroughly comparing their performance under different privacy levels according to differential privacy (DP), various LLM architectures, and multiple datasets for classification and generation tasks, we found that: (1) all the methods leak query data, i.e., the (potentially sensitive) user data that is queried at inference time, to the LLM provider, (2) three out of four methods also leak large fractions of private training data to the LLM provider while the method that protects private data requires a local open LLM, (3) all the methods exhibit lower performance compared to three private gradient-based adaptation methods for local open LLMs, and (4) the private adaptation methods for closed LLMs incur higher monetary training and query costs than running the alternative methods on local open LLMs. This yields the conclusion that to achieve truly privacy-preserving LLM adaptations that yield high performance and more privacy at lower costs, one should use open LLMs.

Biography:

Adam is a Tenure Track Faculty Member at CISPA Helmholtz Center for Information Security, co-leading the SprintML group. His research is focused on secure and trustworthy Machine Learning as a Service (MLaaS). Adam designs robust and reliable machine learning methods for training and inference of ML models while preserving data privacy and model confidentiality. Adam was a Postdoctoral Fellow at the Vector Institute and the University of Toronto, and a member of the CleverHans Lab, advised by Prof. Nicolas Papernot. He earned his PhD at the University of Chicago, where he was advised by Prof. Sanjay Krishnan and worked on input and model compression for adaptive and robust neural networks. Adam obtained his Bachelor's and Master's degrees from Warsaw University of Technology in Poland. He also studied at DTU (Technical University of Denmark) and carried out research at EPFL in Switzerland. Adam also worked at CERN (Geneva, Switzerland), Barclays Investment Bank in London (UK), Microsoft Research (Redmond, USA), and Google (Madison, USA).

Kamil Deja photo

Kamil Deja

Warsaw University of Technology / IDEAS NCBR

Contributed talk 13: Personalisation of large-scale diffusion models

Friday / 8 November 14:30 - 14:55 Hall B (CfC Session 5)

Abstract:

-Mum, can we have a diffusion model? -We have a diffusion model at home! The diffusion model at home: well, come and listen :) In this talk, I will discuss recent advances in large-scale diffusion model personalisation methods. I will overview and explain techniques for finetuning off-the-shelf models to generate images with desired concepts or styles, starting from naive finetuning, through low-rank adaptation, to data-free model editing techniques. On top of adding new concepts to an existing model, I will outline state-of-the-art unlearning approaches that allow for the precise removal of unwanted content.

Biography:

Kamil Deja is a postdoctoral researcher at IDEAS NCBR and Warsaw University of Technology, where he obtained a Ph.D. His research focuses on Generative Modelling with applications to Continual Learning. He has previously interned at Vrije Universiteit in Amsterdam and twice at Amazon Alexa. His research work has been published in prestigious conferences such as NeurIPS, IJCAI, and Interspeech. In recognition of his accomplishments, Kamil received the FNP Start scholarship in 2023, awarded to the top-100 young researchers in Poland.

Dawid Rymarczyk photo

Dawid Rymarczyk

Jagiellonian University / Ardigen SA

Contributed talk 14: Current trends in intrinsically interpretable deep learning

Friday / 8 November 15:00 - 15:25 Hall B (CfC Session 5)

Abstract:

The talk will focus on intrinsically interpretable deep learning models, where the transparent reasoning process is integral to the prediction, eliminating the need for an explainer to interpret the results. I will discuss current trends in this field, including continual learning of these models using prototypical parts architecture (Rymarczyk@ICCV2023), the limitations of prototypical parts in their interpretations (Sacha@AAAI2024, Ma@NeurIPS2023, Pach@arxiv2024), and ways to involve users in interacting with interpretations to create more reliable models (Kim@ECCV2022, Bontempelli@ICLR2023).

Biography:

In 2024, Dawid Rymarczyk earned a PhD with distinction on a topic related to interpretable neural networks. Since 2017, he has been working as a Data Scientist at Ardigen, where he was recently promoted to Director of Data Science and Lead Data Scientist. His research interests include computer vision, prototypical parts for deep learning architectures, and AI applications in the drug discovery process. He is actively involved in publishing and collaborating with the GMUM and SINN research groups. Additionally, he completed an internship in Dr. Joost van de Weijer's group and attended the International Computer Vision Summer School (ICVSS).

Przemysław Spurek photo

Przemysław Spurek

Jagiellonian University

Co-authors:

Joanna Waczyńska, Piotr Borycki, Weronika Smolak, Dawid Malarz

Contributed talk 15: Neural rendering: the future of 3D modeling

Friday / 8 November 15:30 - 15:55 Hall B (CfC Session 5)

Abstract:

The presentation will introduce the central concept of neural rendering for modeling 3D objects. We concentrate on Neural Radiance Fields (NeRFs) and Gaussian Splatting (GS). Then, new results obtained by the GMUM Neural Rendering group will be presented. NeRF has demonstrated the remarkable potential of neural networks to capture the intricacies of 3D objects. NeRFs excel at producing strikingly sharp novel views of 3D objects by encoding the shape and color information within neural network weights. Recently, numerous generalizations of NeRFs utilizing generative models have emerged, expanding their versatility. In contrast, GS offers similar render quality with faster training and inference, as it does not need neural networks to work. It encodes information about 3D objects in a set of Gaussian distributions that can be rendered in 3D similarly to classical meshes.

Biography:

Przemysław Spurek is the leader of the Neural Rendering research team at IDEAS NCBR and a researcher in the GMUM group operating at the Jagiellonian University in Krakow. In 2014, he defended his PhD in machine learning and information theory. In 2023, he obtained his habilitation degree and became a university professor. He has published articles at prestigious international conferences such as NeurIPS, ICML, IROS, AISTATS, and ECML. He co-authored the book Głębokie uczenie. Wprowadzenie [Deep Learning. Introduction] – a compendium of knowledge about the basics of AI. He was the principal investigator of PRELUDIUM, SONATA, OPUS, and SONATA BIS grants funded by the Polish National Science Centre (NCN). Currently, his research focuses mainly on neural rendering, in particular NeRF and Gaussian Splatting models.

Franziska Boenisch photo

Franziska Boenisch

CISPA Helmholtz Center for Information Security

Co-authors:

Dominik Hintersdorf, Lukas Struppek, Kristian Kersting, Adam Dziedzic

Contributed talk 16: Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models

Saturday / 9 November 12:00 - 12:25 Main Hall (CfC Session 6)

Abstract:

Diffusion models (DMs) produce very detailed and high-quality images. Their power results from extensive training on large amounts of data usually scraped from the internet without proper attribution or consent from content creators. Unfortunately, this practice raises privacy and intellectual property concerns, as DMs can memorize and later reproduce their potentially sensitive or copyrighted training images at inference time. Prior efforts prevent this issue by either changing the input to the diffusion process, thereby preventing the DM from generating memorized samples during inference or removing the memorized data from training altogether. While those are viable solutions when the DM is developed and deployed in a secure and constantly monitored environment, they hold the risk of adversaries circumventing the safeguards and are not effective when the DM itself is publicly released. To solve the problem, we introduce NeMo, the first method to localize the memorization of individual data samples down to the level of neurons in DMs' cross-attention layers. Through our experiments, we make the intriguing finding that in many cases, single neurons are responsible for memorizing particular training samples. By deactivating these memorization neurons, we can avoid the replication of training data at inference time, increase the diversity in the generated outputs, and mitigate the leakage of private and copyrighted data. In this way, our NeMo contributes to a more responsible deployment of DMs.
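
The deactivation step can be pictured with a tiny numpy sketch (the layer width, token count, and the index of the "memorization neuron" are all made up for illustration; NeMo's actual contribution is localizing such neurons within a DM's cross-attention layers):

```python
import numpy as np

rng = np.random.default_rng(0)
tokens, neurons = 4, 16
# Stand-in activations of one cross-attention layer (values are random;
# sizes and the neuron index below are made up for illustration).
activations = rng.standard_normal((tokens, neurons))

memorization_neurons = [3]          # index localized as memorizing a sample

mask = np.ones(neurons)
mask[memorization_neurons] = 0.0    # deactivate the localized neuron(s)
edited = activations * mask         # all other neurons pass through unchanged
```

Because only the localized neurons are zeroed, the rest of the layer's computation is untouched, which is why such an edit can suppress replication without retraining the model.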

Biography:

Franziska Boenisch is a tenure-track faculty at the CISPA Helmholtz Center for Information Security where she co-leads the SprintML lab. Before, she was a Postdoctoral Fellow at the University of Toronto and Vector Institute advised by Prof. Nicolas Papernot. Her current research centers around private and trustworthy machine learning. Franziska obtained her Ph.D. at the Computer Science Department at Freie University Berlin, where she pioneered the notion of individualized privacy in machine learning. During her Ph.D., Franziska was a research associate at the Fraunhofer Institute for Applied and Integrated Security (AISEC), Germany. She received a Fraunhofer TALENTA grant for outstanding female early career researchers, the German Industrial Research Foundation prize for her research on machine learning privacy, and the Fraunhofer ICT Dissertation Award 2023, and was named a GI-Junior Fellow in 2024.

Jan Dubiński photo

Jan Dubiński

Warsaw University of Technology / IDEAS NCBR

Co-authors:

Antoni Kowalczuk, Franziska Boenisch, Adam Dziedzic

Contributed talk 17: CDI: Copyrighted Data Identification in Diffusion Models

Saturday / 9 November 12:30 - 12:55 Main Hall (CfC Session 6)

Abstract:

Diffusion Models (DMs) benefit from large and diverse datasets for their training. Since this data is often scraped from the internet without permission from the data owners, this raises concerns about copyright and intellectual property protections. While (illicit) use of data is easily detected for training samples perfectly re-created by a DM at inference time, it is much harder for data owners to verify if their data was used for training when the outputs from the suspect DM are not close replicas. Conceptually, membership inference attacks (MIAs), which detect if a given data point was used during training, present themselves as a suitable tool to address this challenge. However, we demonstrate that existing MIAs are ineffective in determining the membership of individual images in large DMs. To overcome this limitation, we propose Copyrighted Data Identification (CDI), a framework for data owners to identify whether their dataset was used to train a given DM. CDI relies on dataset inference techniques, i.e., instead of using the membership signal from a single data point, CDI leverages the fact that most data owners, such as providers of stock photography, visual media companies, or even individual artists, own datasets with multiple publicly exposed data points which might all be included in the training of a given DM. By selectively aggregating signals from existing MIAs and using new handcrafted methods to extract features for these datasets, feeding them to a scoring model, and applying rigorous statistical testing, CDI allows data owners with as few as 70 data points to identify with a confidence of more than 99% whether their data was used to train a DM. Thereby, CDI represents a valuable tool for data owners to claim illegitimate use of their copyrighted data.
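
The aggregate-then-test idea behind dataset inference can be sketched in a few lines (the per-image scores below are simulated Gaussians, not real MIA outputs, and CDI's learned feature extraction and scoring model are omitted; only the statistical-testing step is illustrated):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Simulated per-image MIA scores (Gaussians chosen for illustration):
# the owner's images score somewhat higher than unseen validation images.
members = rng.normal(0.7, 0.1, size=70)       # owner's 70 data points
non_members = rng.normal(0.5, 0.1, size=70)   # held-out, unseen images

# Dataset inference: aggregate the per-sample signal and run a one-sided
# z-test for "members score higher than non-members".
diff = members.mean() - non_members.mean()
se = sqrt(members.var(ddof=1) / len(members)
          + non_members.var(ddof=1) / len(non_members))
z = diff / se
p_value = 0.5 * (1.0 - erf(z / sqrt(2.0)))    # one-sided p-value
claim_training_use = p_value < 0.01
```

A per-image signal far too noisy for individual membership calls can still yield a decisive test once 70 samples are pooled, which is the core of the dataset-inference argument.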

Biography:

Jan Dubiński is pursuing a PhD in deep learning at the Warsaw University of Technology. He is a member of the ALICE Collaboration at LHC CERN. Jan has been working on fast simulation methods for High Energy Physics experiments at the Large Hadron Collider at CERN. The methods developed in this research leverage generative deep learning models such as GANs to provide a computationally efficient alternative to existing Monte Carlo-based methods. More recently, he has focused on issues related to the security of machine learning models and data privacy. His latest efforts aim to improve the security of self-supervised and generative methods, which are often overlooked compared to supervised models.

Bartlomiej Sobieski photo

Bartlomiej Sobieski

MI2.ai / University of Warsaw

Co-authors:

Przemysław Biecek

Contributed talk 18: Global Counterfactual Directions

Saturday / 9 November 13:00 - 13:25 Main Hall (CfC Session 6)

Abstract:

Explaining the decision-making process of image classifiers is a long-standing problem for which, as of today's state of knowledge, no ultimate solution exists. Counterfactual explanations aim to provide such an explanation by presenting the user with the answer to a specific what-if question, such as "what would be the hair color classifier's decision if the eye color changed". Crucially, this type of explanation stands at the highest level of Pearl’s causality ladder, as it helps humans in identifying the cause-effect relations of the model’s decision and its input. However, constructing these explanations is extremely difficult, as they require precise control over the image content conditioned on the model's inference process. Therefore, previous works made strong assumptions about the model's availability, e.g. so-called white-box access, which assumes that one can fully utilize the classifier's gradients. Unfortunately, this scenario is often not observed in practice. Many models, such as the latest ChatGPT, allow only black-box access, i.e. observing only the input and the model's output through an API. In this work, we propose a novel state-of-the-art solution to finding visual counterfactual explanations in a black-box scenario. We discover a remarkable property of Diffusion Autoencoders, a type of diffusion model, whose latent space encodes the decision-making process of a classifier in the form of global directions. Despite assuming black-box access to the model of interest, our method finds these directions using only a single image, which allows for limiting the generation of explanations to pure inference of the generative model. In addition, we show that the nature of our approach can be utilized to improve the understanding of the explanations themselves by extending the Latent Integrated Gradients method to the black-box case.
Overall, our method pushes the boundaries of explaining models with greatly limited access, while also shedding light on interesting properties of Diffusion Autoencoders. This work has been accepted as a conference paper at the upcoming European Conference on Computer Vision (ECCV) 2024, an A*-rated conference in the CORE ranking. Paper link: https://arxiv.org/abs/2404.12488.

Biography:

Bartlomiej Sobieski is an AI researcher passionate about combining image generative models and explainable computer vision. He believes that highly advanced mathematics is key to developing better AI models and explaining their decision-making process.

Michał Bortkiewicz photo

Michał Bortkiewicz

Warsaw University of Technology

Co-authors:

Michał Bortkiewicz, Władek Pałucki, Vivek Myers, Tadeusz Dziarmaga, Tomasz Arczewski, Łukasz Kuciński, Benjamin Eysenbach

Contributed talk 19: Accelerating Goal-Conditioned RL Algorithms and Research

Saturday / 9 November 12:00 - 12:25 Hall A (CfC Session 7)

Abstract:

Self-supervision has the potential to transform reinforcement learning (RL), paralleling the breakthroughs it has enabled in other areas of machine learning. While self-supervised learning in other domains aims to find patterns in a fixed dataset, self-supervised goal-conditioned reinforcement learning (GCRL) agents discover new behaviors by learning from the goals achieved during unstructured interaction with the environment. However, these methods have failed to see similar success, both due to a lack of data from slow environments as well as a lack of stable algorithms. We take a step toward addressing both of these issues by releasing a high-performance codebase and benchmark JaxGCRL for self-supervised GCRL, enabling researchers to train agents for millions of environment steps in minutes on a single GPU. The key to this performance is a combination of GPU-accelerated environments and a stable, batched version of the contrastive reinforcement learning algorithm, based on an infoNCE objective, that effectively makes use of this increased data throughput. With this approach, we provide a foundation for future research in self-supervised GCRL, enabling researchers to quickly iterate on new ideas and evaluate them in a diverse set of challenging environments.
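
The batched contrastive objective mentioned above can be sketched as follows (random vectors stand in for the learned state-action and goal encoders; in the actual algorithm these encoders are trained so that each diagonal pair, a state and a goal later achieved from it, scores highest):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 8

# Random vectors stand in for learned encoders: row i of each matrix is
# a (state-action, goal) positive pair; other rows supply the negatives.
sa_repr = rng.standard_normal((batch, dim))
goal_repr = rng.standard_normal((batch, dim))

logits = sa_repr @ goal_repr.T               # critic: pairwise similarities

# InfoNCE: for each row, classify which goal in the batch is the positive
# one (the diagonal entry), i.e. a softmax cross-entropy on the logits.
logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
info_nce_loss = -np.mean(np.diag(log_softmax))
```

Because every row's negatives come for free from the rest of the batch, the objective scales directly with data throughput, which is what makes it a good match for GPU-accelerated environments.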

Biography:

Michał Bortkiewicz is a PhD student at Warsaw University of Technology, where his research centres on Continual and Reinforcement Learning. As a data scientist, he advises companies on implementing deep learning methods for data-intensive tasks. Michał has previously worked as a deep learning engineer at Samsung Research, focusing on audio intelligence projects, at Scope Fluidics, where he specialized in computer vision, and at Airspace Intelligence, where he dealt with tabular machine learning.

Bartłomiej Cupiał photo

Bartłomiej Cupiał

University of Warsaw / IDEAS NCBR

Co-authors:

Maciej Wołczyk, Mateusz Ostaszewski, Michał Bortkiewicz, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

Contributed talk 20: Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem

Saturday / 9 November 12:30 - 12:55 Hall A (CfC Session 7)

Abstract:

Fine-tuning is a widespread technique that allows practitioners to transfer pre-trained capabilities, as recently showcased by the successful applications of foundation models. However, fine-tuning reinforcement learning (RL) models remains a challenge. This work conceptualizes one specific cause of poor transfer, accentuated in the RL setting by the interplay between actions and observations: forgetting of pre-trained capabilities. Namely, a model deteriorates on the state subspace of the downstream task not visited in the initial phase of fine-tuning, on which the model behaved well due to pre-training. This way, we lose the anticipated transfer benefits. We identify conditions when this problem occurs, showing that it is common and, in many cases, catastrophic. Through a detailed empirical analysis of the challenging NetHack and Montezuma's Revenge environments, we show that standard knowledge retention techniques mitigate the problem and thus allow us to take full advantage of the pre-trained capabilities. In particular, in NetHack, we achieve a new state-of-the-art for neural models, improving the previous best score from 5K to over 10K points in the Human Monk scenario.

Biography:

Bartłomiej Cupiał is a PhD student at IDEAS NCBR and the University of Warsaw. He finished his master's degree at Jagiellonian University and bachelor's degree at Wrocław University of Science and Technology. Currently, he is working on combining reinforcement learning with large language models, in particular on how to improve exploration in RL with the help of LLMs and how to integrate external knowledge into RL agents.

Adam Pardyl photo

Adam Pardyl

IDEAS NCBR / Jagiellonian University

Co-authors:

Michał Wronka, Maciej Wołczyk, Kamil Adamczewski, Tomasz Trzciński, Bartosz Zieliński

Contributed talk 21: AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale

Saturday / 9 November 13:00 - 13:25 Hall A (CfC Session 7)

Abstract:

Active Visual Exploration (AVE) is a task that involves dynamically selecting observations (glimpses), which is critical to facilitate comprehension and navigation within an environment. While modern AVE methods have demonstrated impressive performance, they are constrained to fixed-scale glimpses from rigid grids. In contrast, existing mobile platforms equipped with optical zoom capabilities can capture glimpses of arbitrary positions and scales. To address this gap between software and hardware capabilities, we introduce AdaGlimpse. It uses Soft Actor-Critic, a reinforcement learning algorithm tailored for exploration tasks, to select glimpses of arbitrary position and scale. This approach enables our model to rapidly establish a general awareness of the environment before zooming in for detailed analysis. Experimental results demonstrate that AdaGlimpse surpasses previous methods across various visual tasks while maintaining greater applicability in realistic AVE scenarios.

Biography:

Adam Pardyl is a researcher in the Sustainable Machine Learning for Autonomous Machines team at IDEAS NCBR and a PhD candidate at GMUM, Jagiellonian University.

Tomasz Piotrowski photo

Tomasz Piotrowski

Nicolaus Copernicus University in Toruń

Co-authors:

R. L. G. Cavalcante, M. Gabor

Contributed talk 22: Fixed points of nonnegative neural networks

Saturday / 9 November 12:00 - 12:25 Hall B (CfC Session 8)

Abstract:

We use fixed point theory to analyze nonnegative neural networks, which we define as neural networks that map nonnegative vectors to nonnegative vectors. We first show that nonnegative neural networks with nonnegative weights and biases can be recognized as monotonic and (weakly) scalable mappings within the framework of nonlinear Perron-Frobenius theory. This fact enables us to provide conditions for the existence of fixed points of nonnegative neural networks having inputs and outputs of the same dimension, and these conditions are weaker than those recently obtained using arguments in convex analysis. Furthermore, we prove that the shape of the fixed point set of nonnegative neural networks with nonnegative weights and biases is an interval, which under mild conditions degenerates to a point. These results are then used to obtain the existence of fixed points of more general nonnegative neural networks. From a practical perspective, our results contribute to the understanding of the behavior of autoencoders, and we also offer valuable mathematical machinery for future developments in deep equilibrium models.
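As a loose illustration of the setting (not the paper's construction), a nonnegative network with nonnegative weights and biases maps nonnegative vectors to nonnegative vectors, and when its gain is small enough, repeated application converges to a fixed point. All shapes and constants below are assumptions for the sketch:

```python
import numpy as np

# Hypothetical one-layer nonnegative network f(x) = relu(W x + b)
# with W, b >= 0: monotonic and nonnegativity-preserving.
rng = np.random.default_rng(0)
W = 0.2 * rng.random((4, 4))   # nonnegative weights; row sums < 0.8, so f is a contraction
b = rng.random(4)              # nonnegative biases

def f(x):
    return np.maximum(W @ x + b, 0.0)

# Fixed-point iteration from a nonnegative starting point.
x = np.zeros(4)
for _ in range(200):
    x = f(x)

assert np.allclose(x, f(x), atol=1e-8)  # x is (numerically) a fixed point
```

The contraction assumption is only one sufficient condition; the talk's Perron-Frobenius-based conditions are weaker.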

Biography:

Tomasz Piotrowski received the M.Sc. degree in Mathematics from Silesian University of Technology, Poland, in 2004, the M.Sc. degree in Information Processing & Neural Networks from King’s College London, UK, in 2005, the Ph.D. degree from Tokyo Institute of Technology, Japan, in 2008, and the D.Sc. degree from the Systems Research Institute, Polish Academy of Sciences, in 2021. From 2009 to 2010 he worked in industry as a data analyst at Comarch SA. In 2011, he joined Nicolaus Copernicus University (NCU) in Toruń, Poland as an Assistant Professor. Since 2022, he has been an Associate Professor at NCU. He is involved in brain research, signal processing, and the mathematical foundations of deep learning.

Marcin Przewięźlikowski photo

Marcin Przewięźlikowski

GMUM (Jagiellonian University) / IDEAS NCBR

Co-authors:

Mateusz Pyla, Bartosz Zieliński, Bartłomiej Twardowski, Jacek Tabor, Marek Śmieja

Contributed talk 23: Augmentation-aware Self-supervised Learning with Conditioned Projector

Saturday / 9 November 12:30 - 12:55 Hall B (CfC Session 8)

Abstract:

Self-supervised learning (SSL) is a powerful technique for learning robust representations from unlabeled data. By learning to remain invariant to applied data augmentations, methods such as SimCLR and MoCo are able to reach quality on par with supervised approaches. However, this invariance may be harmful to solving some downstream tasks which depend on traits affected by augmentations used during pretraining, such as color. In this paper, we propose to foster sensitivity to such characteristics in the representation space by modifying the projector network, a common component of self-supervised architectures. Specifically, we supplement the projector with information about augmentations applied to images. In order for the projector to take advantage of this auxiliary conditioning when solving the SSL task, the feature extractor learns to preserve the augmentation information in its representations. Our approach, coined Conditional Augmentation-aware Self-supervised Learning (CASSLE), is directly applicable to typical joint-embedding SSL methods regardless of their objective functions. Moreover, it does not require major changes in the network architecture or prior knowledge of downstream tasks. In addition to an analysis of sensitivity towards different data augmentations, we conduct a series of experiments, which show that CASSLE improves over various SSL methods, reaching state-of-the-art performance in multiple downstream tasks.
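The conditioning idea can be sketched minimally: the projector receives the backbone representation concatenated with a vector describing the applied augmentation. The toy ReLU MLP, the shapes, and the crop-box encoding below are illustrative assumptions, not the actual CASSLE architecture:

```python
import numpy as np

# Sketch of an augmentation-conditioned projector (CASSLE-style idea):
# the SSL head sees both the representation and the augmentation parameters,
# so the backbone is free to keep augmentation-sensitive information.
rng = np.random.default_rng(1)
feat_dim, aug_dim, hid, out = 16, 4, 32, 8
W1 = 0.1 * rng.standard_normal((hid, feat_dim + aug_dim))
W2 = 0.1 * rng.standard_normal((out, hid))

def projector(h, aug_params):
    inp = np.concatenate([h, aug_params])   # condition on augmentation info
    return W2 @ np.maximum(W1 @ inp, 0.0)   # simple ReLU MLP head

h = rng.standard_normal(feat_dim)           # backbone representation of one view
crop = np.array([0.1, 0.2, 0.8, 0.9])       # e.g. normalized crop coordinates
proj = projector(h, crop)
assert proj.shape == (out,)
```

In the actual method this projection, not the raw representation, is fed to the joint-embedding SSL objective.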

Biography:

Marcin Przewięźlikowski is a PhD student with the Group of Machine Learning Research at Jagiellonian University in Kraków, Poland, and IDEAS NCBR. He is interested in data efficiency and works on topics such as Meta-Learning and Self-Supervised Learning.

Omar Rivasplata photo

Omar Rivasplata

University of Manchester

Co-authors:

Paul Blomstedt, Diego Mesquita, Omar Rivasplata, Jarno Lintusaari, Tuomas Sivula, Jukka Corander, Samuel Kaski

Contributed talk 24: Meta-analysis of Bayesian Analyses

Saturday / 9 November 13:00 - 13:25 Hall B (CfC Session 8)

Abstract:

Meta-analysis aims to generalize results from multiple related statistical analyses through a combined analysis. While the natural outcome of a Bayesian study is a posterior distribution, traditional Bayesian meta-analyses proceed by combining summary statistics (i.e. point-valued estimates) computed from data. In this talk, I will present work with collaborators proposing a framework for combining posterior distributions from multiple related Bayesian studies into a meta-analysis. Importantly, the method is capable of reusing pre-computed posteriors from computationally costly analyses, without needing the implementation details from each study. Besides providing a consensus across studies, the method enables updating the local posteriors post-hoc and therefore refining them by sharing statistical strength between the studies, without rerunning the original analyses. The wide applicability of the framework is illustrated by combining results from likelihood-free Bayesian analyses, which would be difficult to carry out using standard methodology.

Biography:

Omar Rivasplata's top-level topics of interest are statistical learning theory and machine learning theory. In the limit of tending to praxis, these days he is very interested in strategies to train and certify machine learning models. Currently Omar is Senior Lecturer (Associate Professor) in Machine Learning in the Department of Computer Science at The University of Manchester, where he is a member of the Manchester Centre for AI Fundamentals and a supervisor at the UKRI AI CDT in Decision-Making for Complex Systems. Before joining The University of Manchester (July 2024), he held positions at University College London and DeepMind. Omar has a PhD in Mathematics (University of Alberta, 2012) and a PhD in Statistical Learning Theory (University College London, 2022). Back in the day he studied undergraduate maths (BSc 2000, Pontificia Universidad Católica del Perú).

/ Posters

Maciej Szymkowski photo

Maciej Szymkowski

Białystok University of Technology

Poster 1: Machine learning models for analysis of behavior of Engineered Heart Tissues (EHTs)

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Nowadays, one of the most important issues is connected with the preparation of new medications. It is especially hard when it comes to organs like the human heart: it is not easy to appropriately test a medication without applying it to the organ, and such a procedure is extremely risky and can even lead to the patient's death. To reduce that risk, scientists proposed testing medications on synthetically grown tissues and even single cells. However, here we can observe another problem. To find changes in cell/tissue behavior (e.g., its stoppage), a lab worker needs to observe the cell constantly through the microscope, which is hard because such changes may appear only after a couple of hours. That is why the author of this work proposes a procedure by which the behavior of the cell can be automatically assessed at each second of its lifetime (also after application of a new medication), in real time. This procedure combines Machine Learning and Deep Learning techniques: Convolutional Neural Networks (CNNs) are used to recognize the stage of systole or diastole, whilst methods like Support Vector Machines (SVMs) are used to detect the areas that take part in the process of contraction. Digital signal/image processing methods are also used to increase the quality of the image and the visibility of the cell. It needs to be pointed out that all experiments were performed on a database of 3D videos (a single cell is observable in the center of the scene); more than 50 videos were used to develop the proposed techniques. The obtained results were discussed with experienced biologists and bioengineers from the Institute of Human Genetics, Polish Academy of Sciences (Poznań, Poland). On the basis of these discussions, it was shown that the algorithm is precise and effective enough to be used in real biological experiments.

Biography:

M.Sc., B.Sc. Eng. in Computer Science. He is keen on artificial intelligence and digital signal processing and analysis (especially in the field of medicine). Right now, he is a Research Assistant at Białystok University of Technology and CTO of a startup called Bobomed.care. He is interested in new technologies (especially those connected with Computer Vision). His work is summarized in 39 research publications (published in JCR journals, conference proceedings, and as book chapters) and 8 non-scientific papers, as well as plenty of successfully completed projects (also funded by the European Union) and tasks. He loves to increase his knowledge by participating in conferences as well as reading papers and books. Privately, he is a fan of football and travel (he is in love with Switzerland and Spain) and the proud owner of a Commodore 64 (on which he still loves to create software).

Antoni Zajko photo

Antoni Zajko

Warsaw University of Technology

Co-authors:

Katarzyna Woźnica

Poster 2: Are encoders able to learn landmarkers?

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Effectively representing heterogeneous tabular datasets for meta-learning purposes is still an unsolved problem. Previous approaches rely on representations that are intended to be universal. This paper proposes two novel methods for tabular representation learning tailored to a specific meta-task: warm-starting Bayesian Hyperparameter Optimization (HPO). The first involves deep metric learning, while the second is based on landmarker reconstruction. We evaluate the proposed methods both by their efficiency in the target meta-task and with an evaluation method of our own. Experiments demonstrate that while the proposed encoders can effectively learn representations aligned with landmarkers, they may not directly translate to significant performance gains in HPO warm-starting.

Biography:

Student at Warsaw University of Technology with experience in Data Science, doing ML research in his spare time.

Kacper Trębacz photo

Kacper Trębacz

PrecisionArt

Co-authors:

Jack Henry Good, Artur Dubrawski

Poster 3: Improving performance of distributed learning through density estimation

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

In domains like healthcare and the military, training data is often distributed among independent clients who cannot share data due to privacy concerns or limited bandwidth. This scenario is typically addressed by federated learning, where a central server coordinates communication of models between clients. However, when a central server is absent, clients must train models in a distributed, peer-to-peer manner. Ongoing work by the Auton Lab is developing a novel approach based on function space regularization to train models in a distributed fashion with low communication overhead. In their work, the authors have shown that for certain high-dimensional data sets, performance is far lower than that of a central model. This is partially because function space regularization enforces agreement on the whole domain, which overly penalizes models for disagreeing on out-of-distribution data. To address this issue, we propose an extension to the Distributed AI framework that leverages density estimates to appropriately weight the function space regularization. We also show an example implementation using Gaussian Mixture Models and Decision Trees. Furthermore, we present benchmark results on popular machine learning data sets as well as synthetically created data sets.

Biography:

Kacper Trębacz is a Master's student in Data Science at Warsaw University of Technology and an AI Research Intern at Carnegie Mellon University. His research focuses on developing new algorithms for distributed learning. Kacper is also the co-founder of Precision Art, a startup that develops AI-driven tissue analysis pipelines to improve medical diagnostics. He has previously collaborated with the Institute of Bioorganic Chemistry, Polish Academy of Sciences (PAN), contributing to AI and biomedical research projects.

Łukasz Staniszewski photo

Łukasz Staniszewski

Warsaw University of Technology

Co-authors:

Kamil Deja, Łukasz Kuciński

Poster 4: Unrevealing Hidden Relations Between Latent Space and Image Generations in Diffusion Models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Denoising Diffusion Probabilistic Models (DDPMs) achieve state-of-the-art performance in synthesizing new images from random noise, but they lack a meaningful latent space that encodes data into features. Recent DDPM-based editing techniques try to mitigate this issue by inverting images back to their approximated starting noise. In this work, we propose to study the relation between the initial Gaussian noise, the samples generated from it, and their corresponding latent encodings obtained through the inversion procedure. First, we interpret their spatial distance relations to show the inaccuracy of the DDIM inversion technique by localizing the manifold of latent representations between the initial noise and generated samples. On top of this observation, we explain the nature of image interpolation and editing through linear combinations of latent encodings, revealing the origin of their significant limitations. Finally, we demonstrate the peculiar relation between the initial Gaussian noise and its corresponding generations during diffusion training, showing that the high-level features of generated images stabilize rapidly, keeping the spatial distance relationship between noises and generations consistent throughout the training.

Biography:

Łukasz Staniszewski is a graduate student researcher at the Computer Vision Lab at the Warsaw University of Technology. He completed his bachelor's degree with honors, and his work on a novel object detection architecture earned him the Best Engineering Thesis of 2024 award from the 4Science Institute. Łukasz's experience involves research on Large Language Models at Samsung R&D Institute and a research internship on Diffusion Models at the SprintML lab in CISPA, Germany. Currently, he is involved in several projects focused on Image Generation tasks, with plans to continue his research career in this field through PhD studies.

Kinga Kwoka photo

Kinga Kwoka

Warsaw University of Technology

Co-authors:

Mateusz Zembroń

Poster 5: Model fusion for multimodal prediction of plant species composition

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Prediction of plant species composition across space and time at a fine resolution is crucial for biodiversity management, inventorying, and the creation of high-resolution maps. However, the annotation of large datasets with multi-label species information is resource-intensive. The GeoLifeCLEF 2024 competition addresses this by posing a challenge of learning from a small amount of high-quality presence-absence multi-label data and a large number of presence-only single-label samples. The training dataset consists of five million plant observations across Europe, supplemented by various environmental data such as remote sensing imagery, land cover, and climate variables. To approach the multimodal aspect of the problem, we propose an architecture using feature fusion of a Vision Transformer (ViT-B/32) and two convolutional networks (ResNet18). Each modality is processed by the network best suited to it, and the outputs are concatenated into a single vector representing the combined features from all modalities. To address the class imbalance arising from detecting a small number of species among numerous possibilities, we employ a focal loss function that down-weights the influence of easy negatives. This model fusion approach, which integrates multiple deep learning models, achieved promising results, securing 13th place in the GeoLifeCLEF CVPR 2024 competition.
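The down-weighting of easy negatives can be sketched with the standard binary focal loss applied per label; gamma and the toy logits below are illustrative, not the competition's configuration:

```python
import numpy as np

# Minimal multi-label focal loss sketch (Lin et al.'s binary focal loss,
# applied independently per species label).
def focal_loss(logits, targets, gamma=2.0):
    p = 1.0 / (1.0 + np.exp(-logits))            # per-label probabilities
    pt = np.where(targets == 1, p, 1.0 - p)      # probability of the true label
    # (1 - pt)^gamma shrinks the loss of confident (easy) predictions
    return np.mean(-((1.0 - pt) ** gamma) * np.log(pt + 1e-12))

logits = np.array([3.0, -4.0, 0.1])   # confident positive, easy negative, hard negative
targets = np.array([1, 0, 0])
# The easy negative (logit -4) contributes almost nothing once down-weighted,
# so the hard negative dominates the loss.
loss = focal_loss(logits, targets)
assert loss > 0
```

With gamma = 0 this reduces to ordinary binary cross-entropy.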

Biography:

Kinga Kwoka is a Master's student at Warsaw University of Technology studying Computer Science with focus on artificial intelligence. Her professional experience includes a data analyst role at Deloitte. She is broadly interested in computer vision as well as multi-modal machine learning.

Filip Ręka photo

Filip Ręka

AGH University of Krakow

Poster 6: Generating music with Large Language Models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

The work explores the possibilities of music generation using large language models (LLMs). The aim of the study is to analyze the architectures of language models and their application in the process of music creation. The paper focuses on the structural similarities between music and text, arguing that these two forms of expression share common sequential features, which allows the adaptation of language models for the generation of musical compositions. The paper also provides an overview of existing formats for the digital representation of music that can be used to train generative models. By conducting a series of experiments using a variety of architectures, such as transformers and state space models, the effectiveness of different approaches in the context of music generation was analyzed. An attempt was made to evaluate the generated musical fragments using commercially available LLMs or models trained to understand music. The work offers new perspectives on the potential of LLMs in the field of artificial musical creativity, paving the way for further research in this fascinating interdisciplinary space.

Biography:

Filip Ręka has just started his PhD at AGH University of Krakow, with a thesis on developing a domain-specific LLM for cancer treatment. Outside of AI, his interests are music, cycling, and airplanes.

Bartłomiej Sadlej photo

Bartłomiej Sadlej

University of Warsaw

Co-authors:

Bartłomiej Sobieski, Jakub Grzywaczewski

Poster 7: Region-constrained Visual Counterfactual Explanations

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Visual counterfactual explanations (VCEs) have recently become widely recognized for their ability to enhance the interpretability of image classifiers. This surge in interest is driven by the potential of VCEs to highlight semantically relevant factors that can alter a classifier's decision. Nevertheless, we contend that current leading methods lack a critical feature – region constraint – which hinders the ability to draw clear conclusions and can even foster issues like confirmation bias. To overcome the limitations of prior approaches, which alter images in a highly entangled and scattered fashion, we introduce region-constrained VCEs (RVCEs). These constrain modifications to a specific region of the image in order to influence the model's prediction. To efficiently generate examples from this subclass of VCEs, we present Region-Constrained Counterfactual Schrödinger Bridges (RCSB), which adapt a tractable subclass of Schrödinger Bridges to handle conditional inpainting, with the conditioning signal coming from the classifier of interest. Our approach not only establishes a new state-of-the-art, but also allows for exact counterfactual reasoning, ensuring that only the predefined region is semantically modified, and allows the user to interactively engage with the explanation generation process.

Biography:

Student and practitioner diving deep into different branches of Machine Learning, with special interest in practical, explainable, and simple solutions guided by observing nature. Currently working on diffusion models.

Dawid Płudowski photo

Dawid Płudowski

Warsaw University of Technology

Co-authors:

Katarzyna Woźnica

Poster 8: Adaptivee: Adaptive Ensemble for Tabular Data

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Ensemble methods are widely used to improve model performance by combining multiple models, each contributing uniquely to predictions. Traditional ensemble approaches often rely on static weighting schemes that do not account for the varying effectiveness of individual models across different subspaces of the data. This work introduces adaptivee, a dynamic ensemble framework designed to optimize performance for tabular data tasks by adjusting model weights in response to specific data characteristics. The adaptivee framework offers flexibility through various reweighting strategies, including emphasizing single models for subspace specialization or distributing importance among models for robustness. Experiments on the OpenML-CC18 benchmark demonstrate that adaptivee can significantly boost performance, achieving up to a 6% improvement in balanced accuracy over traditional static ensemble methods. This framework opens new avenues for advancing ensemble techniques, particularly in tabular data contexts where model complexity is constrained by the nature of the data.

Biography:

Data Science student working at a research laboratory at WUT. Interested in AutoML and time series analysis.

Emilia Wiśnios photo

Emilia Wiśnios

Independent

Co-authors:

Gracjan Góral

Poster 9: When All Options Are Wrong: Evaluating Large Language Model Robustness with Incorrect Multiple-Choice Options

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

The ability of Large Language Models (LLMs) to identify multiple-choice questions that lack a correct answer is a crucial aspect of educational assessment quality and an indicator of their critical thinking skills. This paper investigates the performance of various LLMs on such questions, revealing that models experience, on average, a 55% reduction in performance when faced with questions lacking a correct answer. The study also highlights that Llama 3.1-405B demonstrates a notable capacity to detect the absence of a valid answer, even when explicitly instructed to choose one. The findings emphasize the need for LLMs to prioritize critical thinking over blind adherence to instructions and caution against their use in educational settings where questions with incorrect answers might lead to inaccurate evaluations. This research establishes a benchmark for assessing critical thinking in LLMs and underscores the ongoing need for model alignment to ensure their responsible and effective use in educational and other critical domains.

Biography:

University of Warsaw graduate with a Master's in Machine Learning. Specializes in Natural Language Processing (NLP), large language models, and the intersection of NLP with political science.

Mateusz Panasiuk photo

Mateusz Panasiuk

AI Investments

Poster 10: Bayesian Ensemble Learning for Robust Time Series Forecasting in Financial Markets

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

In the volatile world of financial markets, accurate time series forecasting is essential for making informed decisions. While traditional ensemble methods have their strengths, they often fall short in capturing the uncertainty and variability that can significantly affect model performance over time. In this work, we introduce a Bayesian ensemble learning approach designed specifically for forecasting financial time series. By integrating past model errors into a probabilistic framework, our method dynamically adjusts the weights of individual models within the ensemble. This is achieved through Bayesian inference, which allows us to estimate the posterior distributions of model weights and biases, leading to a more robust aggregation of predictions that naturally accounts for uncertainty. We test the proposed approach across a diverse set of financial instruments, showing that it consistently improves forecasting accuracy and stability when compared to conventional methods. Beyond improved performance, the Bayesian framework also provides deeper insights into the confidence of our predictions, offering a quantifiable measure of uncertainty that is particularly valuable for risk management and strategic planning in financial contexts. Our results indicate that this method not only enhances predictive accuracy but also provides crucial guidance in understanding the underlying dynamics of the market. This research advances the field of financial forecasting by presenting a method that effectively balances precision and uncertainty, offering a powerful tool for both practitioners and researchers aiming to better navigate the complexities of financial markets.
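One way to picture error-informed weighting is to give each model a weight proportional to the likelihood of its past residuals under a Gaussian error model. This is a deliberately simplified sketch of the idea, not the full Bayesian inference described above; the residuals and sigma are toy assumptions:

```python
import numpy as np

# Toy error-informed ensemble weighting: models with smaller historical
# forecast errors receive larger weight in the combined prediction.
def posterior_weights(past_errors, sigma=1.0):
    # Log-likelihood of each model's residuals under N(0, sigma^2),
    # up to a constant shared by all models.
    loglik = -0.5 * np.sum(np.asarray(past_errors) ** 2, axis=1) / sigma**2
    w = np.exp(loglik - loglik.max())   # numerically stabilized softmax
    return w / w.sum()

errors = [[0.1, -0.2, 0.1],   # model A: small past errors
          [1.5, -1.0, 2.0]]   # model B: large past errors
w = posterior_weights(errors)
preds = np.array([101.0, 98.0])        # current forecasts of A and B
combined = float(w @ preds)            # weighted ensemble forecast
assert w[0] > w[1]                     # the better model dominates
```

As new residuals arrive, recomputing the weights lets the ensemble adapt over time, which is the dynamic behavior the abstract emphasizes.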

Biography:

Mateusz Panasiuk is a specialist in Machine Learning and Mathematical Modelling, with a strong focus on Bayesian statistics and stochastic processes. Currently, he serves as the Chief Scientific Officer at AI Investments, where he leads research and development efforts in reinforcement learning, machine learning, and statistical modeling within the financial sector. His approach to problem-solving emphasizes clarity and efficiency, allowing him to drive impactful solutions across various scientific domains. Mateusz's academic background includes an MD from the Medical University of Białystok and a BSE in Applied Computer Science from the Warsaw University of Technology, where he is now pursuing an MSE. He is also an active contributor to the data science community, regularly presenting his work at major conferences. Outside of his professional life, he is passionate about mathematics, classical music, scuba diving, and the conservation of turtles and tortoises.

Stanisław Pawlak photo

Stanisław Pawlak

Warsaw University of Technology

Co-authors:

Bartłomiej Twardowski, Tomasz Trzciński, Joost van de Weijer

Poster 11: Addressing the Devastating Effects of Single-Task Data Poisoning in Exemplar-free Continual Learning

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Our research addresses the overlooked security concerns related to data poisoning in continual learning (CL). Data poisoning – the intentional manipulation of training data to affect the predictions of machine learning models – was recently shown to be a threat to CL training stability. While existing literature predominantly addresses scenario-dependent attacks, we propose to focus on a simpler and more realistic single-task poisoning (STP) threat. In contrast to previously proposed poisoning settings, in STP adversaries lack knowledge of and access to the model, as well as to both previous and future tasks. During an attack, they only have access to the current task within the data stream. Our study demonstrates that even within these stringent conditions, adversaries can compromise model performance using standard image corruptions. We show that STP attacks are able to strongly disrupt the whole continual training process, decreasing both the stability (performance on past tasks) and the plasticity (capacity to adapt to new tasks) of the algorithm. Finally, we propose a high-level defense framework for CL along with a poison task detection method based on task vectors.

Biography:

Stanisław Pawlak is a Ph.D. student and AI researcher working at Warsaw University of Technology. He received an M.Sc. degree in data science and a B.Sc. in applied computer science from the Warsaw University of Technology. Stanisław coauthored multiple publications at top-tier AI conferences, including NeurIPS 2023 and CVPR 2024. He also worked as a programmer, an AI engineer building ML-powered applications, and an AI consultant. His research focuses on continual learning, generative models, and ML security. His latest efforts aim to measure the influence of data poisoning attacks on continual learning and to propose defensive methods to improve the security of supervised and self-supervised training methods.

Oleksii Furman photo

Oleksii Furman

DataWalk / Wrocław University of Science and Technology

Co-authors:

Patryk Wielopolski, Jerzy Stefanowski, Maciej Zięba

Poster 12: Probabilistically Plausible Counterfactual Explanations with Normalizing Flows

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

In this presentation, I will introduce the domain of explainable artificial intelligence (XAI), focusing on counterfactual explanations, and present our method, PPCEF, a novel approach for generating probabilistically plausible counterfactual explanations. PPCEF stands out by integrating a probabilistic framework that aligns with the underlying data distribution while optimizing for plausibility. Unlike existing approaches, PPCEF directly optimizes the density function without assuming a specific parametric distribution, ensuring that counterfactuals are both valid and consistent with the data's probability density. By leveraging normalizing flows as powerful density estimators, PPCEF effectively captures complex, high-dimensional data distributions. Our novel loss function balances the need for a class change with maintaining similarity to the original instance, incorporating probabilistic plausibility. The unconstrained formulation of PPCEF allows for efficient gradient-based optimization, significantly accelerating computations and enabling future customization with specific counterfactual constraints.

Biography:

Oleksii Furman is a Machine Learning Researcher and Engineer with a Master’s degree from Wrocław University of Science and Technology and is currently pursuing a Ph.D. in Artificial Intelligence. His research primarily focuses on generative models and counterfactual explanations. In addition to his academic achievements, Oleksii plays a crucial role at DataWalk, where he applies his AI expertise to develop innovative data analytics solutions. His contributions include projects involving Large Language Models, advanced computer vision, and multilingual natural language processing systems.

Paulina Tomaszewska photo

Paulina Tomaszewska

Warsaw University of Technology

Poster 13: Position: Do Not Explain Vision Models Without Context

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Does the stethoscope in the picture make the adjacent person a doctor or a patient? This, of course, depends on the contextual relationship of the two objects. If it’s obvious, why don’t explanation methods for vision models use contextual information? The role of context has been widely covered in Natural Language Processing and Time Series but much less in Computer Vision. I will explain what contextual information within images is, using some real-world examples. I will outline how the issue of spatial context was addressed in the Deep Learning models and contrast it with the small number of works concerning the topic within the field of Explainable AI (XAI). I will show examples of failures of popular XAI methods when the spatial context plays a significant role. Finally, I will argue that there is a need to change the approach to explanations from 'where' to 'how'.

Biography:

Paulina Tomaszewska is a PhD student at the Warsaw University of Technology. She gained experience in the field of AI at universities in Singapore, South Korea, Austria and Switzerland. Her research covers Explainable AI, the importance of context in images and digital pathology.

Joanna Kaleta photo

Joanna Kaleta

Warsaw University of Technology; Sano Centre for Computational Medicine

Co-authors:

Kacper Kania, Tomasz Trzcinski, Marek Kowalski

Poster 14: LumiGauss: High-Fidelity Outdoor Relighting with 2D Gaussian Splatting

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Decoupling lighting from geometry using unconstrained photo collections is notoriously challenging. Solving it would benefit many users as creating complex 3D assets takes days of manual labor. Many previous works have attempted to address this issue, often at the expense of output fidelity, which questions the practicality of such methods. We introduce LumiGauss - a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting. Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps. We also propose a method for enhancing the quality of shadows, common in outdoor scenes, by exploiting spherical harmonics properties. Our approach facilitates seamless integration with game engines and enables the use of fast precomputed radiance transfer. We validate our method on the NeRF-OSR dataset, demonstrating superior performance over baseline methods. Moreover, LumiGauss can synthesize realistic images when applying novel environment maps.
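
The spherical-harmonics machinery the abstract alludes to can be illustrated in miniature: once an environment map is projected onto an SH basis, shading a normal reduces to a dot product, which is what makes precomputed radiance transfer cheap. A first-order sketch (an illustration, not LumiGauss itself):

```python
import math

SH_C0 = 0.28209479177387814   # sqrt(1 / (4*pi))
SH_C1 = 0.4886025119029199    # sqrt(3 / (4*pi))

def sh_basis(normal):
    # First-order (4-coefficient) real spherical harmonics at a unit normal.
    x, y, z = normal
    return [SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x]

def shade(normal, env_coeffs):
    # Environment lighting projected onto SH reduces shading to a dot
    # product between the basis at this normal and the env coefficients.
    return sum(c * b for c, b in zip(env_coeffs, sh_basis(normal)))
```

Swapping `env_coeffs` for a different environment map relights the scene without re-rendering, which is the "novel environment maps" capability the abstract describes.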

Biography:

Joanna Kaleta is a PhD student at the Warsaw University of Technology and the Sano Centre for Computational Medicine. She holds a Master’s degree in Computer Science from the Warsaw University of Technology. Joanna’s current research focuses on the intersection of computer graphics and deep learning, particularly in neural rendering. At Sano, she is part of the Health Informatics team, where she applies deep learning methods to image-guided therapy, working on advancements in medical technology.

Alicja Dobrzeniecka photo

Alicja Dobrzeniecka

Lingaro / NASK National Research Institute

Poster 15: Continual Learning of Multi-Modal Models

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

AI models can become obsolete after training as new data becomes available. Re-training large models is costly and energy inefficient. Continual Learning attempts to find a solution to one of the most challenging bottlenecks of current AI models - the fact that data distribution changes over time. In my poster I would like to show the capabilities of Continual Learning methods for multimodal models, and in particular for vision-language models such as CLIP. Vision-Language models can handle both textual and visual data, which has a wide range of use cases such as image analysis, object recognition and scene understanding, image captioning, answering visual questions, and more. I will present the current state of the art in applying Continual Learning to vision-language models, their limitations and opportunities for improvement, and the results of experiments on selected methods.

Biography:

Alicja Dobrzeniecka has been studying and researching AI for a number of years. She holds a Master of Science in Artificial Intelligence from the Vrije Universiteit Amsterdam and a Bachelor of Arts in Philosophy from the University of Gdansk. She has recently published an article entitled "A Bayesian Approach to Uncertainty in Word Embedding Bias Estimation" in Computational Linguistics (MIT Press Direct). Her Master's thesis focused on the interpretability of large language models such as BERT. Alicja shares some of her research with a wider audience by publishing on the Medium platform. She has commercial experience as a Data Scientist, developing machine learning and deep learning models for business. In her last role, she worked on the use of LLMs for machine translation applications. Alicja is currently focused on exploring the area of Continual Learning for multimodal models, which she believes will be a crucial direction for AI in the near future due to energy and resource constraints.

Valeriya Khan photo

Valeriya Khan

IDEAS NCBR, Warsaw University of Technology

Co-authors:

Kamil Deja, Bartłomiej Twardowski, Tomasz Trzcinski

Poster 16: Assessing the Impact of Unlearning Methods on Text-to-Image Diffusion Models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Text-to-image diffusion models like Stable Diffusion and Imagen set a new standard in generating photorealistic images. However, their widespread use raises concerns about the nature of the content they produce, particularly when models are trained on large datasets that may include inappropriate or copyrighted material. In response, various unlearning methods have been developed to effectively remove unwanted information. This research evaluates the impact of unlearning methods on the overall performance of text-to-image diffusion models. Specifically, we examine how unlearning certain content influences the models' ability to generate accurate and diverse images across different concepts. Through a series of experiments, we investigate potential trade-offs, such as unintended reductions in image quality or diminishing features related to the remaining classes. Our findings offer valuable insights into balancing the need to eliminate specific content with the goal of preserving the broader functionality and integrity of diffusion models.

Biography:

Valeriya Khan is a PhD student at IDEAS NCBR and Warsaw University of Technology with focus on continual learning and unlearning of generative models.

Katarzyna Zaleska photo

Katarzyna Zaleska

Warsaw University of Technology

Co-authors:

Łukasz Staniszewski*, Kamil Deja

Poster 17: Style and Object Low-Rank Continual Personalization of Diffusion Models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Diffusion models have demonstrated remarkable capabilities in image generation. Moreover, recent personalization methods such as Dreambooth combined with low-rank adaptation techniques allow fine-tuning pre-trained models to generate concepts or styles absent in the original training data. However, the application of personalization techniques across multiple consecutive tasks exhibits a tendency to forget previously learned knowledge. While recent studies attempt to mitigate this issue by combining trained adapters across tasks post-fine-tuning, we adopt a more rigorous regime and investigate the personalization of large diffusion models under a continual learning scenario. To that end, we compare four different methodologies: (1) naive LoRA adapter fine-tuning, (2) merging adapters with further fine-tuning through reinitialization, (3) leveraging orthogonalization during adapters reinitialization, and (4) updating only relevant (according to the current task) parameters of trained LoRA weights. Finally, the findings from our comprehensive experiments indicate improvements by decreasing forgetting over the baseline approach and provide a comprehensive evaluation of these methods, highlighting the nuances between style and object personalization in the context of continual fine-tuning.
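
Strategy (2), merging the trained adapter into the base weights and reinitializing it before the next task, can be sketched as follows (a minimal pure-Python illustration, not the experimental code):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(W, D):
    return [[w + d for w, d in zip(rw, rd)] for rw, rd in zip(W, D)]

class LoRALinear:
    """Base weight W plus a rank-r low-rank update B @ A."""
    def __init__(self, W, r):
        self.W = W
        self.r = r
        self._reinit_adapter()

    def _reinit_adapter(self):
        d_out, d_in = len(self.W), len(self.W[0])
        # B starts at zero, so a fresh adapter is a no-op (standard LoRA init).
        self.A = [[0.01] * d_in for _ in range(self.r)]
        self.B = [[0.0] * self.r for _ in range(d_out)]

    def effective_weight(self):
        return madd(self.W, matmul(self.B, self.A))

    def merge_and_reinit(self):
        # Strategy (2) from the abstract: fold the trained adapter into the
        # base weights, then reinitialize A and B before the next task.
        self.W = self.effective_weight()
        self._reinit_adapter()
```

The other strategies in the comparison differ only in what happens at the reinitialization step (orthogonalization, or selective updates of the existing adapter).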

Biography:

Katarzyna Zaleska is a master's student in Artificial Intelligence at the Warsaw University of Technology. Her bachelor's thesis on image-to-text synthesis won the "Engineer 4 Science 2023" competition organized by the 4Science Institute. She began her professional journey in the Natural Language Processing team at Samsung Research and Development Institute, focusing on researching, applying, and evaluating Large Language Models. In September 2024, Katarzyna joined the Snowflake team, working in the Document Understanding field, where she plans to further develop her expertise in NLP.

Nichal Ashok Narotamo photo

Nichal Ashok Narotamo

Zendesk

Co-authors:

David Aparício, Tiago Mesquita, Mariana Almeida

Poster 18: Efficient Intent Detection Across Multiple Industries: A Unified Approach

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

In order to provide efficient customer service support, it is important to have a model capable of detecting the intent of customer messages, allowing support agents not only to understand the content of the message promptly but also to react accordingly. In particular, user requests may come in from different domains (e.g. Software, E-commerce or Finance), so maintaining separate models for each industry or business can become impracticable as the client base grows. In this work we study an alternative approach to scaling intent detection to numerous clients by employing a single generic model accompanied by a per-client list of relevant intents. These lists of intents can be derived from historical customer data or even obtained directly from the client's feedback, granting a more customizable client experience. In addition to enabling clients to modify their list of relevant intents, our technique reduces costs by requiring less training and maintenance compared to managing several models associated with each domain. Furthermore, we describe a strategy where the client's list of intents is inserted as model features to accommodate changes in the intent lists. This method proves robust to client-made adjustments to such intent lists, which frequently occur in real-world situations whenever a client's domain changes. When compared to industry-specific models, we achieve greater performance with the augmented generic model with per-client intent lists, indicating the adaptability and capacity of the model to meet a wider range of client requirements.
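
A simpler variant of the idea (an illustration only; the poster inserts the per-client intent list as model features rather than applying a hard mask) is to restrict a single generic model's scores to each client's configured intents at inference time:

```python
def detect_intent(scores, client_intents):
    """scores: intent -> confidence from one shared generic model.
    client_intents: the per-client list of relevant intents."""
    allowed = {i: s for i, s in scores.items() if i in client_intents}
    return max(allowed, key=allowed.get) if allowed else None

# One shared model serves clients from different industries.
scores = {"refund": 0.7, "bug_report": 0.9, "loan_query": 0.2}
ecommerce = detect_intent(scores, ["refund", "loan_query"])
software = detect_intent(scores, ["bug_report", "refund"])
```

Editing a client's intent list changes its predictions immediately, with no retraining, which is the maintenance saving the abstract argues for.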

Biography:

Nichal Ashok Narotamo is based in Lisbon, Portugal. He completed his master's degree in Computer Science in 2022. After graduating, Nichal started an internship at Zendesk, where he gained valuable experience in the tech industry. After the internship, he transitioned into a role as a machine learning scientist, continuing his research within the Natural Language Processing field, specifically in customer service support systems.

Małgorzata Łazęcka photo

Małgorzata Łazęcka

University of Warsaw

Co-authors:

Małgorzata Łazęcka, Kazimierz Oksza-Orzechowski, Marcin Możejko, Daniel Schulz, Eike Staub, Marie Mourfouace, Henoch Hong, Ewa Szczurek

Poster 19: FACTM: a Bayesian model for integrating structured and tabular data

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Integrating information across various data modalities can be beneficial for gaining valuable insights into underlying phenomena. Numerous methods exist for multi-modal data integration, ranging from linear matrix factorization-based approaches to nonlinear methods employing e.g. deep generative models. However, linear integration becomes particularly challenging when one or more modalities exhibit complex structures, such as image-based or spatial ones, while others do not. In such cases, existing strategies often rely on preprocessing structured data as an initial step. We present FACTM, a novel method that leverages a Bayesian probabilistic graphical model to address this challenge. Our approach combines two models. Firstly, it uses a correlated topic model (CTM), a widely used technique in text mining, to uncover the structure present in specific modalities of the data. In particular, the CTM part identifies meaningful clusters and shares information about the observation-wise changes of fractions of specific clusters with the second component of the model. Secondly, it employs multi-modal factor analysis (FA), a matrix factorization technique commonly used in fields where interpretation is critical. This FA component integrates information and identifies common latent factors shared across all modalities, including structured data. Importantly, our model extracts information from complex modalities and runs factor analysis simultaneously, allowing both components of the model to potentially enhance each other's performance. Optimal parameters are determined using Bayesian variational inference. When structured data consists of documents with text, the topics are defined in the standard way. In the context of spatial imaging of single cells, the CTM component of FACTM clusters spatial niches (analogous to sentences), which are groups of cells (analogous to words), into niche types (topics).
These niche types are defined as the distribution of cell types within them. Meanwhile, the FA component integrates information from various structured and plain modalities to uncover common latent factors. In the poster, we will provide a detailed description of the model, along with results demonstrating its practical application. We will present findings derived from multi-omics data obtained from the IMMUcan consortium. Funding: IMI2 JU grant agreement 821558, supported by EU’s Horizon 2020 and EFPIA, Polish National Science Centre SONATA BIS grant No. 2020/38/E/NZ2/00305.

Biography:

Małgorzata Łazęcka is a scientist and statistician with experience in biomedical research. She received her Ph.D. from the Warsaw University of Technology, where her research focused on hypothesis testing, specifically on conditional independence testing using information-theoretic measures. Currently, she is a postdoctoral researcher in Ewa Szczurek's lab, working on new approaches to integrating multi-modal data for tumor analysis.

Dominik Lewy photo

Dominik Lewy

Lingaro Group

Co-authors:

Karol Piniarski

Poster 20: Beyond Benchmarks: What to consider when evaluating foundational models for commercial use?

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

This presentation provides a comprehensive overview of critical considerations for utilizing foundational models within commercial use cases, with a focus on Computer Vision and Natural Language Processing domains. It outlines a systematic framework comprising essential steps for verification. Additionally, the presentation illuminates the process through examples of evaluation protocols, offering practical insights into assessing model performance and applicability in real-world scenarios. The analysis will concern mainly generative models, particularly text-to-image synthesis, and Large Language Models (LLMs). Through this detailed exploration, participants will gain a deeper understanding of the strategic and technical prerequisites for leveraging foundational models to drive innovation and efficiency in commercial applications.

Biography:

Dominik has over 10 years of hands-on experience in Machine Learning, Deep Learning, Data Exploration and Business Analysis projects, primarily in the FMCG industry. He is a technical leader setting goals and preparing roadmaps for projects. He is also a PhD candidate at the Warsaw University of Technology, where he focuses on the study of neural networks for image processing. He tries to be a bridge between the commercial and academic worlds. His main research interest is digital image processing in the context of facilitating the adoption of deep learning algorithms in business settings where training data is scarce or non-existent.

Jędrzej Warczyński photo

Jędrzej Warczyński

Poznan University Of Technology

Co-authors:

Mateusz Lango, Ondrej Dusek

Poster 21: Interpretable Rule-Based Data-to-Text Generation Using Large Language Models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

In the field of natural language generation (NLG), converting structured data into coherent text poses significant challenges. "Interpretable Rule-Based Data-to-Text Generation Using Large Language Models" introduces a novel approach that integrates the interpretability and precision of rule-based systems with the generative power of large language models (LLMs). This method focuses on generating Python code to transform RDF triples into readable text, achieving a balance between accuracy and flexibility.

Approach: The core innovation lies in automating the creation of a rule-based system using LLMs. The process involves three key steps. (1) Rule Generation: an LLM is prompted to write Python code that specifies how to convert given RDF triples into natural language text. (2) Rule Testing: the generated code is checked for syntactic correctness and its output is compared to desired references to ensure alignment. (3) Rule Refinement: the code undergoes iterative refinement using silver-standard references, reducing hallucinations and enhancing accuracy. This approach leverages the strengths of both rule-based and neural methods, creating a system that runs efficiently on a single CPU without the need for GPU resources.

Experimental Results: Evaluations on the WebNLG dataset demonstrate that this system outperforms zero-shot LLMs in BLEU and BLEURT scores, and significantly reduces hallucinations compared to a fine-tuned BART model. The system's interpretability allows for easy modification and extension by developers, providing high control over the output.

Highlights: The system achieves higher text quality than zero-shot LLMs. It produces fewer hallucinations than a fine-tuned BART baseline. The rule-based approach offers full interpretability and control over generated text. It operates efficiently on a single CPU, eliminating the need for costly GPU resources.

This research presents a promising step towards creating efficient, interpretable, and flexible NLG systems by combining the strengths of rule-based and neural approaches. It opens new avenues for further advancements in the field, particularly in multilingual text generation.
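
The three-step loop (generate a rule, test it, refine it with feedback) can be sketched as follows; `llm` is a stand-in callable, and the fake LLM below only illustrates the control flow, not the actual prompting:

```python
def refine_rule(llm, triple, reference, max_iters=3):
    """Generate-test-refine: ask an LLM for Python code that verbalizes an
    RDF triple, re-prompting with feedback until the code runs and matches
    the reference text."""
    feedback = ""
    for _ in range(max_iters):
        code = llm(triple, feedback)          # 1. rule generation
        env = {}
        try:
            exec(code, env)                   # 2. rule testing: does it run?
            out = env["verbalize"](*triple)
        except Exception as exc:
            feedback = f"error: {exc}"
            continue
        if out == reference:
            return code                       # accepted rule
        feedback = f"got {out!r}, want {reference!r}"  # 3. refinement signal
    return None

# A fake LLM that "fixes" its rule after one round of feedback.
def fake_llm(triple, feedback):
    if not feedback:
        return "def verbalize(s, p, o):\n    return s"           # wrong rule
    return "def verbalize(s, p, o):\n    return f'{s} {p} {o}'"  # fixed rule

rule = refine_rule(fake_llm, ("Warsaw", "capitalOf", "Poland"),
                   "Warsaw capitalOf Poland")
```

Once a rule passes, the LLM is no longer needed: the accepted Python code is the system, which is why inference runs on a single CPU and stays fully inspectable.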

Biography:

Jędrzej Warczyński is a computer scientist pursuing a Master's degree in Artificial Intelligence at Poznań University of Technology. He earned his Bachelor's degree in Computer Science with honors from the same institution. With over two years of experience as a full-stack Java developer, Jędrzej has contributed to building robust web applications. His research focuses on natural language processing and natural language generation (NLG). His recent paper, "Interpretable Rule-Based Data-to-Text Generation Using Large Language Models," was accepted for oral presentation at INLG 2024.

Pisula Juan Ignacio photo

Pisula Juan Ignacio

University of Cologne

Co-authors:

Katarzyna Bozek

Poster 22: Addressing data heterogeneity in federated learning with Mixture-of-Experts models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Federated Learning (FL) offers a solution to collaborative learning when sharing private data is not possible. However, domain shift among the different clients in the federation remains an important challenge. When this occurs, the Federated Averaging (FedAvg) strategy typically performs poorly, as each client optimizes towards its local empirical risk minimum, which may be inconsistent with the global direction. Handling this issue is not only of theoretical interest, but could be critical in real-world scenarios, for example, in medical applications where each client acquires geographically-biased data using its own protocol. The problem of non-independent, identically distributed (non-iid) data in FL has been studied mainly in situations where it is the distribution of labels that shifts among clients, and there is limited work on data originating from different domains. In the FL literature, non-iid distributions are commonly addressed with novel federated algorithms that train a better global model, or that include local models that mitigate the biases of their respective clients. In this work, we study how the domain shift problem can be overcome by using Mixture-of-Experts architectures (MoEs). The MoE layers that we employ compute their output as a linear combination of the outputs of a pool of experts, where the coefficients are predicted by a router network. Furthermore, if the routing to the experts is sparse, the computation of unused experts can be spared, providing a boost in inference speed. Our experiments show that the ability of MoEs to process different inputs with different experts can be exploited to automatically deal with data heterogeneity among clients, and a single global model can be trained even with a naive FedAvg strategy without compromising performance. Additionally, we report an increase in accuracy when the gradients of the MoE layers are estimated using a heuristic. Overall, we show that MoEs are a solid solution for federated scenarios where data heterogeneity is a concern.
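
The MoE layer described above, a router-weighted combination of expert outputs with an optional sparse top-1 mode, can be sketched as follows (a minimal illustration, not the experimental code):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MoELayer:
    """Output = sum_k router_k(x) * expert_k(x). With top-1 (sparse)
    routing, the unused experts are never evaluated."""
    def __init__(self, experts, router_weights):
        self.experts = experts      # each expert: list -> list
        self.rw = router_weights    # one router weight vector per expert

    def forward(self, x, sparse=False):
        logits = [sum(w * xi for w, xi in zip(wk, x)) for wk in self.rw]
        coeffs = softmax(logits)
        if sparse:
            k = max(range(len(coeffs)), key=coeffs.__getitem__)
            return self.experts[k](x)   # compute only the chosen expert
        outs = [e(x) for e in self.experts]
        return [sum(c * o[i] for c, o in zip(coeffs, outs))
                for i in range(len(outs[0]))]

moe = MoELayer(
    experts=[lambda x: [2.0 * x[0]], lambda x: [-x[0]]],
    router_weights=[[5.0], [-5.0]],   # router strongly favors expert 0
)
```

In the federated setting, the router learns to send each client's inputs to the experts that suit that client's domain, so one global model can absorb the heterogeneity.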

Biography:

Electronics engineer born and raised in La Pampa.

Jolanta Śliwa photo

Jolanta Śliwa

AGH University of Krakow

Co-authors:

Paulina Jędrychowska, Bogumiła Papiernik, Oskar Simon

Poster 23: Application of machine learning to support pen & paper RPG game design

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Many real-life scenarios require the prediction of ordinal variables, a task well-suited for ordinal regression methods. These methods include classical regression models with rounding as well as specialized approaches designed specifically for ordinal data. In this talk, we will introduce the topic of ordinal regression and its algorithms and evaluation methods. One of many possible applications of these methods is determining challenge levels of monsters in pen & paper RPG games. Currently there is no automatic way to estimate these levels. However, it is a natural task for machine learning, as opponents are described by long vectors of numerical features and levels are ordinal values. Usage of ordinal regression can help reduce costs for publishers during the design process. We will describe the experiments, evaluation framework, and results.
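
The two families of methods mentioned above can be contrasted in a few lines. The sketch below is illustrative only, with made-up levels, features and cutpoints rather than anything from the actual experiments:

```python
LEVELS = [1, 2, 3, 4, 5]   # hypothetical ordinal challenge levels

def round_to_level(pred):
    # Classical approach: ordinary regression output, rounded to the
    # nearest ordinal level.
    return min(LEVELS, key=lambda lv: abs(lv - pred))

def cumulative_predict(features, weights, thresholds):
    # Specialized (cumulative-link style) approach: a latent score is
    # compared against ordered cutpoints; the predicted level is given by
    # how many cutpoints the score exceeds.
    score = sum(w * f for w, f in zip(weights, features))
    return LEVELS[sum(score > t for t in thresholds)]
```

The second form respects the ordering of the levels directly, which is why ordinal evaluation measures tend to favor it over naive rounding.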

Biography:

Jolanta Śliwa is a Data Science student at the AGH University of Krakow. As part of her engineering thesis, she co-developed an application that supports the design of opponents in a pen & paper RPG game using Machine Learning. For this reason, Jolanta has recently been spending her free time playing this type of game, and she also immerses herself in the fascinating world of animation.

Bartłomiej Fliszkiewicz photo

Bartłomiej Fliszkiewicz

Military University of Technology

Poster 24: Repurposing Pharmaceuticals for Organophosphorus Poisoning

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Organophosphorus (OP) compounds, found in pesticides and chemical warfare agents (CWA), continue to pose a significant global health risk. There are still over 385 million cases of unintended acute pesticide poisoning annually, mostly in southern Asia and east Africa, resulting in approximately 11,000 deaths, despite a global push to reduce the use of pesticides. There is also a significant number of intentional OP poisonings, mostly in suicide attempts. The number of OP poisonings could escalate rapidly in the event of terrorist incidents, warfare and other crises. Due to the geographic focus of the problem and the relatively small number of cases, there is little interest in developing new drugs against OP poisoning. Notably, the most used antidote, pralidoxime (2-PAM), was developed in the 1950s. Most studied antidotes are charged molecules and therefore poorly penetrate the blood-brain barrier. Repurposing existing pharmaceuticals offers a strategic solution to the lack of interest in developing novel antidotes, as drug discovery is both costly and time-consuming. This study employs a structure-based method for repurposing compounds from the ChEMBL database. A machine learning model constructed with the Light Gradient Boosting Machine algorithm is applied to classify compounds as active or inactive in treating organophosphorus poisoning. The training database was created by curating PubChem compounds tested against acetylcholinesterase (gene ID 43), focusing on bioassays containing the terms "reactivation" and "nimp", "gb", "sarin", "sp-gbc" or "sp-gb-am". The model was trained using the structural representations of 62 molecules. The approach was evaluated using the Leave-One-Out cross-validation method, yielding an area under the ROC curve of 0.93. Since 52 of the training molecules contained an oxime moiety, the classification was limited to such compounds. Among the 34 oximes from the ChEMBL database, 16 were classified as active and chosen for further analysis, including protein-ligand docking.
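
As an illustration of the evaluation protocol (the study used a Light Gradient Boosting Machine model; here `fit` and `predict` are hypothetical stand-ins), Leave-One-Out cross-validation and the area under the ROC curve can be sketched as:

```python
def auroc(labels, scores):
    # AUROC = probability that a random positive outscores a random
    # negative (ties count half).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def loo_scores(X, y, fit, predict):
    # Leave-One-Out: train on all molecules but one, score the held-out one.
    out = []
    for i in range(len(X)):
        model = fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        out.append(predict(model, X[i]))
    return out
```

With only 62 training molecules, LOO makes the most of the data: every molecule serves as a test case exactly once, and the pooled held-out scores give a single AUROC estimate.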

Biography:

Bartek is a research assistant at the Department of Radiology and Contamination Monitoring at the Military University of Technology. His scientific interest is in cheminformatics and drug design. He is planning to obtain a PhD soon and start postgraduate studies in bioinformatics. As a hobby project Bartek developed an Android app called Gaslands Builder.

Jan Dubiński photo

Jan Dubiński

Warsaw University of Technology; IDEAS NCBR

Co-authors:

Piotr Warchoł, Maciej Kafel

Poster 25: Efficiently enhancing product design process with Stable Diffusion

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Navigating the landscape of product design demands a nuanced approach, necessitating considerable time and expertise. The integration of AI-aided design systems, though promising, demands substantial resource allocation. In response to these imperatives, this work introduces an innovative solution poised to enhance and expedite the product design process. Leveraging the state-of-the-art Stable Diffusion model, a cutting-edge framework for image generation, our approach offers a streamlined and resource-efficient methodology to empower the product design process. Our solution demonstrates three key capabilities: 1) facilitating the generation of new product designs and styles observed on e-commerce platforms; 2) swiftly creating product prototypes based on existing products or new designer sketches; 3) enabling precise modifications to product designs according to the designer's preferences. Leveraging the DreamBooth technique, we seamlessly incorporate new styles or products with minimal input data, diversifying design possibilities dynamically. Precision is attained through the ControlNet mechanism, informed by a visual prior, aligning output with a desired product shape. Finally, a masking mechanism allows for product editing to enhance customization. Notably, our solution requires only a single GPU with 8 GB of memory. Successfully developed, tested, and applied at Eljot Sp. z o. o., specialists in wooden product design, our solution showcases the potential to revolutionize and accelerate the product design process.

Biography:

Jan Dubiński is currently pursuing a PhD degree in deep learning at the Warsaw University of Technology. He is a member of the ALICE Collaboration at LHC CERN. Jan has been working on fast simulation methods for High Energy Physics experiments at the Large Hadron Collider at CERN. The methods developed in this research leverage generative deep learning models such as GANs to provide a computationally efficient alternative to existing Monte Carlo-based methods. More recently, he has focused on issues related to the security of machine learning models and data privacy. His latest efforts aim to improve the security of self-supervised and generative methods, which are often overlooked compared to supervised models.

Bartosz Cywiński photo

Bartosz Cywiński

Warsaw University of Technology

Co-authors:

Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski, Łukasz Kuciński

Poster 26: GUIDE: Guidance-based Incremental Learning with Diffusion Models

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

We introduce GUIDE, a novel continual learning approach that directs diffusion models to rehearse samples at risk of being forgotten. Existing generative strategies combat catastrophic forgetting by randomly sampling rehearsal examples from a generative model. Such an approach contradicts buffer-based approaches where sampling strategy plays an important role. We propose to bridge this gap by incorporating classifier guidance into the diffusion process to produce rehearsal examples specifically targeting information forgotten by a continuously trained model. This approach enables the generation of samples from preceding task distributions, which are more likely to be misclassified in the context of recently encountered classes. Our experimental results show that GUIDE significantly reduces catastrophic forgetting, outperforming conventional random sampling approaches and surpassing recent state-of-the-art methods in continual learning with generative replay.
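
GUIDE builds on classifier guidance. In its standard form (from the diffusion-model literature, not specific to GUIDE), Bayes' rule splits the conditional score into an unconditional score plus a classifier term:

```latex
\nabla_{x_t} \log p(x_t \mid y)
    = \nabla_{x_t} \log p(x_t) + \nabla_{x_t} \log p(y \mid x_t)
```

In practice the sampler's noise prediction is shifted by the classifier gradient, scaled by a guidance weight $s$: $\hat{\epsilon} = \epsilon_\theta(x_t) - s\,\sigma_t\,\nabla_{x_t}\log p_\phi(y \mid x_t)$. As the abstract describes, choosing $y$ to target classes at risk of confusion with recently learned ones steers the generated rehearsal samples toward exactly the information the continually trained model is forgetting.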

Biography:

Bartosz Cywiński is a student at Warsaw University of Technology. His main research interests are mechanistic interpretability and continual learning. He previously interned at IDEAS NCBR and CISPA.

David Bertram photo

David Bertram

University of Cologne

Co-authors:

Katarzyna Bozek, Michael Sommerauer

Poster 27: Boosting Neurodegenerative Disorder Screenings with Machine Learning

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

REM sleep behavior disorder (RBD) is a critical biomarker for the early stages of alpha-synucleinopathies such as Parkinson's Disease, Lewy-Body-Dementia, or Multi-System-Atrophy. Early detection is crucial for advancing fundamental research and improving patient well-being. The current gold standard for diagnosis, polysomnography, is cost and labor-intensive and requires specialized sleep experts. To optimize the screening process, this research explores the use of wrist-worn accelerometers to make a pre-selection of high vs low-risk patients and thereby making RBD patients in a prodromal state visible to the medical sector. In this poster, I present my research on developing a tree-based machine-learning classifier that discriminates between RBD patients and healthy controls. The model, trained on a cohort of 116 patients, demonstrates high performance, achieving 0.92±0.06 AUROC in nested cross-validation and 0.86 on an independent test set. This research highlights that in certain medical contexts, traditional machine learning models can be highly effective, often providing robust, interpretable results that meet the demands of real-world applications.

Biography:

During his physics studies, David Bertram applied deep learning techniques to high-energy physics data, with a particular emphasis on using natural language processing (NLP) methods for time-series analysis. After completing his master’s degree, David shifted his focus to medical applications, where he explored the use of adversarial generative models to augment medical datasets. Currently, he is a PhD student under the supervision of Prof. Katarzyna Bozek at the University of Cologne, working on the application of machine learning and deep learning techniques for the early detection of neurodegenerative diseases.

Łukasz Sztukiewicz photo

Łukasz Sztukiewicz

Poznan University of Technology / GHOST

Co-authors:

Ignacy Stępka, Michał Wiliński

Poster 28: Enhancing Fairness in Neural Networks through Debiasing Techniques

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Achieving fairness in machine learning models is a critical challenge, particularly when balancing the trade-off between model performance and fairness. This research proposes an investigation into various existing fairness-enhancing techniques, including post-hoc fairness methods and intra-processing techniques. It systematically compares the effectiveness of these methods in minimizing the harm caused by the performance-fairness trade-off. The analysis provides insights into how these techniques can be most effectively employed in real-world applications.
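
As one concrete example of a post-hoc method (an illustration, not necessarily one of the techniques compared in this research), demographic-parity thresholding picks a separate decision threshold per protected group so that positive prediction rates match:

```python
def parity_thresholds(scores, groups, rate=0.5):
    """Pick a per-group score threshold so each group gets (roughly) the
    same fraction of positive predictions - a simple post-hoc debiasing
    step applied after the model is trained."""
    thresholds = {}
    for g in set(groups):
        gs = sorted(s for s, gg in zip(scores, groups) if gg == g)
        k = int(len(gs) * (1.0 - rate))
        thresholds[g] = gs[min(k, len(gs) - 1)]
    return thresholds

# Group "b" systematically receives higher scores than group "a".
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
thr = parity_thresholds(scores, groups)
preds = [int(s >= thr[g]) for s, g in zip(scores, groups)]
```

The trade-off the abstract studies shows up immediately here: equalizing rates can flip correct predictions, so overall performance may drop as fairness improves.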

Biography:

Łukasz Sztukiewicz is pursuing a Bachelor of Science in Artificial Intelligence at the Poznan University of Technology. He participated in the prestigious Robotics Institute Summer Scholar Programme at Carnegie Mellon University and currently serves as one of the leaders of the Students' Scientific Group "Group of Horribly Optimistic STatisticians" (GHOST).

Bartosz Ptak photo

Bartosz Ptak

Poznan University of Technology

Co-authors:

Marek Kraft

Poster 29: Making marine biologists' life easier with computer vision - porpoise detection and tracking in coastal waters

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Recent advancements in deep learning and computer vision have expanded the possibilities for detecting and tracking marine animals, including through drone surveillance. However, existing tools remain inadequate for precise detection, particularly in complex seabed environments. This project addresses this gap by developing a robust solution for detecting porpoises in video sequences captured by aerial robots. The proposed method integrates classical and deep learning techniques to enhance detection accuracy and improve trajectory tracking. Furthermore, it enables detailed analyses, such as identifying keypoints like the tongue and tail, which facilitate animal measurement and movement monitoring. This tool is especially valuable for biological researchers studying porpoise behaviour in coastal waters, as it significantly reduces the need for manual labelling. The effectiveness of the solution is validated using real-world drone footage from shallow water environments.

Biography:

Bartosz has been affiliated with Poznan University of Technology (PUT) since 2021. He received a Bachelor of Engineering in Computer Science and graduated with an honours Master's Degree in Automatic Control and Robotics in July 2021. Currently, he is a PhD student at PUT in the discipline of automation, electronics, electrical engineering, and space technologies, and a Computer Vision Engineer at the Institute of Robotics and Machine Intelligence.

Małgorzata Kurcjusz-Gzowska photo

Małgorzata Kurcjusz-Gzowska

Institute of Civil Engineering, Warsaw University of Life Sciences; Faculty of Mathematics and Information Science, Warsaw University of Technology

Co-authors:

Piotr Januszewski

Poster 30: Application of the R-CNN Algorithm for Street Light Detection as a Light Pollution Estimation Tool

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

Light pollution is a significant ecological and social problem that negatively affects human health, disrupts the natural circadian rhythms of organisms, and interferes with the functioning of ecosystems, leading to serious environmental consequences. According to current knowledge, one of the most important factors contributing to the increase in light pollution is not the intensity of light, but the large number of light sources in a given area. In this study, an object-detection model was used to detect street lights in aerial photos taken at night by a drone. In particular, the focus was on using the R-CNN algorithm for fast and efficient detection of street lights. The image processing model was trained on collected and hand-annotated data from eight selected areas of Warsaw. The model showed high precision in locating street lights, which is of great importance for urban planners in developing strategies to reduce light pollution and optimize the layout of urban lighting.

Biography:

Małgorzata Kurcjusz-Gzowska is a PhD student in the field of Civil Engineering; her research focuses on Artificial Intelligence in Architecture. Her recent project revolves around using object detection to measure light pollution in Warsaw from drone footage. She graduated from both Architecture and Civil Engineering at Warsaw University of Technology.

Karol Szymański photo

Karol Szymański

Tooploox

Co-authors:

Szymon Płaneta

Poster 31: Comparing Large Language Models in Retrieval-Augmented Generation: A Multi-Metric Evaluation

Friday / 8 November 17:10 - 18:40 (Poster Session 1)

Abstract:

The rapid evolution of generative AI has led to widespread use of Large Language Models (LLMs) in various industries. However, a comprehensive comparison highlighting their strengths and weaknesses is often lacking. This study aims to fill that gap by evaluating popular open-source and commercial LLMs, including GPT-3.5, GPT-4, GPT-4 Turbo, Mistral, and Llama13B, in conjunction with Retrieval Augmented Generation (RAG) systems. Our methodology involved a standardized dataset, a set of relevant questions, and a suite of metrics like answer correctness, faithfulness, and context relevance. The results revealed significant performance variations across models, with GPT-4 generally providing the most accurate answers. Interestingly, open-source models like Mistral demonstrated competitive performance, particularly in faithfulness. Furthermore, while GPT-4 was the only model to admit to lack of necessary information, others tended to generate hallucinated responses when unable to provide accurate answers. This study underscores the importance of choosing the right LLM for specific use cases and the potential of open-source models as viable alternatives to their commercial counterparts.
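To give a flavour of what a faithfulness-style metric measures, here is a deliberately simple token-overlap proxy; the study's actual metrics (e.g. LLM-judged faithfulness) are more sophisticated, and the strings below are invented:

```python
def faithfulness_proxy(answer, context):
    """Toy proxy for RAG faithfulness: the share of answer tokens that are
    grounded in the retrieved context. Real evaluations are far more involved."""
    ans = answer.lower().split()
    ctx = set(context.lower().split())
    return sum(t in ctx for t in ans) / len(ans)

print(faithfulness_proxy("the capital is Paris",
                         "Paris is the capital of France"))  # → 1.0
```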

Biography:

Karol Szymański completed his Master’s degree in 2017, focusing on the application of autoencoders in herding tasks. After graduating, he worked at Intel and Amazon, gaining experience in the industry. Since 2020, he has been working at Tooploox, where he focuses on building deep learning-based solutions for image processing.

Aleksander Obuchowski photo

Aleksander Obuchowski

TheLion.AI

Co-authors:

Mikołaj Badocha, Kinga Marszałkowska, Maciej Gierczak, Barbara Klaudel

Poster 32: Eskulap - The First Polish Open-source Medical Large Language Model

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Although large language models like GPT, Gemini and Claude have demonstrated promising capabilities in processing the Polish language and representing basic medical knowledge, their application in production medical environments faces significant limitations, including issues with data privacy, lack of transparency and control over the model, and instability over time. To address these challenges and increase the potential of AI utilisation in Polish medicine, we have developed Eskulap, an open-source medical model designed for safe integration with hospital infrastructure. This model addresses the previous lack of dedicated medical models in Polish, similar to those available in English. To build Eskulap, we gathered medical information from a diverse array of sources: medical websites, Polish Wikipedia, healthcare flyers, scientific publications, and anonymised clinical notes. The cornerstone of our data strategy was the creation of 800,000 synthetic medical instructions, transforming unstructured data into a rich learning foundation. We then used Bielik-v2 as the base model and fine-tuned it with LoRA to align it with medical instructions. This new model aims to open unprecedented possibilities for AI applications in Polish healthcare while addressing key challenges related to privacy, control, and stability. It has the potential to transform various aspects of medical practice, from assisting in documentation to supporting clinical decision-making.

Biography:

Aleksander Obuchowski is a co-founder of TheLion.AI, a research group devoted to creating AI-based open-source solutions for healthcare. He has worked on projects such as the Universal Medical Image Encoder and the Polish medical language model Eskulap. He is Head of AI at K-2.AI and a lecturer at the Polish-Japanese Academy of Information Technology, and was named to the Forbes 25 under 25 list.

Miłosz Gajowczyk photo

Miłosz Gajowczyk

Hemolens Diagnostics

Poster 33: Evaluation of AI-Based Coronary Artery Calcium Scoring in Non-Contrastive Cardiac CT

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

In recent years, artificial intelligence (AI) based tools have grown in popularity in medical image processing. They offer a rapid and non-invasive way of monitoring diseases and assessing their progression. In coronary artery disease (CAD) monitoring, a key diagnostic feature is the so-called calcium score, which is extracted from computed tomography (CT) scans. This feature measures the amount of calcification collected inside the patient's vessels, obstructing healthy blood flow and subsequently causing cardiac ischemia. This work focuses on the problem of evaluating coronary artery calcium scoring methods on non-contrastive cardiac CT. Practical issues related to convolutional neural networks, such as local relationships between the heart muscle and small anatomical objects such as plaques, are discussed. Additionally, we validate different models used to detect abnormalities in medical images using artificial intelligence.

Biography:

With four years of experience as a researcher in medical AI, Miłosz Gajowczyk specializes in applying advanced techniques to computed tomography and magnetic resonance imaging modalities. His work focuses on segmentation, point detection, and mesh deformation in cardiac and brain scans. Although his professional work centers on cardiac scans, he has a strong interest in advancing brain scan research.

Łukasz Niedźwiedzki photo

Łukasz Niedźwiedzki

Faculty of Physics, University of Warsaw

Co-authors:

Dr Józef Ginter

Poster 34: Improving Physics-Informed Neural Networks for Modeling Molecular Transport in the Human Brain

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

This study focuses on enhancing the quality and efficiency of physics-informed neural networks (PINNs) for modeling molecular transport in the human brain, particularly for estimating diffusion coefficients from MRI data. While PINNs are effective in solving partial differential equations (PDEs), they struggle with noisy data. The goal is to refine the PINN approach to improve its reliability and consistency, making it a viable alternative to traditional methods like the finite element method (FEM) for solving inverse problems in medical imaging. Building on the work of Zapf et al. (2022), various enhancements to the standard physics-informed neural network formulation are explored to improve performance with noisy data. This involves experimenting with different neural network architectures and incorporating operator learning to better capture complex dynamics. Techniques such as tuning the loss function and using adaptive refinement of training points are applied to enhance data fidelity and parameter estimation. Both synthetic and real-life test cases, as well as comparisons with classical methods, are used to evaluate the impact of these improvements on the accuracy and efficiency of estimating diffusion coefficients from MRI data. Preliminary results indicate that the improved PINN approach enhances quality and efficiency in estimating the diffusion coefficient from MRI data. Comparisons between different neural network architectures and classical methods, like the finite element method, show promise, but further tests are required to confirm these findings. While early findings suggest that the improved PINN method offers advantages in accuracy and efficiency over traditional approaches, decisive conclusions cannot yet be drawn. More extensive testing and validation are necessary to establish the robustness and general applicability of the proposed improvements in clinical settings.
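The core idea of estimating a diffusion coefficient with a PINN-style composite loss (data misfit plus PDE residual) can be illustrated without any neural network. In this toy sketch, a one-parameter analytic family u(x, t; d) = exp(-d·k²·t)·sin(k·x), which solves the diffusion equation u_t = d·u_xx, stands in for the network, and a grid search stands in for training; all numbers are illustrative and unrelated to the study's MRI data:

```python
import math

K, D_TRUE = 1.0, 0.5

def u(x, t, d=D_TRUE):
    # Analytic solution of the diffusion equation u_t = d * u_xx.
    return math.exp(-d * K * K * t) * math.sin(K * x)

def pinn_style_loss(d, pts, h=1e-4):
    """Two-term PINN-style objective: data misfit at observation points plus
    the squared PDE residual, evaluated here with finite differences."""
    data = sum((u(x, t, d) - u(x, t)) ** 2 for x, t in pts)
    def residual(x, t):
        u_t = (u(x, t + h, d) - u(x, t - h, d)) / (2 * h)
        u_xx = (u(x + h, t, d) - 2 * u(x, t, d) + u(x - h, t, d)) / h ** 2
        return (u_t - d * u_xx) ** 2
    return data + sum(residual(x, t) for x, t in pts)

# A grid search over candidate diffusion coefficients recovers the true one:
pts = [(0.3, 0.1), (0.7, 0.2), (1.1, 0.5)]
best = min((i / 100 for i in range(1, 101)), key=lambda d: pinn_style_loss(d, pts))
print(best)  # → 0.5
```

A real PINN replaces the analytic family with a network, the finite differences with automatic differentiation, and the grid search with gradient descent; the two-term loss structure is the same.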

Biography:

Łukasz Niedźwiedzki is a master's student at the Faculty of Physics, University of Warsaw, and an experienced Machine Learning Engineer.

Artsiom Ranchynski photo

Artsiom Ranchynski

IDEAS NCBR

Poster 35: Cayley-Maze: Reinforcement Learning environment with variable complexity

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

One of the key challenges in contemporary Reinforcement Learning (RL) is effectively addressing problems of an algorithmic nature. A deeper understanding of the computational expressive power of architectures could greatly aid in solving such problems and in designing more effective architectures. To address this, we introduce the Cayley-Maze, an open-ended RL environment with variable computational complexity. The Cayley-Maze environment offers several notable features:
- Any finite RL environment can be represented as a specific instance of the Cayley-Maze.
- The Cayley-Maze naturally extends and generalizes classical problems such as sorting, solving the Rubik's Cube, and integer factorization.
Additionally, this environment is particularly valuable for Unsupervised Environment Design and Curriculum Learning, where the core strategy involves gradually increasing the complexity of proposed problems.
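To make the construction concrete, one can picture such an environment as a set of states acted on by a fixed set of generator moves, with sorting via adjacent transpositions as one instance. This is our own toy illustration of that idea, not the authors' implementation:

```python
class ToyCayleyMaze:
    """States are tuples; actions are named generator functions acting on states.
    The episode ends when the goal state is reached."""
    def __init__(self, start, goal, generators):
        self.state, self.goal, self.generators = start, goal, generators

    def step(self, action):
        self.state = self.generators[action](self.state)
        return self.state, self.state == self.goal  # (observation, done)

def swap(i):
    # Generator that swaps positions i and i+1 (an adjacent transposition).
    return lambda s: s[:i] + (s[i + 1], s[i]) + s[i + 2:]

# Sorting (2, 1, 3) with adjacent transpositions as the generator set:
env = ToyCayleyMaze((2, 1, 3), (1, 2, 3), {"swap0": swap(0), "swap1": swap(1)})
state, done = env.step("swap0")
print(state, done)  # → (1, 2, 3) True
```

Swapping the generator set for cube moves or modular multiplications yields the Rubik's Cube and factorization-style instances the abstract mentions.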

Biography:

Artsiom Ranchynski is a PhD student at the University of Warsaw and an ML Researcher at IDEAS NCBR. Previously, he was a mathematics student at the University of Wrocław.

Wojciech Kusa photo

Wojciech Kusa

Allegro / TU Wien

Co-authors:

Mikołaj Pokrywka, Mikołaj Koszewski, Mieszko Rutkowski

Poster 36: Multimodal and Contextualised Machine Translation in the E-commerce Domain

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

The most popular paradigm in modern Neural Machine Translation (NMT) is sentence-level translation. However, this approach often lacks broader context, leading to challenges with homonyms, transliterations, pronouns, and stylistic nuances. Moreover, texts in the e-commerce domain possess a specific grammatical structure, often lacking clear sentence boundaries. To address these limitations, this research investigates whether incorporating additional context from metadata and images can enhance translation quality, particularly for offer titles and product names in the e-commerce domain. Leveraging recent advancements in vision and language modeling, including large language models (LLMs) such as LLaVA and PaliGemma, we developed and evaluated multimodal translation models tailored for the e-commerce domain. Our methodology involved creating a new multimodal dataset for Polish-to-Czech translation, comprising over 200,000 parallel sentences paired with product images. We used LoRA and backtranslation techniques for fine-tuning LLMs. Additionally, we evaluated our experiments using several test sets designed to highlight translation ambiguities. Our primary research questions focused on the impact of contextual information and the role of image data in improving translation quality. Results indicate that image context improves model performance on both lexical and neural evaluation metrics, particularly for more ambiguous sentences. We also demonstrate that open-source LLMs can struggle with zero- and few-shot machine translation for less common language pairs. Notably, we observed that LLMs trained solely on text data still benefited from visual cues during inference, resulting in an increase of more than 1% in BLEU and COMET scores. Furthermore, we explored the impact of adversarial examples, such as the use of incorrect images during inference, and found that such inputs can degrade LLM performance. 
In conclusion, we believe that context-based information can improve the fluency and adequacy of NMT for short texts.

Biography:

Wojciech is a researcher specialising in Information Retrieval and Natural Language Processing. He is currently a Senior Research Engineer at Allegro ML Research, where he works at the intersection of large language models and machine translation. He holds a PhD in NLP from TU Wien, where his research focused on the application and evaluation of neural methods on domain-specific data. He was also a member of the EU Horizon 2020 Project DoSSIER, where he specialised in biomedical natural language processing. His research interests include machine translation, health NLP, and AI for scientific research discovery. Previously, he worked as an NLP Research Engineer at Samsung R&D and interned at Sony and UNINOVA.

Szymon Rusiecki photo

Szymon Rusiecki

KN BIT

Co-authors:

Michał Wiliński, Ignacy Stępka, Cecilia Morales Garza, Kimberly Elenberg, Luke Sciulli, Kyle Miller, Artur Dubrawski

Poster 37: Bayesian Network for Prediction of Vital Functions by Autonomous Triage

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Mass Casualty Incidents (MCIs) pose significant challenges for emergency medical systems, requiring rapid and efficient assessment of numerous casualties amidst limited resources and potential hazards. This paper introduces a novel approach for monitoring and evaluating patients in MCI scenarios through the use of a Bayesian Network that integrates data from various machine learning algorithms. The proposed solution combines outputs from independent algorithms that monitor vital signs such as heart rate, respiration rate, and severe hemorrhage. Unlike traditional methods, our Bayesian network was developed solely based on expert knowledge, without reliance on training datasets. This Bayesian approach offers several key benefits: it supports inference with incomplete patient data, enhances the reliability and consistency of vital sign detection, and demonstrates robustness against errors and biases from external factors. Additionally, it facilitates faster decision-making in critical situations. Our method has the potential to significantly improve the triage process in MCI scenarios, helping responders and medical personnel to prioritize care more effectively. By bridging the gap between machine learning algorithms and medical expertise, our solution aims to improve patient assessment tools in mass casualty response efforts, ultimately contributing to a higher number of lives saved in emergencies.
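The inference-with-incomplete-data property is easy to see in a miniature expert-specified network: with a prior over the patient state and per-detector conditional probabilities, missing detectors are simply marginalized out. The numbers and detector names below are hypothetical illustrations, not the authors' network:

```python
# Hypothetical expert-style parameters, for illustration only.
PRIOR = {"critical": 0.2, "stable": 0.8}
# P(detector fires | patient state), one entry per vital-sign algorithm:
LIKELIHOOD = {
    "hr_abnormal": {"critical": 0.9, "stable": 0.1},
    "rr_abnormal": {"critical": 0.8, "stable": 0.2},
}

def posterior_critical(evidence):
    """P(state = critical | observed detector outputs).
    Detectors absent from `evidence` are left out of the product, which is
    exactly marginalization when detectors are conditionally independent
    given the patient state."""
    joint = {}
    for state, p in PRIOR.items():
        for det, fired in evidence.items():
            p_fire = LIKELIHOOD[det][state]
            p *= p_fire if fired else 1.0 - p_fire
        joint[state] = p
    return joint["critical"] / sum(joint.values())

print(round(posterior_critical({"hr_abnormal": True}), 3))                       # → 0.692
print(round(posterior_critical({"hr_abnormal": True, "rr_abnormal": True}), 3))  # → 0.9
```

With no evidence at all, the posterior falls back to the prior, which is what lets the network degrade gracefully when sensors drop out in the field.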

Biography:

Szymon Rusiecki is a fourth-year student at the AGH University of Science and Technology in Kraków. He is interested in explainable artificial intelligence and quantum computing. He also loves karaoke.

Bugra Altug photo

Bugra Altug

Vienna University of Technology

Co-authors:

Martin Weise, Andreas Rauber

Poster 38: Generating Semantic Context for Data Interoperability in Relational Databases using BGE M3-Embeddings

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Relational databases store a significant portion of the world's most valuable data by managing it in tables. It is a common misconception that expressive table and column names give researchers sufficient context to reuse the data adequately. Without proper (machine-)understandable context, researchers face a data interoperability problem, i.e. the challenge of confidently interpreting data due to a lack of context. This problem becomes evident when sharing data in data repositories such as DBRepo [1] that make it findable, accessible, interoperable and reusable to anyone globally. To aid researchers in the difficult task of mapping a relational database schema to ontologies that describe the conceptual and quantitative context, we use a structure-level matching method based on machine learning. Our semi-automatic mapping system utilizes the BGE M3-Embedding model [2] to encode column names and ontology entity labels. It employs cosine similarity to identify the best-matching ontology concept for each table column. This approach successfully matches 89.9% of the contextually correct semantic concepts and units of measurement within the first 10% of all entities (entity coverage) and achieves a Mean Reciprocal Rank (MRR) of 0.5259, outperforming all other approaches. A similar approach is employed to calculate the similarity between columns and unit of measurement entities, with an encoding method that appends the keyword "unit" to the entity labels. This achieves a 64.4% entity coverage and an MRR of 0.1164, also surpassing all other tested approaches. Let C = {c1, …, cn} be the columns of a relational database table schema. Each column ci, 1 ≤ i ≤ n, may optionally have exactly one concept entity (semantic concept) eci and exactly one unit of measurement entity eui assigned. Based on the column names C, our approach suggests top-level ontologies as well as semantic concepts and units of measurement for each column.
Users can correct the suggested semantic concepts by selecting different concept entity mappings. This selection influences the identification of the respective column's unit of measurement. We evaluated the efficacy of our method by observing expert users who were briefly trained on the user interface. A researcher needs 1.36 clicks on average to correctly map a column ck to a concept entity eck, which makes our method 9.8 times faster than typing the entity names manually. Correctly mapping a unit of measurement entity euk to a column ck requires 5.745 clicks on average, making our method 2.48 times faster than manually typing the entity names. Note that the number of clicks is calculated by simulating user interactions; e.g., clicking on a drop-down in our user interface counts as one click. [1] Weise, M., Staudinger, M., Michlits, C., Gergely, E., Stytsenko, K., Ganguly, R., & Rauber, A. (2022). DBRepo: a Semantic Digital Repository for Relational Databases. International Journal of Digital Curation, 17(1), 11. DOI: 10.2218/ijdc.v17i1.825 [2] Chen, J., Xiao, S., Zhang, P., Luo, K., Lian, D., & Liu, Z. (2024). BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation (Version 4). DOI: 10.48550/ARXIV.2402.03216.
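The matching step itself is compact: embed column names and ontology labels, rank entities by cosine similarity, and score the ranking with the reciprocal rank of the correct entity. A self-contained sketch with tiny made-up vectors standing in for the 1024-dimensional BGE M3 embeddings:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy stand-ins for embedding vectors (the real model returns 1024-d vectors):
column_emb = {"temp_c": [0.9, 0.1, 0.2]}
ontology_emb = {
    "Temperature": [0.85, 0.15, 0.25],
    "Pressure":    [0.1, 0.9, 0.3],
    "Humidity":    [0.2, 0.3, 0.9],
}

def rank_concepts(col_vec):
    """Ontology entities sorted by cosine similarity to the column embedding."""
    return sorted(ontology_emb, key=lambda e: cosine(col_vec, ontology_emb[e]),
                  reverse=True)

ranking = rank_concepts(column_emb["temp_c"])
mrr = 1.0 / (ranking.index("Temperature") + 1)  # reciprocal rank of the correct concept
print(ranking[0], mrr)  # → Temperature 1.0
```

Averaging the reciprocal ranks over all columns gives the MRR figures reported in the abstract.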

Biography:

Bugra Altug holds an MSc in Logic and Computation from Vienna University of Technology. His research focuses on Knowledge Graphs, the Semantic Web, and Natural Language Processing.

Joanna Krawczyk photo

Joanna Krawczyk

University of Warsaw

Co-authors:

Krzysztof Gogolewski, Aleksandra Możwiłło, Marcin Możejko, Michał Traczyk, Daniel Schulz, Sylvie Rusakiewicz, Stephanie Tissot, Eike Staub, Marie Morfouace, Henoch Hong, Ewa Szczurek

Poster 39: Spacelet - uncovering patterns of infiltration in the multiplex immunofluorescence data

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Introduction: Spatial proteomics, such as multiplex immunofluorescence (mIF), is a medical imaging technique that generates multi-channel data at single-cell resolution from a tumour sample. When sampled for cohorts of cancer patients, spatial proteomics data can provide useful prognostic information and insights into the organisation of the tumour immune microenvironment (TME). Current approaches to spatial proteomics data analysis focus only on small cellular neighbourhoods, ignoring the broader spatial context. Method: Here, we introduce Spacelet, a novel algorithmic and statistical approach designed to discover patterns of infiltration within spatial tumour data. Given the nature of cancer spatial proteomics data, we take a graph-based approach to modelling and represent the data as a collection of disjoint spatial tumour cell islets (hence Spacelet). We then decompose these islets into a sequence of layers that start from the tumour border and go deep into the tumour interior. These structures are useful in capturing the variety of infiltration patterns, which we obtain by clustering cellular abundances at consecutive layers using the Wasserstein distance. Results: We validated the performance of Spacelet on 576 samples from 192 NSCLC2 patients, collected by the IMMUcan consortium (immucan.org), and 68 samples from 34 melanoma patients from the National Institute of Oncology (NIO), Poland. In both datasets, Spacelet identified the same patterns of infiltration (uniform and interior-excluded), which correlate strongly with clinical variables. Among others, in the NSCLC2 cohort Spacelet discovered that interior-excluded and uniform infiltration patterns differentiate histology subtypes, while in the melanoma cohort it linked interior-excluded infiltration with survival, progression, and response to treatment.
Conclusions: Results from the application of Spacelet to different datasets prove the efficacy of our approach in inferring biologically meaningful spatial characteristics of infiltration patterns in tumour tissues. With further applications of our method, we aim to deepen the understanding of the spatial organisation of the TME and provide insights into potential directions for improving immunotherapy treatment strategies.
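The Wasserstein distance used for clustering layer-wise abundance profiles has a particularly simple form in one dimension: for two equal-size empirical samples it is the mean absolute difference after sorting. A minimal sketch with hypothetical abundance values, unrelated to the cohorts above:

```python
def wasserstein_1d(a, b):
    """1-D earth mover's distance between two equal-size empirical samples:
    the average absolute difference after sorting (the optimal 1-D coupling)."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# Layer-wise immune-cell abundances for two toy islets (hypothetical numbers):
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # → 1.0
```

Feeding such pairwise distances into any standard clustering algorithm groups islets with similar infiltration profiles.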

Biography:

Joanna Krawczyk is a bioinformatics data scientist, deeply passionate about machine learning applications in healthcare, especially in the field of computational oncology. She graduated in Bioinformatics and Mathematics from the University of Warsaw. She currently works as a bioinformatics data scientist at a Polish biopharmaceutical company and as a researcher in Ewa Szczurek's Computational Medicine Laboratory, where she focuses on a research project on modeling the tumor immune microenvironment.

Mateusz Kapusta photo

Mateusz Kapusta

Astronomical Observatory, University of Warsaw

Poster 40: Iris-ML: Simulation-Based Inference for the Spectral Energy Distribution fitting.

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Markov chain sampling is a versatile algorithm used in modern astronomy for inference tasks. Unfortunately, it is not always suitable for large inference tasks where the evaluation of the likelihood function is computationally expensive. Here, an alternative approach is presented in the form of Simulation-Based Inference, which I use to tackle the Spectral Energy Distribution fitting problem. The basic idea of this type of analysis is to uncover the true physical properties of objects by fitting complicated physical models to broadband brightness measurements. Mastering the analysis of photometric data is essential for modern astronomical research. Based on a transformer architecture for preprocessing, the proposed model greatly accelerates the sampling process with the help of a MAF Normalizing Flow. Such models are a great step forward compared to the usually used MCMC, as they allow for a much faster sampling procedure. They will become more influential as the next generation of astronomical surveys produces unprecedented amounts of data that need to be processed.

Biography:

Mateusz Kapusta has been working in the field of Observational Astronomy for 3 years, mainly applying various Bayesian models in real-case astronomical scenarios. He is involved in research on Simulation-Based Inference, with applications to large astronomical surveys.

Emilia Majerz photo

Emilia Majerz

AGH University of Krakow

Co-authors:

Aleksandra Pasternak

Poster 41: Siamese Ensembles for image data augmentation

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Even the most powerful neural network architectures can be useless when provided with a small amount of training data. In this scenario, data augmentation techniques can help with generalization. However, they should be applied carefully, as some modifications can alter the labels of the samples, which may be difficult to spot without expert knowledge. In this work, we introduce a simple label-preserving image data augmentation technique, especially suitable for small datasets. This network training method expands the data by using pairs of images instead of single samples in an ensemble learning-like manner and is inspired by Siamese neural networks, with two networks working together to achieve a common goal. It can be easily implemented to improve the accuracy of various image classification tasks and is particularly useful for smaller datasets from the medical or technology industries. In our experiments, we focus on a difficult and very specific aircraft dataset containing images of aircraft fuselage structures with corroded and non-corroded surfaces. We also provide results on standard baseline data. Our preliminary experiments showed that the proposed augmentation improves classification accuracy by over a dozen percentage points, and the gain in accuracy is especially visible for smaller dataset sizes.
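The effective-dataset-growth argument behind pairing is simple: n images yield n(n-1)/2 unordered pairs. A toy sketch of pair construction, under the assumed (hypothetical) labeling scheme that a pair is positive when both images share a class:

```python
from itertools import combinations

def make_pairs(samples):
    """Build all unordered pairs of (image, label) samples; the pair label is 1
    when both images share a class (e.g. both corroded), 0 otherwise — one
    plausible labeling scheme, assumed here for illustration."""
    return [((xa, xb), int(ya == yb))
            for (xa, ya), (xb, yb) in combinations(samples, 2)]

# 4 labeled images → 6 training pairs instead of 4 single samples:
data = [("img0", 0), ("img1", 0), ("img2", 1), ("img3", 1)]
pairs = make_pairs(data)
print(len(pairs))  # → 6
```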

Biography:

Emilia Majerz is a PhD candidate at the AGH University of Krakow, working on theory-inspired Machine Learning. She holds an MSc in Data Science and a BEng in Computer Science. Her main research area is incorporating Physics knowledge into Machine Learning models, focusing on the detectors of ALICE at CERN.

Moritz Staudinger photo

Moritz Staudinger

TU Wien

Co-authors:

Wojciech Kusa, Florina Piroi, Aldo Lipani, Allan Hanbury

Poster 42: Beyond ChatGPT: A Reproducibility and Generalizability Study of Large Language Models for Query Generation

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Systematic literature reviews (SLRs) are a cornerstone of academic research, yet they are often labour-intensive and time-consuming due to the detailed literature curation process. The advent of generative AI and large language models (LLMs) promises to revolutionize this process by assisting researchers in several tedious tasks, one of them being the generation of effective Boolean queries that will select the publications to consider including in a review. This paper presents an extensive study of Boolean query generation using LLMs for systematic reviews, reproducing and extending the work of Wang et al. and Alaniz et al. Our study investigates the replicability and reliability of results achieved using ChatGPT and compares its performance with open-source alternatives like Mistral and Zephyr to provide a comprehensive analysis of LLMs for query generation. To this end, we implemented a pipeline that automatically creates a Boolean query for a given review topic using a previously selected LLM, retrieves all documents for this query from the PubMed database, and then evaluates the results. With this pipeline, we first assess whether the results obtained using ChatGPT for query generation are reproducible and consistent. We then generalize our results by analyzing open-source models and evaluating their efficacy in generating Boolean queries.
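The evaluation step of such a pipeline typically reduces to set-based retrieval metrics: compare the document IDs a generated Boolean query retrieves against the studies actually included in the review. A minimal sketch with invented PubMed IDs (not real judgments):

```python
def evaluate_query(retrieved, relevant):
    """Set-based precision/recall/F1 of the documents a Boolean query retrieves
    against the review's relevance judgments."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f1

# Hypothetical PubMed IDs, for illustration only:
p, r, f = evaluate_query(["p1", "p2", "p3", "p4"], ["p2", "p4", "p5"])
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.5 0.67 0.57
```

Averaging these per-topic scores across review topics gives a reproducibility comparison between query-generation models.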

Biography:

Moritz Staudinger is currently a PhD student working with Professor Allan Hanbury at the Data Science Research Unit at TU Wien. His research primarily focuses on scientific document processing for comparable research. During his Master's degree, he collaborated with Professor Andreas Rauber on the Database Repository Project, aimed at facilitating the citation of specific subsets of evolving databases. For his Master's Thesis, he worked with the International Soil Moisture Network on Dynamic Data Citation.

Jakub Steczkiewicz photo

Jakub Steczkiewicz

Jagiellonian University

Co-authors:

Georgii Stanishevskii, Tomasz Szczepanik, Sławomir Tadeja, Jacek Tabor, Przemysław Spurek

Poster 43: ImplicitDeepfake: Plausible Face-Swapping through Implicit Deepfake Avatar Generation using NeRF and Gaussian Splatting

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Numerous emerging deep-learning techniques have had a substantial impact on computer graphics. Among the most promising breakthroughs are Neural Radiance Fields (NeRFs) and, more recently, Gaussian Splatting (GS). These methods have demonstrated their potential across multiple domains, showcasing the ability to reconstruct a 3D representation of a scene from 2D images. In this work, we build on these existing technologies to propose a novel workflow for generating 3D deepfake representations. Our contribution introduces ImplicitDeepfake, an effective method that, from a single 2D image of a person (referred to below as a celebrity) and an auxiliary dataset of external images, obtains a high-quality 3D representation of the subject. Our pipeline is as follows: from a 2D image of a celebrity and a set of 2D pictures of an anonymous 3D model, along with associated camera positions (the NeRF Synthetic dataset), we achieve a plausible 3D deepfake of the celebrity, utilizing any of the aforementioned 3D representation solutions. Interestingly, a direct enhancement of the proposed pipeline adapts well to 4D use cases, where time is the fourth dimension. We show this using the existing NeRFace model, which handles dynamic facial expressions in video sequences. The results we obtain are of decent quality, despite the simplicity of the introduced modification. Additionally, we propose an extension of our approach that incorporates diffusion models, allowing for text-based modifications and customization of the generated avatars. By conditioning the input images with text prompts before 3D reconstruction, ImplicitDeepfake enables straightforward personalization of the avatars, enhancing their realism and adaptability. Overall, our work demonstrates a novel integration of 2D deepfake technology with cutting-edge 3D image generation techniques, resulting in a powerful tool for creating and customizing 3D digital identities from minimal input data.

Biography:

Jakub Steczkiewicz is a fifth-year Computer Science student specializing in Machine Learning at Jagiellonian University. With a strong passion for Machine Learning and its practical applications, Jakub enjoys exploring new solutions in this area. Outside of academics, he engages in swimming and volleyball.

Antoni Kowalczuk photo

Antoni Kowalczuk

CISPA Helmholtz Center for Information Security

Co-authors:

Jan Dubiński, Atiyeh Ashari Ghomi, Yi Sui, Jiapeng Wu, Jesse C. Cresswell, George Stein, Franziska Boenisch, Adam Dziedzic

Poster 44: Robust Self-Supervised Learning Across Diverse Downstream Tasks

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Large-scale vision models have become integral in many applications due to their unprecedented performance and versatility across downstream tasks. However, the robustness of these foundation models has primarily been explored for a single task, image classification. The vulnerability of other common vision tasks, such as semantic segmentation and depth estimation, remains largely unknown. We present a comprehensive empirical evaluation of the adversarial robustness of self-supervised vision encoders across multiple downstream tasks. Our attacks operate in the encoder embedding space and at the downstream task output level. In both cases, current state-of-the-art adversarial fine-tuning techniques tested only for classification significantly degrade clean and robust performance on other tasks. Since the purpose of a foundation model is to cater to multiple applications at once, our findings reveal the need to enhance encoder robustness more broadly. We propose potential strategies for more robust foundation vision models across diverse downstream tasks.
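The embedding-space attack mentioned above can be illustrated with a toy sketch. Everything here is a hypothetical simplification: a fixed random linear map stands in for a frozen SSL encoder, and the PGD-style loop is an assumed attack form, not the poster's actual method. The idea is to perturb the input within an L-infinity budget so that its embedding drifts as far as possible from the clean embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen SSL encoder: a fixed random linear map.
W = rng.normal(size=(8, 16))
encode = lambda x: W @ x

def embedding_space_attack(x, eps=0.1, steps=20, alpha=0.02):
    """PGD-style attack maximizing ||encode(x_adv) - encode(x)||^2
    under an L-infinity budget eps (a sketch, not the poster's method)."""
    z0 = encode(x)
    # Random start inside the budget so the first gradient is non-zero.
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # For a linear encoder, the gradient is 2 W^T (W x_adv - z0).
        g = 2 * W.T @ (encode(x_adv) - z0)
        x_adv = x_adv + alpha * np.sign(g)          # signed gradient ascent
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project back into the budget
    return x_adv

x = rng.normal(size=16)
x_adv = embedding_space_attack(x)
drift = np.linalg.norm(encode(x_adv) - encode(x))
```

The perturbation stays within the pixel-space budget while the embedding moves, mirroring the encoder-level threat model the poster evaluates; the downstream-output-level attacks would instead differentiate through the task head.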

Biography:

Antoni Kowalczuk is a PhD student at CISPA Helmholtz Center for Information Security in SprintML under the supervision of Adam Dziedzic and Franziska Boenisch, working on diffusion models' privacy and adversarial examples against SSL vision encoders. He obtained his Bachelor's degree in Computer Science from Warsaw University of Technology.

Julia May photo

Julia May

Pearson

Co-authors:

Krzysztof Sopyła

Poster 45: What do LLMs know about English language?

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Various available benchmarks for LLM evaluation focus on a broad area of LLM capabilities like mathematical reasoning, academic knowledge and factual question answering. However, there are significantly fewer benchmarks and datasets that focus on the linguistic capabilities of LLMs and their knowledge of topics related to the English language and grammar. As a research team focused mainly on applications of LLMs in English learning, we decided to create our own custom framework for evaluating the linguistic knowledge and language capabilities of LLMs. We prepared more than 20 different tasks, each measuring a specific linguistic skill of the LLMs, to enable comparison of state-of-the-art models for our applications. We also implemented an evaluation pipeline based on the designed tasks using Azure AI Studio and created a dashboard presenting the performance of different LLMs on our custom language tasks.

Biography:

For the last few years, Julia May has been working in the AI Science team at Pearson, focusing mainly on applications of NLP in services used to automatically generate English learning content and evaluate students’ responses. Recently she has focused on LLM research and applications of LLMs in English language learning. Julia graduated in Computer Science (Artificial Intelligence) from Poznań University of Technology. In her free time she practices bouldering, and she is interested in mathematics and neuroscience.

Filip Leonarski photo

Filip Leonarski

Paul Scherrer Institute

Poster 46: Big Data for Structural Biology at the Swiss Light Source Synchrotron Facility

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Macromolecular crystallography (MX) is a primary experimental technique for exploring the three-dimensional structure of macromolecules, such as proteins and ribonucleic acid molecules. Recent advances in MX allow visualization of such molecules at atomic resolution during reactions, with time resolution from femtoseconds to seconds. MX is also heavily used by pharmaceutical companies for structure-based drug design. MX is best performed at accelerator facilities, like synchrotrons and X-ray free electron lasers, which benefit from stable and bright X-ray beams. Paul Scherrer Institute (PSI) operates two large research facilities: the Swiss Light Source (SLS) synchrotron and SwissFEL. SLS is undergoing a major, two-year-long upgrade to become a fourth-generation synchrotron with up to two orders of magnitude higher X-ray brightness. All aspects of the instrumentation must be considered to benefit from the accelerator upgrade. One critical challenge for the new facility will be a significant increase in data volumes. While in 2007 cutting-edge instruments produced data on the order of a few hundred megabytes per second, today a single 9 Mpixel JUNGFRAU detector can collect X-ray diffraction images at rates of 2000 images per second, producing 36 GB/s of raw data. Simply saving such a data stream is challenging, especially when multiple such detectors are used. The only feasible solution is to analyze and reduce data on the fly. I will highlight the challenges of operating such demanding scientific instruments. I will then present solutions developed at the Paul Scherrer Institute for sustainable handling of exponential increases in experimental data rates. Specifically, I will show my development, Jungfraujoch. Jungfraujoch is a single high-end server capable of handling and reducing a 36 GB/s stream of detector data using a combination of hardware accelerators (GPUs and FPGAs).
I will present deterministic algorithms I’ve implemented and ported to the accelerator cards to perform feature extraction. I will also show ongoing efforts to implement deep learning algorithms at GB/s data rates. I will finally discuss the importance of Open Research Data initiatives at the Paul Scherrer Institute and the broad MX community. At the end of the presentation, I will share first-hand experience working on two projects. The first project involved using GPUs and PyTorch to find an optimal crystal lattice for a given diffraction image, improving performance over prior CPU implementation by two orders of magnitude. It was done in collaboration with the Swiss Data Science Centre, a specialized institution within the ETH Domain that supports Swiss scientists with machine learning competence. The second project I’m currently realizing is funded by the Swiss Agency of Innovation, and it involves commercializing my Jungfraujoch development by the leading X-ray detector company, DECTRIS.

Biography:

Filip is a staff scientist at the Paul Scherrer Institute, the largest research institute in Switzerland. He works on data acquisition, analysis, and on-the-fly reduction for macromolecular crystallography beamlines at the Swiss Light Source synchrotron and SwissFEL X-ray free electron laser. He manages datasets of hundreds of terabytes, generated at multiple gigabytes per second. He is a principal investigator on a NextGenDCU commercialization grant with the company DECTRIS, funded by the Swiss Agency of Innovation Innosuisse, and participated as a science expert in a collaborative project on data reduction with ML methods (RED-ML) together with the Swiss Data Science Center and the Swiss National Supercomputing Centre. Filip got his PhD from the University of Warsaw for the application of evolutionary algorithms to parameterizing an RNA molecular dynamics force field. He spent two years in Strasbourg on a Mobility Plus scholarship working on RNA structural biology before joining the Paul Scherrer Institute in 2016. Filip is interested in modern hardware for fast data analysis and has experience in development with both FPGAs and GPUs. He received the IBM Champion title for 2021, awarded for sharing his experience with the OpenCAPI memory-coherent interconnect connecting IBM POWER CPUs and FPGAs.

Hubert Baniecki photo

Hubert Baniecki

University of Warsaw

Poster 47: On the Robustness of Global Feature Effect Explanations

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

We study the robustness of global post-hoc explanations for predictive models trained on tabular data. Effects of predictor features in black-box supervised learning are an essential diagnostic tool for model debugging and scientific discovery in applied sciences. However, how vulnerable they are to data and model perturbations remains an open research question. We introduce several theoretical bounds for evaluating the robustness of partial dependence plots and accumulated local effects. Our experimental results with synthetic and real-world datasets quantify the gap between the best and worst-case scenarios of (mis)interpreting machine learning predictions globally. This poster corresponds to a paper (https://arxiv.org/abs/2406.09069) published at ECML PKDD 2024.
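For intuition, a partial dependence curve and a simple data-perturbation probe can be sketched as follows. The toy model and data here are illustrative assumptions only; the paper derives theoretical bounds for real black-box models and also covers accumulated local effects.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tabular data and a known "black-box" model (hypothetical stand-ins).
X = rng.normal(size=(500, 3))
model = lambda X: X[:, 0] ** 2 + 0.5 * X[:, 1]   # ground-truth feature effects

def partial_dependence(model, X, feature, grid):
    """PD curve: average prediction with one feature clamped to each grid value."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd.append(model(Xv).mean())
    return np.array(pd)

grid = np.linspace(-2, 2, 21)
pd_clean = partial_dependence(model, X, feature=0, grid=grid)

# Robustness probe: perturb the data slightly and measure how far the curve moves.
X_noisy = X + 0.05 * rng.normal(size=X.shape)
pd_noisy = partial_dependence(model, X_noisy, feature=0, grid=grid)
shift = np.max(np.abs(pd_clean - pd_noisy))
```

Here the PD curve for feature 0 recovers the quadratic effect, and `shift` quantifies (empirically, in the worst case over the grid) how sensitive the explanation is to the data perturbation, the kind of quantity the paper's bounds control.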

Biography:

Hubert is a 3rd year PhD student in Computer Science at the University of Warsaw, advised by Przemysław Biecek. In spring 2024, he was a visiting researcher at LMU Munich, hosted by Bernd Bischl. Prior, Hubert completed a Master’s degree in Data Science at Warsaw University of Technology. His main research interest is explainable machine learning, with a particular emphasis on the robustness of post-hoc explanations to adversarial attacks. Hubert also supports the development of several open-source packages for building predictive models responsibly, for which he received the 2022 John M. Chambers Statistical Software Award.

Hubert Rybka photo

Hubert Rybka

Jagiellonian University, Faculty of Chemistry

Co-authors:

Tomasz Danel, Sabina Podlewska

Poster 48: PROFIS: Design of structurally-novel drug candidates by probing molecular fingerprint space with RNNs

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

The contemporary landscape of drug discovery is characterized by the increasing complexity of the tasks, the rising cost of research and development, and the demand for faster and more efficient ways to bring innovative therapeutics to market. As a solution to these challenges, computational methods have become more prevalent, with generative ML paving the way to faster and more effective drug discovery in recent years. Before any computational algorithm can process a molecular structure, it needs to be encoded in a way that allows the machine to parse it. Several textual encoding methods have emerged, including SMILES (Simplified Molecular Input Line Entry System), its more recent and ML-suited counterpart DeepSMILES, and SELFIES (Self-referencing Embedded Strings). Another common way to represent molecular structures is to use molecular fingerprints (FPs). These are structural representations of chemical compounds in the form of binary or numerical vectors that capture critical information about a molecule's constituent atoms, bonds, and substructures. In contrast to molecular graphs or textual encodings, FPs have the potential to extract information about biochemically relevant functional groups and present it in a compact, machine-readable format, and have great potential to be used as features for ML-based QSAR (quantitative structure-activity relationship) modeling. In this study, we propose a novel generative model, PROFIS, which allows for the design of target-focused compound libraries by probing continuous fingerprint space with RNNs. PROFIS is an innovative molecular VAE that maps molecular fingerprints into a continuous, low-dimensional space and decodes molecule structures in a sequential notation, ensuring alignment with the initial FP description.
In the task of generating potential novel ligands, PROFIS employs a Bayesian search algorithm in tandem with a QSAR model to traverse the space of embedded molecular FPs and identify subspaces that correspond to potentially good binders. The latent vectors sampled from those subspaces are then decoded into textual formats, such as SMILES or DeepSMILES, using a recurrent neural network. Since many FPs do not determine the full chemical structure, our method can generate diverse molecules that match a particular FP description. The generated structures are target-specific, which allows for generating potential ligands tailored to a specific receptor. We prove that PROFIS exhibits excellent scaffold-hopping capabilities, enabling the exploration of novel chemical space, an essential feature of computational tools for de novo ligand generation. We present the application of our protocol in the task of ligand generation for the dopamine D2R. However, the developed methodology is universal and can be applied to any biological target provided a dataset of known ligands is available. To facilitate the widespread use of PROFIS, we share all the scripts needed to run the developed protocol via GitHub.

Biography:

Hubert Rybka graduated from the Faculty of Chemistry, Jagiellonian University, in 2023 with a Master's degree in Chemistry. He is currently pursuing a PhD in Łukasz Skalniak's group of Bioorganic and Medicinal Chemistry, employing computational methods for modern, data-based drug design. His research interests include ML-assisted molecular design, cheminformatics, and molecular dynamics of biologically relevant systems. When not doing research, he is a rock climber and a friend of small animals.

Jakub Poziemski photo

Jakub Poziemski

Institute of Biochemistry and Biophysics, Polish Academy of Sciences

Co-authors:

Paweł Siedlecki

Poster 49: Application of vision transformers to protein-ligand affinity prediction.

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

The transformer architecture has revolutionized many areas related to AI. It was originally adopted for natural language processing (NLP), but in recent years there has been rapid development of transformer architectures for computer vision (CV) data, the so-called Vision Transformers (ViTs). ViTs are achieving spectacular results in many CV areas, displacing architectures based on convolutional neural networks (CNNs). In this paper, we present a successful application of ViTs to the problem of protein-ligand affinity prediction based on 3D crystallographic complexes. Despite the relatively small dataset and the very complex nature of the problem, the ViT achieves results comparable to the best methods used for this problem. The paper also includes extensive model diagnostics that provide information on important aspects of the input data and its representation.

Biography:

Jakub Poziemski completed his bachelor's and master's degrees in Bioinformatics and Systems Biology at the University of Warsaw, Faculty of Mathematics, Informatics and Mechanics. He is currently a PhD student at the Institute of Biochemistry and Biophysics of the Polish Academy of Sciences (IBB PAS) in the Chemoinformatics and Molecular Modeling Laboratory. His PhD thesis focuses on protein-ligand affinity prediction using artificial intelligence and machine learning methods. He has 8 years of experience in the areas of AI and ML, with expertise in natural language processing (NLP), AI applications in bioinformatics and chemoinformatics, programming in Python, and data analysis and visualization. Jakub has gained experience in both commercial and scientific projects.

Paweł Skierś photo

Paweł Skierś

Warsaw University of Technology

Co-authors:

Kamil Deja

Poster 50: Joint Diffusion models in Continual Learning

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

In this work, we introduce JDCL - a new method for continual learning with generative rehearsal based on joint diffusion models. Neural networks suffer from catastrophic forgetting defined as abrupt loss in the model's performance when retrained with additional data coming from a different distribution. Generative-replay-based continual learning methods try to mitigate this issue by retraining a model with a combination of new and rehearsal data sampled from a generative model. In this work, we propose to extend this idea by combining a continually trained classifier with a diffusion-based generative model into a single -- jointly optimized neural network. We show that such shared parametrization, combined with the knowledge distillation technique allows for stable adaptation to new tasks without catastrophic forgetting. We evaluate our approach on several benchmarks, where it outperforms recent state-of-the-art generative replay techniques. Additionally, we extend our method to the semi-supervised continual learning setup, where it outperforms competing buffer-based replay techniques, and evaluate, in a self-supervised manner, the quality of trained representations.

Biography:

Paweł Skierś is a student at Warsaw University of Technology with a passion for artificial intelligence and machine learning. For the past 3 years, he has been a member of the Artificial Intelligence Society Golem, where he is continuously expanding his knowledge of his subjects of interest. In his free time, Paweł enjoys playing chess and bridge, as well as reading about history.

Maksymilian Wojnar photo

Maksymilian Wojnar

AGH University of Krakow

Poster 51: Generative neural networks for fast and accurate Zero Degree Calorimeter simulation

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

The integration of generative neural networks into high-energy physics simulations is rapidly transforming the field, offering unprecedented efficiency and accuracy. A prominent application is the simulation of the Zero Degree Calorimeter (ZDC) in the ALICE experiment at CERN. Traditionally, these simulations have relied on Monte Carlo methods, which, while highly accurate, are computationally intensive and time-consuming. By employing generative networks as surrogate models, we achieve a significant reduction in computational burden while maintaining high accuracy. In this work, we utilize the latest advancements in generative neural networks, specifically focusing on diffusion models and models based on vector quantization, to simulate the ZDC neutron detector. These state-of-the-art architectures enable the generation of high-fidelity data that closely mirrors real experimental results. We explore and compare the performance of the generative frameworks against established simulation methods. Our findings underscore the effectiveness of generative neural networks in providing fast yet accurate simulations, making them a valuable tool in the high-energy physics community.

Biography:

Maksymilian Wojnar is a researcher specializing in computer science and wireless networks, holding a master’s degree from AGH University of Krakow. His work focuses on optimizing wireless networks and machine learning, including reinforcement learning and generative neural networks. Maksymilian has authored several papers on these topics and has been involved in notable research grants, including "ML4WiFi" and "MLDR," which advance machine learning-driven approaches in wireless communications.

Adam Kania photo

Adam Kania

Jagiellonian University

Co-authors:

Marko Mihajlovic, Sergey Prokudin, Jacek Tabor, Przemysław Spurek

Poster 52: FreSh: Frequency Shifting for Accelerated Neural Representation Learning

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs). However, MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately. This limitation is typically addressed by incorporating high-frequency input embeddings or specialized activation layers. In this work, we demonstrate that these embeddings and activations are often configured with hyperparameters that perform well on average but are suboptimal for specific input signals under consideration, necessitating a costly grid search to identify optimal settings. Our key observation is that the initial frequency spectrum of an untrained model's output correlates strongly with the model's eventual performance on a given target signal. Leveraging this insight, we propose frequency shifting (or FreSh), a method that selects embedding hyperparameters to align the frequency spectrum of the model’s initial output with that of the target signal. We show that this simple initialization technique improves performance across various neural representation methods and tasks, achieving results comparable to extensive hyperparameter sweeps but with only marginal computational overhead compared to training a single model with default hyperparameters.
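The spectrum-matching selection step can be sketched with a toy 1-D signal. The random-Fourier-feature "untrained model" and the L1 distance between normalized magnitude spectra are stand-in assumptions; FreSh's actual embeddings and discrepancy measure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 256)
target = np.sin(2 * np.pi * 20 * x)              # a high-frequency target signal

def spectrum(sig):
    """Normalized magnitude spectrum of a 1-D signal."""
    mag = np.abs(np.fft.rfft(sig))
    return mag / mag.sum()

def untrained_output(scale):
    """Output of an untrained random-Fourier-feature model at a given
    embedding frequency scale (a stand-in for e.g. a positional encoding)."""
    B = scale * rng.normal(size=(64, 1))
    feats = np.concatenate([np.sin(B * x), np.cos(B * x)])  # (128, 256)
    w = rng.normal(size=128) / np.sqrt(128)
    return w @ feats

# FreSh-style selection: pick the scale whose *initial* output spectrum
# best matches the target's spectrum, instead of grid-searching by training.
scales = [1.0, 10.0, 50.0]
dists = [np.abs(spectrum(untrained_output(s)) - spectrum(target)).sum()
         for s in scales]
best = scales[int(np.argmin(dists))]
```

Because only untrained forward passes are compared, the selection costs a negligible fraction of a single training run, which is the source of the "marginal computational overhead" claimed above.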

Biography:

Adam Kania is a machine learning researcher passionate about mathematics.

Emilia Kacprzak photo

Emilia Kacprzak

SentiOne

Co-authors:

Agnieszka Pluwak

Poster 53: Exploring the potential of prompting strategies in TOD creation: humans vs machines in the challenges of dialog generating, annotating and defining text style

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

LLMs show great potential for application in the task of TOD dialog corpora generation, but not many works have explored this potential so far. The SOTA prompting strategies applied so far seem quite laborious and complicated (one prompt per dialog act or turn). Available works frequently describe the dialog generation process as successful, but the annotation one as failed. On the other hand, good-quality available human-created language resources (templates, scenarios, slots, intents, annotation methods, instructions) rarely get used together with generative methods to obtain new, good-quality datasets. Therefore, we formulated a challenging research task: if an LLM were given the same instructions, scenarios and information as language experts, would the quality of the outcome be sufficient for training NLU and DST models for a real-life helpline? In our work we used one-shot prompting of GPT-4o to generate a dataset of 300 task-oriented dialogues in the banking domain in the English language. The generation process involved a different language style for the agent (polite, quite formal) and the customer (colloquial), with a list of frequently occurring UGC phenomena (correction, typos, hesitation, etc.) and a list for spoken and written subsets of dialogs. We manually evaluated the quality of the dialogs and the correctness of the UGC phenomena listed by the model. Finally, we created another few-shot, in-context prompt for dialog annotation using our own DST tagset involving difficult phenomena, such as implied intents, inspired by log observation of a real-life banking helpline. We arrive at several conclusions. The quality of the generated dialogs is comparable with dialogs created by language experts. The result of the manual quality check was that 97% of dialogs were suitable for training, and some DST and UGC phenomena are even addressed with greater variety. There are, however, quite a few hallucinations.
Among over 20 classes of UGC phenomena, we can rate only 5 as “understood” (correctness above 80%, high representation, e.g. colloquial language, small letters), 5 as “not understood” or hallucinated (correctness below 1%, single representation in the corpus), e.g. corrections and mispronunciation, and some as avoided phenomena: vulgarisms (98% of omissions), onomatopoeia, emoticons (75%), popular abbreviations, and requests for a pause (around 50%). We found the annotation quality of NLU tags good enough for tasks such as Intent Detection (about 90% of correct tags). However, other tags, like DST-related classes and discourse markers (most classes with about 40-80% of correct tags) or experimental tags (80%), were overall of insufficient quality, with some tags performing well and some below 70%. The method is applicable to other domains and languages and shortens the time of dataset creation to a few hours or days if dialog templates are provided. This was further verified with an Intent Detection model with BERT embeddings, where GPT-4o-generated data used for training performed reasonably well when tested on two datasets created by language experts (F1: 0.78; 0.79), in comparison to training on those datasets and testing on the other sets (F1 for the language-expert-created sets: 0.87; 0.93; the GPT-4o-created set: 0.85; 0.82). Its performance was also better than that of the MultiDoGo corpora, which showed lower results across the board when used both as a training set (F1: 0.61; 0.58; 0.65) and a testing set (F1: 0.37; 0.46; 0.48).

Biography:

Emilia Kacprzak is a research data scientist focused on Natural Language Processing including dataset creation and conversational machine learning. Her research interests are eXplainable AI (XAI) from the NLP perspective, search log analysis and dataset search.

Jakub Piwko photo

Jakub Piwko

Warsaw University of Technology

Co-authors:

Dawid Płudowski, Antoni Zajko, Jędrzej Ruciński, Franciszek Filipek, Anna Kozak, Katarzyna Woźnica

Poster 54: Can clustering improve the performance of classifiers? Introduction of a new ensemble technique utilizing cluster analysis methods in classification tasks.

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Nowadays, the landscape of machine learning is rich with diverse solutions designed to tackle various challenges. While numerous solutions have been developed, significant performance improvements are often achieved through ensemble methods. Committees, however, treat the entire dataset uniformly, without accounting for the fact that datasets usually contain highly diverse and varied subsets. The question arises whether a successful method of grouping similar elements into clusters can improve the classification process by ensuring a more tailored approach. This is why we introduce a new ensemble classification framework, which implements iterative model training. In this approach, we focus on finding a sequence of simple models that cover different clusters of data in which they are particularly accurate. By generating a list of trained models and masks of incorrect predictions, we can optimally assign new data in an ensemble manner. This approach may constitute a new method that contributes to the advancement of classification tasks with complex datasets. Introducing smart clustering into classification can bring significant benefits, such as a better understanding of data structure, reduction of noise, and more accurate model adaptation to the specifics of analyzed datasets.
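The cluster-then-specialize idea can be illustrated with a toy sketch. The two-blob data, the minimal k-means, and the per-cluster threshold rule are all hypothetical simplifications of the framework described above, chosen so that a single global rule fails while cluster-local rules succeed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated blobs with OPPOSITE label rules, so no
# single threshold on x2 can classify both blobs correctly.
X1 = rng.normal(loc=(-3.0, 0.0), size=(100, 2))
X2 = rng.normal(loc=(3.0, 0.0), size=(100, 2))
y = np.concatenate([(X1[:, 1] > 0), (X2[:, 1] <= 0)]).astype(int)
X = np.vstack([X1, X2])

def kmeans(X, k=2, iters=20):
    """Minimal k-means; centroids start at the extreme points on x1."""
    C = np.stack([X[np.argmin(X[:, 0])], X[np.argmax(X[:, 0])]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C) ** 2).sum(-1), axis=1)
        C = np.stack([X[labels == j].mean(0) for j in range(k)])
    return C

C = kmeans(X)
assign = lambda Z: np.argmin(((Z[:, None, :] - C) ** 2).sum(-1), axis=1)

# One simple model per cluster: a fixed threshold rule on x2, flipped
# wherever it fits that cluster's data badly (the "mask" of errors).
flips = []
for j in range(2):
    mask = assign(X) == j
    rule = (X[mask, 1] > 0).astype(int)
    flips.append((rule == y[mask]).mean() < 0.5)

def predict(Z):
    base = (Z[:, 1] > 0).astype(int)
    return np.array([1 - b if flips[c] else b
                     for b, c in zip(base, assign(Z))])

acc = (predict(X) == y).mean()
```

Routing each point to the simple model that is accurate on its cluster recovers near-perfect accuracy here, whereas any single threshold rule would sit near 50%; this is the tailored-assignment benefit the abstract argues for.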

Biography:

Jakub Piwko is pursuing a Master's Degree in Data Science at WUT. His primary interests lie in traditional Machine Learning, Data Visualization, and Explainable AI (XAI). He is actively involved in a scientific circle for mathematical modeling, where he and his colleagues have initiated a project aimed at improving classification techniques.

Florian Bürger photo

Florian Bürger

University of Cologne

Co-authors:

Adrián E. Granada, Katarzyna Bozek

Poster 55: Advanced Multi-Object Tracking and Classification of Cancer Cells with Transient Fluorescent Signals

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Monitoring cells in time-lapse videos is an essential technique in biomedical research, facilitating an in-depth examination of cellular activities such as cell division and cell death. This method is particularly beneficial for single-cell analysis, especially in exploring how cancer cells respond to therapeutic drugs, as it yields critical information regarding the efficacy of these treatments. Traditional approaches to cell tracking often rely on classical multi-object tracking (MOT) paradigms, where cells are first detected and then matched across frames. However, unlike conventional MOT challenges, which primarily address occlusion and changes in object appearance, cell tracking introduces unique challenges. This includes the detection of specific biological events, such as cell division or cell death. Furthermore, the tracking of entire cell lineages, even after division, is also essential. For over a decade, state-of-the-art methods have been benchmarked using the Cell Tracking Challenge (CTC), which focuses on 2D and 3D time-lapse microscopy videos. However, the datasets used in the CTC primarily emphasize the event of cell division, neglecting the critical event of cell death, which is crucial in single-cell analysis and understanding treatment effectiveness. In this study, we utilize a dataset comprising videos captured with a long-term, high-temporal resolution microscope. Unlike standard approaches that track cells based on a single constant fluorescent signal, our dataset employs three different fluorescent markers, resulting in transient signals that complicate the tracking process. Moreover, our dataset includes sequences with both cell division and cell death, significantly increasing the challenge compared to conventional CTC sequences. To address these challenges, we propose a deep-learning-based tracking method that simultaneously considers both cell division and cell death events. 
Our approach first detects cells using a transformer encoder-decoder architecture, which not only identifies cells in each frame but also classifies them as either living or dead. Following detection, we employ a multi-stage association method to match cells across successive frames. This method incorporates both high-confidence and low-confidence detections, allowing for the tracking of cells that are barely visible due to transient signals, as well as the recovery of lost detections. Our method demonstrates robust performance, achieving an average Multiple Object Tracking Accuracy (MOTA) of 75% on the test set, despite the unprecedented challenges posed by transient signals and the inclusion of both cell division and cell death events. Additionally, we successfully track more than 90% of the cells for over 80% of their lifespan. These results highlight the novelty and effectiveness of our approach in tackling complex tracking conditions, setting a new benchmark for future research in single-cell analysis.
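The two-stage, confidence-aware association step can be sketched as greedy IoU matching: high-confidence detections are matched to tracks first, then low-confidence (faint, transient-signal) detections are offered to the still-unmatched tracks. The function names, thresholds, and box format below are illustrative assumptions, not the authors' implementation.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracks, detections, conf_hi=0.6, iou_thr=0.3):
    """Two-stage greedy matching: high-confidence detections first,
    then low-confidence ones for the still-unmatched tracks."""
    matches, unmatched = {}, set(range(len(tracks)))
    for stage in (lambda d: d["score"] >= conf_hi,
                  lambda d: d["score"] < conf_hi):
        for di, det in enumerate(detections):
            if not stage(det) or di in matches.values():
                continue
            best, best_iou = None, iou_thr
            for ti in unmatched:
                v = iou(tracks[ti], det["box"])
                if v > best_iou:
                    best, best_iou = ti, v
            if best is not None:
                matches[best] = di
                unmatched.discard(best)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [
    {"box": (1, 1, 11, 11), "score": 0.9},    # confident detection near track 0
    {"box": (21, 21, 31, 31), "score": 0.3},  # faint cell near track 1
]
m = associate(tracks, detections)
```

The faint detection would be discarded by a single confidence cutoff; admitting it in the second stage is what lets tracks survive periods when the fluorescent signal nearly vanishes.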

Biography:

Since 2023, Florian Bürger has been a PhD student in Computer Science at the Bozek Lab at the University of Cologne. He obtained his master's and bachelor's degrees in Computer Science at the University of Paderborn.

Łukasz Staniszewski photo

Łukasz Staniszewski

Warsaw University of Technology

Co-authors:

Bartosz Cywiński*, Kamil Deja, Adam Dziedzic, Franziska Boenisch

Poster 56: Ready, aim, edit! 🎯 Precise Parameter Localization for Text Editing with Diffusion Models

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Novel diffusion models (DMs) can synthesize photo-realistic images with integrated high-quality text. In this work, we demonstrate through attention activation patching that less than 0.5% of DMs' parameters influence the text generation within the images. In contrast to prior work, our localization approach is broadly applicable across various diffusion model architectures, including both U-Net and Transformer-based, utilizing diverse text encoders. Building on this observation, by precisely targeting specific parameters of the model, we improve the efficiency and performance of existing image-editing methods, which often inadvertently modify not only the text but also the other visual elements within an image. Furthermore, we demonstrate that fine-tuning solely the localized parameters enhances the general text-generation capabilities of large diffusion models, providing a more efficient fine-tuning approach.

Biography:

Łukasz Staniszewski is a graduate student researcher at the Computer Vision Lab at the Warsaw University of Technology. He completed his bachelor's degree with honors, and his work on a novel object detection architecture earned him the Best Engineering Thesis of 2024 award from the 4Science Institute. Łukasz's experience involves research on Large Language Models at Samsung R&D Institute and a research internship on Diffusion Models at the SprintML lab in CISPA, Germany. Currently, he is involved in several projects focused on Image Generation tasks, with plans to continue his research career in this field through PhD studies.

Ignacy Stępka photo

Ignacy Stępka

Carnegie Mellon University, Poznan University of Technology

Co-authors:

Nicholas Gisolfi, Artur Dubrawski

Poster 57: Adaptive fill-in: how to mitigate the loss of an agent in decentralized federated learning

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

In decentralized learning, agents collaborate by training models on their local data while regularizing based on information from their neighboring agents, aiming to achieve a common model and maximize overall performance. However, the permanent loss of an agent, especially one with unique knowledge about the data distribution (non-iid), can significantly degrade system performance. To address this issue, we introduce a model-inversion technique as an adaptive fill-in strategy for agent reconstruction. This method reconstructs data points similar to those used by the lost agent during training and utilizes them to create and deploy a new agent, effectively restoring system performance and maintaining the optimization process. We demonstrate the effectiveness of this approach across various data distribution scenarios, including non-overlapping data distributions, distinct class assignments, and uniform distributions. Via experimental analysis, we show that our adaptive patching method not only recovers performance after a persistent agent failure but also accelerates convergence compared to other baseline approaches.

Biography:

Ignacy Stepka is a fourth-year Artificial Intelligence student at Poznan University of Technology. His research experience includes work at the Robotics Institute of Carnegie Mellon University, where he has contributed to a project on the resilience of decentralized learning algorithms in adverse scenarios, funded by the U.S. Army. He has also developed formal verification approaches for Bayesian Networks in critical care trauma delivery under a DARPA initiative. At Poznan University of Technology, Ignacy's research focuses on robust counterfactual explanations. Previously, he worked on methods utilizing multi-criteria analysis for generating counterfactual explanations, and more recently, he has developed a statistical framework to ensure their robustness against model shifts. In addition to his research, Ignacy has gained significant professional experience over three years at the Poznan Supercomputing and Networking Center, where he has contributed to EU HORIZON-funded projects. His work includes predictive maintenance for Volkswagen assembly lines, explainable AI analyses for air traffic management, and anomaly detection in large HPC clusters. Ignacy is also actively involved in the academic community, having led seminar sessions on Machine Unlearning and Explainable AI as part of his university's student research group, GHOST.

Mikołaj Zieliński photo

Mikołaj Zieliński

Commonwealth Scientific and Industrial Research Organisation, Poznan University of Technology

Co-authors:

Dominik Belter, Peyman Moghadam

Poster 58: Smart sampling for object removal operations in Neural Radiance Fields

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Neural Radiance Fields (NeRF) have emerged as a powerful tool for generating immersive space representations. However, once trained, modifying these representations poses significant challenges due to the implicit storage of scene information within the weights of these coordinate-based neural networks. Existing approaches can be categorized into two main types: methods that modify the input dataset using inpainting techniques and methods that manipulate the density and sampling functions. The first category often involves time-consuming retraining, as edits cannot be applied to an already trained model without starting the training process anew. In contrast, the second category enables real-time editing without model retraining, providing significant flexibility for on-the-fly adjustments. However, object removal often results in distortions and artifacts in the scene behind the removed object. These distortions arise from naïve object removal techniques that suppress unwanted density function values, resulting in undersampled regions that should be reconstructed. Although the network properly encodes the knowledge of these regions, their reconstruction is impaired due to insufficient sampling. To address these issues, we propose a novel sampling technique that accounts for spatial regions containing the object to be removed and avoids sampling from these areas. Our approach focuses on sampling from regions underrepresented by existing methods, resulting in enhanced sampling of the regions behind the removed object. This technique mitigates distortion issues and improves the quality of rendered novel views. Additionally, our method reduces the number of samples required for successful rendering.
Unlike other approaches, we demonstrate that our sampling strategy enables precise reconstruction of scene geometry, provided the network has seen the reconstructed regions from different angles during training. In cases where this is not possible, the network may exhibit hallucinations. However, it can still interpolate to approximate the geometry of the unseen regions.

Biography:

Mikołaj Zieliński is a PhD student at Poznań University of Technology and an intern at the Commonwealth Scientific and Industrial Research Organisation (CSIRO). He completed a Master's degree in Automation and Robotics, with research focusing on neural space representations. His work is dedicated to developing advanced representations to improve how robots manipulate objects and navigate their environments. Outside of his academic and professional pursuits, he enjoys machining, drinking tea and travelling; his hobbies often influence his approach to both research and everyday life.

Piotr Stefański photo

Piotr Stefański

University of Economics in Katowice

Poster 59: Improved Scene Classification in Dynamic Combat Sports by Video Frame Segmentation

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

The current literature on image classification focuses on images in which the informative objects occupy a large part of the frame. The problem of classifying images whose significant objects cover only a few percent of the total area is largely ignored. This paper addresses the classification of single video frames in which the significant, informative objects cover less than 1.5% of the registered scene. Original video frames were used for the baseline approach; in addition, an image segmentation algorithm was proposed that improved balanced accuracy by 35 percentage points over the baseline.

Cameras generate increasing amounts of data that cannot be analyzed manually, so solutions are needed that automatically provide valuable information about the recorded scene. This problem is addressed in the field of Computer Vision, where researchers have developed several algorithms for classifying the recorded scene. The paper proposes an approach that segments an image before classification by subtracting the n-th earliest frame recorded by the camera. Experiments showed that this approach improves balanced accuracy by 35 percentage points over the baseline trained on the original video frames.

As part of the experiments, a binary classification task was set up, and the classifier was trained to assign a single video frame to the "punch" or "no punch" class. The database contained 11,345 examples of the "punch" class and 100,614 examples of "no punch." The proposed algorithm was tested for n = {1, 2, 3, 5, 8, 13, 21, 34}; to evaluate its impact, a classifier was also trained on the original images (the 0_original baseline approach). To train the classifiers, a custom convolutional neural network was used, with fewer convolutional layers and parameters than, for example, ResNet50, to speed up training. To statistically validate the results, the classifiers were trained and evaluated 30 times. In addition to the 0_original baseline, three other algorithms from the literature were tested and evaluated: background subtraction based on K-nearest neighbours, background subtraction based on a Gaussian mixture, and background subtraction based on the convolutional BSUV-Net algorithm.
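The core segmentation idea described above, subtracting the n-th earliest frame from the current one, can be sketched in a few lines. This is illustrative only (function names and the threshold are hypothetical, not the author's implementation):

```python
import numpy as np

# Hypothetical sketch of frame-differencing segmentation: highlight motion by
# subtracting the n-th earliest frame from the current frame and thresholding.

def segment_by_frame_subtraction(frames, n, threshold=25):
    """frames: list of grayscale frames (2D uint8 arrays).
    Returns binary masks for frames with index >= n."""
    masks = []
    for t in range(n, len(frames)):
        # widen dtype before subtracting to avoid uint8 wrap-around
        diff = np.abs(frames[t].astype(np.int16) - frames[t - n].astype(np.int16))
        masks.append((diff > threshold).astype(np.uint8))
    return masks

# Toy example: a bright square appears between two otherwise identical frames
f0 = np.zeros((64, 64), dtype=np.uint8)
f1 = f0.copy()
f1[10:20, 10:20] = 255
masks = segment_by_frame_subtraction([f0, f1], n=1)
print(masks[0].sum())  # 100 changed pixels
```

The comparison methods listed above (KNN- and Gaussian-mixture-based background subtraction) are available in OpenCV as `createBackgroundSubtractorKNN` and `createBackgroundSubtractorMOG2`.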

Biography:

Piotr Stefański is a graduate in Computer Science of the University of Economics in Katowice, where he received a master's degree with a very good grade. From the beginning of his career, he has combined learning with practice, working as a programmer and then as a team leader. He devoted his bachelor's thesis to the development of a tool for automatic verification of data from photos of ID cards, which was implemented in business. After graduation, Piotr began his research career as an assistant at the University, while preparing a doctorate in technical informatics and telecommunications at the Wroclaw University of Technology. He initiated cooperation between the University and industry, which resulted, among other things, in the development of an algorithm for gambling addiction detection and a publication delivered at an international conference. His research focuses on image processing and neural networks, which has led him to head a research club and participate in projects applying vision technologies. In addition to his academic work, he is involved in the community as a volunteer firefighter, developing a support system for rescue operations using drones, image processing and machine learning algorithms.

Gracjan Góral photo

Gracjan Góral

University of Warsaw, IDEAS NCBR, IMPAN

Co-authors:

Alicja Ziarko, Michał Nauman, Maciej Wołczyk

Poster 60: Seeing Through Their Eyes: Evaluating Visual Perspective Taking in Vision Language Models

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Visual perspective-taking (VPT) is the ability to understand another person's viewpoint, allowing individuals to predict the actions of others. For example, a driver can avoid accidents by considering what pedestrians see. Humans generally develop this skill in early childhood, but it is unclear whether recently developed Vision Language Models (VLMs) have this ability. As these models are increasingly used in real-world applications, it is important to understand how they perform on complex tasks like VPT. In this paper, we introduce two manually curated datasets, called "Lego" and "Dots," to test VPT skills, and we use these datasets to evaluate 12 commonly used VLMs. We observe a significant drop in performance across all models when perspective-taking is required. Furthermore, we find that performance in object detection tasks does not strongly correlate with performance on VPT tasks, indicating that existing benchmarks may not be adequate for understanding this problem.

Biography:

Knows everything... except the language. Former math student, currently 'wrestling' with language models (though it's unclear who's winning). PhD candidate at the University of Warsaw, under the watchful eye of IDEAS NCBR.

Aleksandra Dagil photo

Aleksandra Dagil

University College London (UCL) and True North Partners

Poster 61: Probabilistic Time Series Forecasting Transformer Model: comparative analysis with statistical ARIMA method for short-term wind power prediction

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Despite the widespread use of transformer architectures in Natural Language Processing and Computer Vision, they have not yet become the state-of-the-art in wind-power time series forecasting, where statistical methods remain more popular. This research aims to bridge that gap by comparing the accuracy of wind power predictions using two approaches: a novel Probabilistic Time Series Forecasting Transformer Model and the traditional AutoRegressive Integrated Moving Average (ARIMA) method. By analyzing 134 time series from the SDWPF dataset and using three metrics—Mean Absolute Percentage Error (MAPE), Symmetric Mean Absolute Percentage Error (sMAPE), and Mean Arctangent Absolute Percentage Error (MAAPE)—I demonstrate that the transformer model consistently outperforms ARIMA. The transformer model shows higher accuracy across all metrics for most time series and exhibits less variation in performance between different time series. These findings suggest that transformer models have significant potential for broader adoption in very short-term and short-term wind power forecasting.
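The three accuracy measures named above have standard textbook forms; a sketch of each (the study's exact implementation details are not given here):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((a - f) / a)) * 100

def smape(actual, forecast):
    """Symmetric MAPE, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(2 * np.abs(f - a) / (np.abs(a) + np.abs(f))) * 100

def maape(actual, forecast):
    """Mean Arctangent Absolute Percentage Error; the arctangent keeps the
    error bounded even when actual values approach zero, which is common
    for wind power output."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.arctan(np.abs((a - f) / a)))

actual = [100.0, 200.0, 50.0]
forecast = [110.0, 190.0, 55.0]
print(round(mape(actual, forecast), 2))  # 8.33
```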

Biography:

Aleksandra graduated first-class from the MSc in Data Science at UCL and holds a BA in Economics from the University of Oxford (specializing in Econometrics), where she obtained the Award of Undergraduate Exhibition and the Junior Scholarship for outstanding academic work. She is also an alumna of the Bona Fide scholarship awarded by Fundacja Orlen. Her research experience is in time series forecasting, gained at the Bank of England, comparing methodologies for inflation forecasts, and at UCL, where she developed wind-power predictions. She has also fine-tuned BERT and GPT-3 models for a classification task (fake news detection). Aleksandra now works as a Data Scientist at True North Partners, a London-based financial consultancy, deploying statistical and ML methods for credit risk modelling and Anti-Money Laundering applications.

Wojciech Zarzecki photo

Wojciech Zarzecki

Computational Medicine Group, MIMUW

Co-authors:

Paulina Szymczak, Roberto Olayo, Krzysztof Koras, Marcin Możejko, Małgorzata Łazęcka, Krzysztof Oksza Orzechowski, Ewa Szczurek

Poster 62: BATTLE-AMP - Benchmarking Assessment Tests for The Leading Efficacy of Antimicrobial Peptides

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

Antimicrobial peptides (AMPs) have emerged as a promising alternative to combat the growing threat of antibiotic-resistant bacteria. However, they have not been widely adopted in the clinic due to their high toxicity and low activity. Therefore, discriminative methods are crucial to select AMP candidates with desired properties. The comparison of predictive models in AMP discovery is challenging and lacks consistency due to the absence of standardized benchmarks. New methods are evaluated on custom datasets that are often not related to key AMP properties such as activity and are typically released in a manner that is difficult to reproduce. In this work, we reviewed over 50 methods for AMP prediction and observed that code reproducibility was a significant challenge, with only 10 methods meeting our criteria for reproducibility. To address this fundamental problem, we propose an extendable framework for the systematic comparison of AMP prediction methods. We evaluated the robustness of these methods in key biological contexts, such as activity against specific species or adversarial syntactic variations. Our framework ensures higher reproducibility and plug-and-play assessment of new models, aiming to redefine AMP classification in a way that aligns more closely with the biological context.

Biography:

Wojciech Zarzecki is a Computer Science student at the Warsaw University of Technology. He is also a member of the Computational Medicine Group at MIMUW led by Prof. Ewa Szczurek. His main interests are the application of deep learning in antimicrobial peptide discovery and computer vision.

Mateusz Piechocki photo

Mateusz Piechocki

Poznan University of Technology

Co-authors:

Marek Kraft

Poster 63: Enhancing Solar Irradiance Forecasts with On-Device Continual Learning

Saturday / 9 November 10:30 - 12:00 (Poster Session 2)

Abstract:

With the increasing contribution of solar energy to the overall renewable energy system, accurate solar irradiance forecasting is crucial for optimizing energy production and managing grid stability. Traditional forecasting approaches rely on static and centralized algorithms that often struggle to adapt to local conditions and rapidly changing atmospheric phenomena. These limitations can lead to less stable and reliable forecasts, potentially undermining the consistency and efficiency of solar energy systems. Hence, we propose a novel approach to maintaining high solar irradiance forecasting accuracy based on on-device continual learning. The presented solution leverages incoming data and an incremental learning strategy to continuously refine its forecasting skills directly on-site in a decentralized way, without constant communication with a central server. By combining new, relevant data samples with historical training data, our on-device continual learning pipeline can rapidly adjust to evolving environmental conditions, ensuring that the forecasting model remains accurate and responsive to local atmospheric changes. The developed pipeline improves the forecast accuracy of the deployed model, protects against catastrophic forgetting, and keeps the process energy-efficient, relying only on the constrained resources of edge devices. In this study, we present a comprehensive evaluation of our method across various deployment scenarios, demonstrating significant improvements in the precision and reliability of solar irradiance forecasts. Our results highlight the potential of on-device continual learning to advance solar irradiance forecasting, providing a scalable and adaptive solution to enhance energy management and facilitate more effective grid integration in the renewable energy sector.

Biography:

Mateusz has been affiliated with Poznan University of Technology (PUT) since 2021. In 2021, he graduated from the same University with an MSc degree in Automatic Control and Robotics (Robots and Autonomous Systems specialization). Currently, he is a PhD student in the PUT Vision Laboratory at the Institute of Robotics and Machine Intelligence. His research interests include various topics related to machine and deep learning, computer vision, or robotics, focusing on real-time processing and edge computing.

/ Student Research Workshop (SRW) Talks

Pawel Knap photo

Pawel Knap

University of Southampton / University of Freiburg

Co-authors:

Peter Hardy, Alberto Tamajo, Hwasup Lim, Hansung Kim

SRW Talk 1: Real-Time Omnidirectional 3D Multi-Person Human Pose Estimation with Occlusion Handling

Thursday / 7 November 8:10 - 8:30 Main Hall (Student Research Workshop)

Abstract:

Current human pose estimation systems primarily focus on obtaining an accurate 3D global estimate of a single person. Our work introduces one of the first real-time 3D multi-person human pose estimation systems capable of handling basic forms of occlusion. First, we adapt an off-the-shelf 2D detector and an unsupervised 2D-3D lifting model for use with 360° panoramic camera and mmWave radar sensors. We then make several key contributions, including camera and radar calibrations and improved matching of individuals between image and radar spaces. Our system addresses the depth and scale ambiguity problems by utilizing a lightweight 2D-3D pose lifting algorithm that operates in real-time with high accuracy in both indoor and outdoor environments, providing an affordable and scalable solution. Notably, the system maintains nearly constant time complexity regardless of the number of detected individuals, achieving a frame rate of approximately 7-8 fps on a laptop with a commercial-grade GPU. My presentation and poster will feature material from our initial paper, which proposed the system and was presented at the ACM SIGGRAPH European Conference on Visual Media Production in December 2023. I will also cover our second paper, which details system improvements and was presented in January 2024 at the International Conference on Electronics, Information, and Communication. Additionally, I will reference my latest article, "Human Modelling and Pose Estimation Overview," available on arXiv. All these resources can be found on my webpage: pawelknap.github.io.

Biography:

Pawel Knap is a PhD student at the University of Freiburg, Germany. He holds both an MEng and a BEng in Electronic Engineering with Artificial Intelligence from the University of Southampton. Pawel is the first author of several papers on human pose estimation. His current academic focus is on computer vision, with a particular emphasis on medical/neuroscience image data analysis. Additionally, he has published work on reinforcement learning and has contributed to real-time object detection and satellite image segmentation projects in the private sector.

Paulina Kaczyńska photo

Paulina Kaczyńska

University of Warsaw, MIM Faculty / IPPT PAN

SRW Talk 2: Accumulated Local Effects and Graph Neural Networks

Thursday / 7 November 8:30 - 8:50 Main Hall (Student Research Workshop)

Abstract:

I explore how Accumulated Local Effects (ALE), a model-agnostic explanation method, can be tailored to visualize the impact of node features in link prediction tasks in Graph Neural Networks (GNN). A key challenge addressed in this work is that the complex interactions of nodes during message passing within GNN layers complicate the direct application of ALE. Since the straightforward solution of modifying only one node at a time would substantially increase computation time, I propose an approximate method that mitigates this challenge. The findings reveal that although the approximate method offers computational efficiency, the exact method yields more stable explanations, particularly when smaller data subsets are used. However, when a large number of data points are employed to estimate the ALE profile, the differences between the two methods diminish. Additionally, I discuss how different parameters influence the accuracy of ALE estimation for both methods.
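For readers unfamiliar with ALE, a generic first-order ALE profile for one feature of a tabular model can be sketched as follows: bin the feature, average prediction differences at the bin edges, and accumulate. This sketch is illustrative only; adapting it to message-passing GNNs is precisely the talk's contribution, and all names here are hypothetical.

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """First-order ALE profile of one feature for a tabular model."""
    x = X[:, feature]
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (x >= lo) & (x <= hi)
        if not in_bin.any():
            effects.append(0.0)
            continue
        # move points to the bin edges, keeping all other features fixed
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature] = lo
        X_hi[:, feature] = hi
        effects.append(np.mean(predict(X_hi) - predict(X_lo)))
    ale = np.cumsum(effects)        # accumulate the local effects
    return edges, ale - ale.mean()  # center the profile

# For a linear model f(x) = 3*x0 + x1, the ALE profile of feature 0 is
# monotonically increasing (slope ~3).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
edges, ale = ale_1d(lambda X: 3 * X[:, 0] + X[:, 1], X, feature=0)
```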

Biography:

Paulina Kaczyńska obtained a Bachelor's degree in Physics and a Bachelor's degree in Cognitive Science from the College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences (MISMaP) at the University of Warsaw. She then completed a Master's degree in Machine Learning at the University of Warsaw, focusing for her thesis on visualizing node features with Accumulated Local Effects in Graph Neural Networks. Paulina is now starting her PhD at the Institute of Fundamental Technological Research of the Polish Academy of Sciences (IPPT PAN) under the supervision of Prof. Tomasz Lipniacki; her PhD concerns modelling regulatory networks with single-cell resolution using machine learning methods. Her research interests centre on modelling social and natural phenomena with Machine Learning tools. She participated in the MAIR project at MI2 DataLab, which described the qualitative and quantitative impact of Big Techs on Artificial Intelligence research. In collaboration with the Computational Psychiatry and Phenomenology group at IDEAS NCBR, she analysed the cognitive biases of therapeutic chatbots.

Michal Wilinski photo

Michal Wilinski

Carnegie Mellon University / Poznan University of Technology

Co-authors:

Mononito Goswami, Nina Zukowska, Willa Potosnak, Chi-En Teh, Artur Dubrawski

SRW Talk 3: Interrogating Time Series Foundation Models

Thursday / 7 November 8:50 - 9:10 Main Hall (Student Research Workshop)

Abstract:

Time series foundation models have shown significant adaptability across various tasks and domains, offering easy-to-use, powerful tools for analyzing and predicting temporal data in applications from finance to healthcare. However, their large scale, in terms of both parameters and training data, poses challenges in evaluating their limitations and determining appropriate use cases. The rapid evolution of this field raises questions about the optimal architecture and training data composition, as well as the mechanisms employed for time series processing. Furthermore, to responsibly deploy these models in real-world scenarios, it is crucial to understand their limitations and potential failure points, enabling stakeholders to make informed decisions. To address these knowledge gaps, we introduce Time Series Interrogator, a novel framework for analyzing time series foundation models, leveraging representation analysis and mechanistic interpretability techniques. Our study applies it to multiple publicly available models, offering a comparative analysis of their learned representations and underlying processes. This approach not only provides insights into the workings of time series foundation models, but also paves the way for more informed model development and application. By elucidating these complex systems, our work contributes to the responsible advancement of time series analysis, enabling researchers and practitioners to harness the full potential of foundation models while understanding their inherent strengths and limitations.

Biography:

Michał is a final year undergraduate student specializing in Artificial Intelligence at Poznan University of Technology. As a former Vice President of the GHOST student organization, he was involved in building a students' community around Machine Learning. Currently, Michał collaborates closely with the Institute of Robotics and Machine Intelligence at PUT and the Auton Lab at Carnegie Mellon University, where he works on a range of applied machine learning projects spanning robotics, deep learning and foundation models.

Tomasz Wojnar photo

Tomasz Wojnar

Jagiellonian University

Co-authors:

Jarosław Hryszko, Adam Roman

SRW Talk 4: Harnessing YouTube for evaluating general-purpose speech recognition machine learning models

Thursday / 7 November 9:10 - 9:30 Main Hall (Student Research Workshop)

Abstract:

Speech recognition has become a critical component in numerous applications, ranging from virtual assistants and transcription services to voice-controlled devices and accessibility tools. The increasing reliance on speech recognition machine learning models requires robust and comprehensive evaluation methodologies to ensure their performance, reliability, and adaptability across diverse scenarios. In our research, we propose YouTube as a data source for evaluating speech recognition models, utilizing the audio from the videos and the subtitles added to them. YouTube offers a rich and continuously updated collection of spoken language data, encompassing various languages, accents, dialects, speaking styles, and audio quality levels. This makes it an ideal data source for evaluating the adaptability and performance of speech recognition models in real-world situations. For the purposes of this research, we created a tool named Mi-Go to help collect data, evaluate models and store results. The experiment that we conducted on open-source models (Whisper, Wav2Vec2 and others) highlights the utility of YouTube as a valuable data source for the evaluation of speech recognition models, ensuring their robustness, accuracy, and adaptability to diverse languages and acoustic conditions. Additionally, by contrasting the machine-generated transcriptions against human-made subtitles, the experiments can help pinpoint potential misuse of YouTube subtitles, like search engine optimization.
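The standard way to contrast machine transcriptions with reference subtitles is word error rate (WER); the abstract does not name its exact metric, so the Levenshtein-based sketch below is an illustrative assumption, not the Mi-Go implementation:

```python
# Word error rate: word-level edit distance (substitutions, insertions,
# deletions) between hypothesis and reference, divided by reference length.

def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between first i reference words and first j
    # hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion over 6 words
```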

Biography:

Tomasz Wojnar is a final year Machine Learning M.Sc. student at the Jagiellonian University. His journey into research began during a High-School Students Internship Programme at CERN in Geneva. He gained his experience at Nokia in the Machine Learning Quantization team, developing tools for 6G. Currently, he is actively involved in projects with the Group of Machine Learning Research (GMUM) at the Jagiellonian University. With a strong interest in various aspects of Deep Learning, he is now focused on generative models and computer vision. Looking ahead, he aspires to advance in the field of automated science discovery.

Nina Zukowska photo

Nina Zukowska

FU Berlin / CMU

Co-authors:

Michal Wilinski, Mononito Goswami, Nick Gisolfi, Artur Dubrawski

SRW Talk 5: Cherish Every MOMENT

Thursday / 7 November 9:50 - 10:10 Main Hall (Student Research Workshop)

Abstract:

Time Series Foundation Models represent a new paradigm for modeling time series data across a variety of tasks and domains with minimal supervision. To simplify pre-training, many foundation models are designed to accept univariate time series of a fixed length as input. However, real-world time series can be both long and multivariate, which limits the widespread adoption of these models. In this paper, we focus on developing effective techniques to expand the context length of time series models, enabling them to handle both long and multivariate time series. We evaluate how extending the context window impacts downstream task performance in both classification and long-horizon forecasting. Additionally, we explore methods to create a multivariate Time Series Foundation Model.

Biography:

Nina Żukowska is a passionate data scientist with a solid foundation in AI and machine learning. She is pursuing a Master's degree at Freie Universität Berlin and completed her Bachelor's at Poznań University of Technology. Her research focuses on time series models and generative AI, and she is always eager to learn and apply new techniques.

Mateusz Smendowski photo

Mateusz Smendowski

AGH University of Kraków

SRW Talk 6: Towards Sustainable Cloud Environments by Leveraging Time Series Forecasting for Enhanced Resource Utilization

Thursday / 7 November 10:10 - 10:30 Main Hall (Student Research Workshop)

Abstract:

The increasing importance of cloud computing underscores its undeniable flexibility, which renders it indispensable in modern organizations. However, the operational model inherent in cloud environments carries significant risks that can impact both cost-effectiveness and energy utilization. While the pay-as-you-go pricing scheme provides a convenient interface for accessing cloud resources, expenses can escalate uncontrollably. Ensuring top-notch QoS (Quality of Service) and steering clear of SLA (Service Level Agreement) violations often involves provisioning more resources than necessary, leaving a considerable margin of unused cloud capacity. While overprovisioning leads to resource wastage and higher costs, underprovisioning risks service downtime. As cloud computing continues to grow in scale and complexity, optimizing resource utilization is crucial for achieving environmental sustainability. Moreover, improving resource management aligns with the principles of GCC (Green Cloud Computing) and sustainable computing. Therefore, machine learning-based resource usage prediction emerges as a significant optimization category. However, workload patterns in cloud environments are influenced by various factors, resulting in time series data that represent historical resource usage characterized by complex, multi-seasonal dependencies and considerable variability. Consequently, introducing a cloud resource usage optimization system tackles the issue of inefficient resource utilization by applying long-term time-series forecasting within the cloud computing domain to generate dynamic resource reservation plans based on predicted demand. 
Due to the multi-faceted nature of system architecture, the thematic scope covers various aspects, such as the role of exploratory data analysis combined with unsupervised anomaly detection, the critical importance of leveraging cloud FinOps (Financial Operations) principles, the evaluation of different machine learning models for time-series forecasting (including recurrent neural networks and transformers), and the qualitative and quantitative assessment of resource reservation plans. A key focus will be showcasing the role of custom domain-specific evaluation measures and demonstrating their relationship with standard machine learning evaluation metrics, highlighting that the model best suited for forecasting may not always be the one that enables the most efficient dynamic resource reservation planning. Additionally, given the potential negative impacts and risks associated with applying machine learning, aspects related to monitoring and enhancing the interpretability of system detections are of key importance. As cloud environments involve different types of virtual machines, the application of forecasting is considered in the context of both HPC (High-Performance Computing) machines and general-purpose ones, highlighting the differences in approaches between them. In environments dominated by general-purpose or diverse-purpose machines, not just those for long-running scientific workflows, the key to resource usage optimization may lie in focusing on optimizing the time-series forecasting process itself, enabling scalable optimization along with the dynamic evolution of environments. Ultimately, ML-based optimization represents a tradeoff between minimizing costs, maximizing resource utilization, and maintaining high service availability.
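The point above, that the model best at forecasting need not produce the most efficient reservation plan, can be illustrated with a toy sketch. The asymmetric cost function and all numbers below are illustrative assumptions, not the evaluation measures used in the talk:

```python
# Toy illustration: the model with the lowest MAE is not necessarily the one
# that yields the cheapest resource reservation plan.
# Assumption: capacity is reserved equal to the forecast, with a heavy penalty
# for under-provisioning (SLA risk) and a mild one for over-provisioning.

def mae(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def reservation_cost(forecast, actual, over=1.0, under=10.0):
    # Asymmetric, domain-specific measure: under-provisioning is 10x worse.
    cost = 0.0
    for f, a in zip(forecast, actual):
        cost += over * max(f - a, 0) + under * max(a - f, 0)
    return cost

actual  = [50, 60, 55, 70, 65]
model_a = [52, 58, 56, 68, 66]   # small symmetric errors, sometimes too low
model_b = [55, 63, 58, 73, 68]   # larger errors, but always over-provisions

print(mae(model_a, actual), mae(model_b, actual))
print(reservation_cost(model_a, actual), reservation_cost(model_b, actual))
```

Model A wins on MAE (1.6 vs 3.4) yet produces a plan more than twice as expensive (44 vs 17), because its occasional under-forecasts incur the heavy under-provisioning penalty.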

Biography:

Mateusz Smendowski, MSc in Computer Science, is a PhD student at AGH University of Krakow. His main interests include the application of machine learning within the domain of cloud resource usage optimization and sustainable computing, with a particular focus on long-term time series forecasting and unsupervised anomaly detection.

Wojciech Zarzecki photo

Wojciech Zarzecki

Computational Medicine Group, MIMUW

Co-authors:

Paulina Szymczak, Roberto Olayo, Krzysztof Koras, Marcin Możejko, Małgorzata Łazęcka, Krzysztof Oksza Orzechowski, Ewa Szczurek

SRW Talk 7: BATTLE-AMP - Benchmarking Assessment Tests for The Leading Efficacy of Antimicrobial Peptides

Thursday / 7 November 10:30 - 10:50 Main Hall (Student Research Workshop)

Abstract:

Antimicrobial peptides (AMPs) have emerged as a promising alternative to combat the growing threat of antibiotic-resistant bacteria. However, they have not been widely adopted in the clinic due to their high toxicity and low activity. Therefore, discriminative methods are crucial to select AMP candidates with desired properties. The comparison of predictive models in AMP discovery is challenging and lacks consistency due to the absence of standardized benchmarks. New methods are evaluated on custom datasets that are often not related to key AMP properties such as activity and are typically released in a manner that is difficult to reproduce. In this work, we reviewed over 50 methods for AMP prediction and observed that code reproducibility was a significant challenge, with only 10 methods meeting our criteria for reproducibility. To address this fundamental problem, we propose an extendable framework for the systematic comparison of AMP prediction methods. We evaluated the robustness of these methods in key biological contexts, such as activity against specific species or adversarial syntactic variations. Our framework ensures higher reproducibility and plug-and-play assessment of new models, aiming to redefine AMP classification in a way that aligns more closely with the biological context.
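The plug-and-play assessment idea can be sketched in a few lines; the toy predictors and peptide sequences below are illustrative and are not part of the BATTLE-AMP framework or its datasets:

```python
# Minimal sketch of a plug-and-play benchmark harness: every registered model
# is evaluated on every dataset with the same metric, so adding a new method
# is a one-line change. The "classifiers" and data here are made up.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def run_benchmark(models, datasets):
    results = {}
    for model_name, predict in models.items():
        for data_name, (seqs, labels) in datasets.items():
            results[(model_name, data_name)] = accuracy(
                [predict(s) for s in seqs], labels)
    return results

# Toy "AMP classifiers": call a peptide active if it is short / lysine-rich.
models = {
    "length_rule": lambda seq: int(len(seq) <= 12),
    "lysine_rule": lambda seq: int(seq.count("K") >= 2),
}
datasets = {
    "toy_activity": (["KKLFKKILKYL", "GIGAVLKVLTTGLPALIS"], [1, 0]),
}
print(run_benchmark(models, datasets))
```

Swapping in a real model only requires adding one callable to `models`, which is the reproducibility property the abstract argues for.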

Biography:

Wojciech Zarzecki is a Computer Science student at the Warsaw University of Technology. He is also a member of the Computational Medicine Group at MIMUW led by Prof. Ewa Szczurek. His main interests are the application of deep learning in antimicrobial peptide discovery and computer vision.

Sai Preetham Sata photo

Sai Preetham Sata

Otto-von-Guericke-University Magdeburg

Co-authors:

Dmitry Puzyrev, Ralf Stannarius

SRW Talk 8: Statistical criteria for the prediction of dynamical clustering in granular gases

Thursday / 7 November 10:50 - 11:10 Main Hall (Student Research Workshop)

Abstract:

Granular matter, which consists of ensembles of interacting macroscopic particles, plays a prominent role in many natural and industrial processes, such as cosmic body formation and the processing of coal and ore in mining. It is ubiquitous in our environment (e.g. sand, gravel) and features in many everyday applications (salt, sugar, coffee beans, etc.). While exhibiting their own unique properties, most granular materials can be roughly attributed to a liquid state (as in an hourglass), a gaseous state (a dust cloud) or a solid state (e.g. a clogged aggregation of particles). Granular gases are relatively sparse ensembles of free-moving macroscopic particles which interact mainly via inelastic collisions. One fascinating property of granular gases is dynamical clustering, i.e. a spontaneous local increase in particle density that reduces particle mobility. To understand dynamical clustering, experiments are performed in microgravity conditions and matching numerical models are developed. Because direct numerical simulations have several disadvantages, such as high computation time, machine-learning-based approaches provide a promising alternative for predicting dynamical clustering. With machine learning, a function can be built that maps the input parameters of the system (number of particles, container size, etc.) to a variable stating whether the system is in the gaseous or the clustered state. With this function, the state of the system for a given set of parameters can be predicted without the need for extensive numerical simulations. To quantify dynamical clustering in granular gases, several statistical criteria have been developed in recent years. Three such criteria are the Kolmogorov-Smirnov test (KS test), the so-called caging effect based on the critical local packing fraction, and the analysis of local density distributions.
We performed multiple numerical simulations based on the VIP-GRAN experiment, and the clustering criteria were evaluated for various combinations of system parameters. The criteria were compared to investigate their advantages and drawbacks. A dataset containing the system parameters and clustering criterion variables was prepared, and several machine learning models were trained and validated on it using standard regression performance metrics. Based on these metrics, the best models were identified for each clustering criterion.
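Of the three criteria, the KS test is the easiest to sketch in code. The density samples below are illustrative, not data from the VIP-GRAN simulations:

```python
import bisect

# Two-sample Kolmogorov-Smirnov statistic: the largest vertical distance
# between the empirical CDFs of two samples. In the clustering context, one
# sample could be observed local packing fractions and the other a
# homogeneous (gas-like) reference.

def ks_statistic(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of the sample with values <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))

gas_like  = [0.1, 0.3, 0.5, 0.7, 0.9]   # roughly uniform local densities
clustered = [0.1, 0.15, 0.2, 0.85, 0.9] # bimodal: dense cluster + dilute gas
print(ks_statistic(gas_like, clustered))  # -> 0.4
```

A large statistic signals that the local density distribution has drifted away from the homogeneous reference, which is the clustering signature the criterion looks for.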

Biography:

Sai Preetham Sata is currently working as a research associate and pursuing PhD studies at Otto-von-Guericke University, Magdeburg in the field of machine learning and granular physics. Previously, he worked as a data scientist and software developer at several startups and research institutes, where he developed deep knowledge of and passion for machine learning, computer vision, data science, deep learning, robotics and reinforcement learning. After completing his Master’s studies in Mechatronics at Hamburg University of Technology with a specialisation in intelligent systems and robotics, he built expertise in computer vision, robotics, machine learning and deep learning through projects in various applications. This keen interest and fascination with machine learning and its applications across domains motivated him to pursue a PhD in this field.

/ Tutorials

Przemysław Spurek photo

Przemysław Spurek

IDEAS NCBR / GMUM, Jagiellonian University

Weronika Smolak-Dyżewska photo

Weronika Smolak-Dyżewska

Jagiellonian University

Piotr Borycki photo

Piotr Borycki

Jagiellonian University

Joanna Waczyńska photo

Joanna Waczyńska

Jagiellonian University

Dawid Malarz photo

Dawid Malarz

Jagiellonian University

Tutorial 1: Gaussian Splatting

Sunday / 10 November 9:00 - 13:00 5070, MIM UW

Description:

In this hands-on tutorial, we will introduce you to Gaussian Splatting, an advanced technology for generating detailed 3D scenes from 2D images. You will begin the session by creating your own dataset: we will capture you or an object of your choice. Each participant will then work on their own dataset. Throughout the tutorial, you will dive deep into the practical aspects of Gaussian Splatting. We’ll guide you step-by-step through the entire workflow, from preparing the dataset from videos to training the model and refining your final 3D objects. By the end of the session, you will not only have your own 3D scene but also a solid understanding of the technology behind it. You will gain insight into critical techniques for optimizing the splatting process, addressing common challenges, and achieving high-quality 3D results. Whether you are a beginner or have some experience with computer graphics, this tutorial will equip you with the skills to employ Gaussian Splatting technology in your own projects. Join us for an exciting journey into the world of cutting-edge 3D scene reconstruction!

Prerequisites: You should be familiar with the 3D Gaussian distribution and know how to train fully connected neural networks.
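As a quick self-check of the first prerequisite: the core object of the method is just a mean plus a covariance matrix evaluated at a point. A minimal 2D sketch follows (Gaussian Splatting itself projects 3D Gaussians to the image plane and alpha-blends many of them; all numbers here are arbitrary):

```python
import math

# A 2D Gaussian "splat": density at point p given mean mu and covariance
# [[a, b], [b, c]]. This evaluates a single footprint only.

def gaussian_2d(p, mu, cov):
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    det = a * c - b * b
    inv = [[c / det, -b / det], [-b / det, a / det]]  # inverse covariance
    dx, dy = p[0] - mu[0], p[1] - mu[1]
    # Squared Mahalanobis distance: d^T Sigma^{-1} d
    m2 = dx * (inv[0][0] * dx + inv[0][1] * dy) \
       + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.exp(-0.5 * m2) / (2 * math.pi * math.sqrt(det))

mu = (0.0, 0.0)
cov = [[1.0, 0.0], [0.0, 1.0]]           # isotropic unit splat
print(gaussian_2d((0.0, 0.0), mu, cov))  # peak density, 1/(2*pi)
```

If the Mahalanobis term and the role of the covariance feel familiar, you are ready for the tutorial.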

Biography:

Przemysław Spurek is the leader of the Neural Rendering research team at IDEAS NCBR and a researcher in the GMUM group operating at the Jagiellonian University in Krakow. In 2014, he defended his PhD in machine learning and information theory. In 2023, he obtained his habilitation degree and became a university professor. He has published articles at prestigious international conferences such as NeurIPS, ICML, IROS, AISTATS, ECML. He co-authored the book Głębokie uczenie. Wprowadzenie [Deep Learning. Introduction] – a compendium of knowledge about the basics of AI. He was the director of PRELUDIUM, SONATA, OPUS and SONATA BIS NCN grants. Currently, his research focuses mainly on neural rendering, in particular NeRF and Gaussian Splatting models.

Weronika is currently pursuing her PhD in Technical Computer Science at Jagiellonian University in Kraków. Her main area of interest is neural rendering models for 3D scene reconstruction, especially Gaussian Splatting. She employs it in different areas from physical simulations to medical data.

Piotr Borycki is currently pursuing a Master’s degree in Computer Mathematics at Jagiellonian University in Kraków. His research focuses on 3D object representation, particularly using Neural Radiance Fields (NeRF) and Gaussian Splatting.

Joanna is pursuing her PhD in Technical Computer Science at Jagiellonian University in Kraków, where she focuses on object representation in computer vision. Her research primarily revolves around models based on Gaussian Splatting, enabling fast rendering and modification of visual data. In recent years, she has had the privilege of collaborating with CERN and the University of Cambridge, while also participating in the first edition of AI Tech program at Wrocław University of Technology.

Dawid has three years of experience working as a Machine Learning Engineer and is now focused on research in 3D object reconstruction using Neural Radiance Fields (NeRF) and Gaussian Splatting. His work aims to enhance the efficiency and accuracy of 3D object representation techniques in modern computer vision applications.

Natasha Al-Khatib photo

Natasha Al-Khatib

Symbio

Tutorial 2: How LLMs are Revolutionizing the cybersecurity field

Sunday / 10 November 9:00 - 13:00 5060, MIM UW

Description:

The ever-evolving threat landscape demands constant adaptation. Traditional methods struggle. Large Language Models (LLMs) emerge, wielding the power of language. This talk explores LLMs’ revolution in cybersecurity. LLMs are AI models trained on massive text and code datasets. This grants them an understanding of complex linguistic patterns, invaluable in cybersecurity. Firstly, LLMs excel at advanced threat detection. Analyzing vast amounts of data, they identify subtle anomalies indicating brewing attacks. Traditional methods rely on pre-defined rules, vulnerable to novel attack vectors. LLMs, with their ability to learn and adapt, identify unseen threats, providing a crucial early warning system. Secondly, LLMs offer proactive threat analysis. By ingesting vast quantities of threat intelligence data, including past attack methods and attacker motivations, LLMs uncover patterns and predict future attack vectors. This allows security teams to take a pre-emptive approach, focusing resources on fortifying potential weaknesses before attackers exploit them. Imagine an LLM analyzing a hacker forum, identifying discussions about targeting a specific software vulnerability. This foresight empowers security professionals to patch the vulnerability before a widespread breach. Furthermore, LLMs can revolutionize vulnerability research. Traditionally, identifying vulnerabilities is time-consuming and laborious. LLMs, with their ability to analyze vast code repositories, pinpoint potential vulnerabilities through code patterns and language constructs associated with known weaknesses. This streamlines the vulnerability discovery process, allowing security teams to address critical issues before attackers identify them. While LLMs offer a powerful new frontier, challenges remain. Issues surrounding explainability, bias in training data, and potential misuse require careful consideration. However, the potential benefits are undeniable.
As these models continue to evolve and integrate with existing security solutions, they hold the promise of a more secure and resilient digital landscape.
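To make the contrast with pre-defined rules concrete, here is the kind of fixed-pattern scanner that LLM-based analysis aims to go beyond. The rules and the code snippet are illustrative toys, not a real detection ruleset:

```python
import re

# Baseline rule-based vulnerability scan over source code. Fixed regexes like
# these only catch what they were written for; the talk's argument is that
# LLMs generalize to code patterns no rule author anticipated.

RULES = {
    "hardcoded_password": re.compile(r"password\s*=\s*['\"]\w+['\"]", re.I),
    "sql_concatenation":  re.compile(r"SELECT .* \+ "),
    "shell_injection":    re.compile(r"os\.system\(.*\+"),
}

def scan(code):
    # Return the names of all rules that fire on the given code.
    return [name for name, pattern in RULES.items() if pattern.search(code)]

snippet = 'password = "hunter2"\nos.system("ping " + host)'
print(scan(snippet))  # -> ['hardcoded_password', 'shell_injection']
```

Anything that does not literally match a pattern, such as an obfuscated credential or an injection built through a helper function, slips through, which is exactly the gap the tutorial discusses.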

Prerequisites: Participants in this LLM-for-cybersecurity tutorial should ideally have a basic understanding of cybersecurity concepts: familiarity with common threats, vulnerabilities, and security practices is helpful. No prior knowledge of LLMs is required: the tutorial will introduce the concept and core functionalities of LLMs, although a basic understanding of artificial intelligence (AI) and machine learning (ML) is beneficial. As for software, participants should install Jupyter Notebook or any Python environment. The LangChain Python library should also be installed for processing cybersecurity tasks with LLMs.

Biography:

Dr. Natasha Al-Khatib is a seasoned expert in automotive cybersecurity with a Ph.D. in Artificial Intelligence. Their doctoral research focused on developing cutting-edge intrusion detection systems for autonomous vehicles, demonstrating a deep understanding of the unique challenges and vulnerabilities within this domain. Following their academic pursuits, Natasha joined ETAS Bosch as a cybersecurity consultant, where they contributed to the development and implementation of various tools and services to enhance automotive security. Their expertise in this field allowed them to play a pivotal role in safeguarding critical automotive systems. Currently, Natasha holds the position of Automotive Cybersecurity Leader at Symbio, a leading provider of fuel cell systems for the automotive industry. In this role, they leverage their extensive knowledge and experience to ensure the security of Symbio’s hydrogen-based fuel cell technology, protecting both consumers and manufacturers from potential cyber threats.

Michał Bartoszkiewicz photo

Michał Bartoszkiewicz

Pathway

Jan Chorowski photo

Jan Chorowski

Pathway

Adrian Kosowski photo

Adrian Kosowski

Pathway

Adrian Łańcucki photo

Adrian Łańcucki

NVIDIA

Przemysław Uznański photo

Przemysław Uznański

Pathway

Tutorial 3: Beyond transformers - new sequence processing architectures

Sunday / 10 November 9:00 - 13:00 4070, MIM UW

Description:

The transformer neural architecture took the AI community by storm and is now used in many applications, from language models to image generation. With its widespread use, we are starting to better understand the transformer’s operating principles, its limitations, and possible solutions to them. This tutorial aims to offer a clear picture of where transformer models are today, where they might be heading, and what might replace them. The tutorial will open with an overview of how transformers work, their strengths, their weaknesses, and recent theoretical findings about their capabilities, such as their ability to simulate different types of computations and their scalability with hardware. We will next cover how transformers handle context and attention and compare them with newly proposed alternatives, such as state-space models, to highlight the differences and trade-offs. We will discuss learning mechanisms both for learning from training data during pre-training and for in-context learning during evaluation. We’ll look at techniques for handling long contexts and speculate on the relationship between in-context and from-data learning. This will lead us to open questions about the future of AI models, such as where knowledge is actually stored in sequence prediction models and whether there is potential for models with almost unlimited learning capacity.
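The operation at the center of this discussion fits in a few lines of plain Python. Its cost grows quadratically with sequence length, which is one of the limitations the alternative architectures try to address (a single-head sketch, no batching or masking):

```python
import math

# Scaled dot-product attention for one head, on plain lists of floats.
# Each query is scored against every key, hence the O(n^2) cost in sequence
# length that motivates alternatives such as state-space models.

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:                       # one output row per query
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)     # attention weights over all keys
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0], [0.0]]
print(attention(Q, K, V))  # the query attends mostly to the first key
```

State-space models replace this all-pairs interaction with a fixed-size recurrent state, trading exact pairwise attention for linear-time processing, which is precisely the trade-off the tutorial examines.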

Prerequisites: Familiarity with transformer architecture. Python, PyTorch, Colab.

Biography:

Michał Bartoszkiewicz designs the Pathway data processing framework. He is a competitive programmer with a long list of achievements including Topcoder finals, Google Code Jam and Facebook HackerCup. He co-founded nasza-klasa.pl, the first Polish social network.

Jan Chorowski is the CTO at Pathway building Live AI systems, underpinned by a proprietary real-time data processing engine and an AI framework. He received his M.Sc. degree in electrical engineering from Wrocław University of Technology and Ph.D. from University of Louisville. He has worked at the University of Wroclaw and has collaborated with several research teams, including Google Brain, Microsoft Research and Yoshua Bengio’s lab.

Adrian Kosowski specializes in network theory, discrete dynamical systems, graph navigability, and graph learning. He obtained his PhD in Computer Science at the age of 20, and has co-authored over 100 publications across Theoretical Computer Science, Physics, and Biology. Before co-founding Pathway, he was a tenured researcher at Inria and an associate professor at Ecole Polytechnique. He is also a co-founder of Spoj.com.

Adrian Łańcucki is a senior engineer at NVIDIA. His research focuses on representation learning and generative modeling for text and speech, as well as improving quality and efficiency at scale. In 2019, Adrian obtained a Ph.D. in machine learning from the University of Wroclaw, Poland. Since then, he has actively collaborated with academia.

Przemek Uznański is the streaming algorithms and data structures expert at Pathway and a former competitive programmer (finalist of the ACM ICPC, TopCoder Open and Facebook HackerCup). He did his PhD at Inria Bordeaux on distributed computing, was then a post-doc at ETH Zurich, Aalto University (Finland), and in Marseille, and was an assistant professor at the University of Wrocław.

Jakub Adamczyk photo

Jakub Adamczyk

AGH University of Krakow / Placewise

Piotr Ludynia photo

Piotr Ludynia

AGH University of Krakow

Tutorial 4: Machine learning on molecules and molecular fingerprints

Sunday / 10 November 9:00 - 13:00 4060, MIM UW

Description:

Machine learning on molecules is a vital subject in chemoinformatics and de novo drug design. Tasks like molecular property prediction and virtual screening are crucial in modern pharmaceutical workflows. However, molecules are nontrivial to process, typically being represented as attributed graphs. As such, they are naturally non-Euclidean and have no notion of distance, requiring vectorization before performing classification, regression, or other ML tasks. Dedicated embedding methods are required in order to encode relevant structural and functional information. Molecular fingerprints are the most popular group of algorithms in this regard, offering efficient solutions for many problems. One of the most recent developments in this area is scikit-fingerprints, a scikit-learn compatible library for easy and efficient computation of molecular fingerprints, which will be extensively used during the tutorial. This workshop will introduce participants to machine learning on molecules, molecular fingerprints, and how to apply them to practical problems. We will cover the basics of chemoinformatics, reading and processing data, how molecular fingerprints work, and how to apply them to molecular property prediction or virtual screening. As a bonus, participants will learn why graph neural networks (GNNs) are not a silver bullet, and why molecular fingerprints are still very much relevant in the era of GNN popularity.
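The vectorization idea can be sketched without any chemistry libraries. The toy below hashes substrings of a SMILES string into a bit vector; real fingerprints such as ECFP (as implemented in scikit-fingerprints) hash circular atom environments of the molecular graph instead, but the principle of "structure in, fixed-length vector out" is the same:

```python
import zlib

# Toy hashed fingerprint over SMILES substrings. Not a chemically meaningful
# descriptor; only an illustration of the hashing-into-bits mechanism.

def toy_fingerprint(smiles, n_bits=1024, max_frag=3):
    bits = [0] * n_bits
    for size in range(1, max_frag + 1):
        for i in range(len(smiles) - size + 1):
            fragment = smiles[i:i + size]
            bits[zlib.crc32(fragment.encode()) % n_bits] = 1  # set fragment's bit
    return bits

def tanimoto(a, b):
    # Standard similarity measure between binary fingerprints.
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

ethanol  = toy_fingerprint("CCO")
propanol = toy_fingerprint("CCCO")
print(tanimoto(ethanol, propanol))  # high: the strings share most fragments
```

Once molecules are fixed-length vectors like these, any scikit-learn classifier or regressor can consume them, which is the workflow the tutorial builds on with real fingerprints.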

Prerequisites: We assume reasonably good Python programming knowledge, as well as general familiarity with machine learning and popular data science libraries, e.g. NumPy, Pandas, matplotlib, scikit-learn. Any previous experience with chemoinformatics is not required. We will work with Jupyter Notebooks, and attendees can use either local development environment or Google Colab.

Biography:

Jakub Adamczyk is a PhD candidate in Computer Science at AGH University of Krakow. His research concerns graph representation learning, graph classification, chemoinformatics, and molecular property prediction. He also works at Placewise as Data Science Engineer, focusing on various ML problems in tabular learning, CV and NLP, and their end-to-end MLOps. Besides his professional work, he does Historical European Martial Arts (HEMA) and likes reading.

Piotr Ludynia is currently pursuing a Master’s degree at AGH University of Krakow, Poland, specializing in machine learning with a focus on graph and molecular learning. He also works on deep learning research and neural network acceleration at Intel. In his free time, he writes, plays modern metal guitar, and produces music.

Anastasia Psarou photo

Anastasia Psarou

Jagiellonian University

Ahmet Onur Akman photo

Ahmet Onur Akman

Jagiellonian University

Tutorial 5: Multi-Agent Reinforcement Learning Tutorial for Optimal Urban Route Choice Using TorchRL

Sunday / 10 November 14:30 - 16:30 5060, MIM UW

Description:

In this tutorial, we will demonstrate how to implement Multi-Agent Reinforcement Learning (MARL) scenarios in an urban setting using our custom PettingZoo framework, RouteRL. We will showcase a simplified traffic route choice environment integrating Simulation of Urban MObility (SUMO), an open-source traffic simulation, with the reinforcement learning library, TorchRL. This framework aims to replicate the daily decision-making process involved in route selection. It incorporates two types of agents: human drivers, which are modeled using human route choice behavioral models from transportation research, and Automated Vehicles (AVs), RL agents with individual or collective goals. We will present our environment and its functionality as well as experiments with different state-of-the-art RL algorithms aiming to assess how the agents learn and compare the learning of human agents and AVs. The tutorial is expected to take approximately 2 hours.
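The daily decision loop the environment replicates can be sketched with a toy day-to-day model: travel times grow with congestion, and drivers update a simple logit choice rule from experienced times. The update rule and all parameters below are illustrative, not the RouteRL API or its behavioral models:

```python
import math
import random

# Toy day-to-day route choice: N drivers pick one of two routes each "day".
# Travel time on a route grows with its load; each driver smooths experienced
# times into an estimate and chooses via a logit model.

random.seed(0)
N_DRIVERS, DAYS, BETA, LR = 100, 200, 0.5, 0.1
free_flow = [10.0, 12.0]                        # route 0 is faster when empty
est = [[11.0, 11.0] for _ in range(N_DRIVERS)]  # per-driver time estimates

def travel_time(route, load):
    # Simple linear congestion effect.
    return free_flow[route] * (1 + load / N_DRIVERS)

for _ in range(DAYS):
    choices = []
    for d in range(N_DRIVERS):
        # Logit choice: lower estimated time -> higher choice probability.
        w = [math.exp(-BETA * t) for t in est[d]]
        choices.append(0 if random.random() < w[0] / (w[0] + w[1]) else 1)
    loads = [choices.count(0), choices.count(1)]
    for d, r in enumerate(choices):
        experienced = travel_time(r, loads[r])
        est[d][r] += LR * (experienced - est[d][r])  # exponential smoothing

print(loads)  # settles near an equilibrium split favoring the faster route
```

Replacing some of these scripted human drivers with RL agents that optimize their own reward is, in spirit, the step the tutorial takes with TorchRL on top of SUMO.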

Prerequisites: Participants should have a basic understanding of reinforcement learning concepts before attending this tutorial. Additionally, they should prepare their development environment by installing and setting up the necessary tools. This includes creating and activating a Conda environment with Python 3.12. Within this environment, they should use pip to install the required libraries: Gym, PettingZoo, and torchrl. Additionally, the participants should download and install SUMO software from https://eclipse.dev/sumo/.

Biography:

Anastasia Psarou is a PhD student in the Faculty of Mathematics and Computer Science at Jagiellonian University. Currently, she is working as part of the COeXISTENCE team towards discovering what happens in the future cities when intelligent machines (autonomous vehicles) and humans share limited resources of urban mobility.

Ahmet Onur Akman is a Computer Engineer with a specialization in Artificial Intelligence. Currently, he is a PhD student in the Faculty of Mathematics and Computer Science at Jagiellonian University interested in foreseeing what happens when our cities are shared with autonomous, intelligent robots, competing with human drivers for limited resources.

Marek Adamczyk photo

Marek Adamczyk

University of Wrocław

Tutorial 6: Practical Submodular Optimization

Sunday / 10 November 14:30 - 18:30 4060, MIM UW

Description:

In today’s rapidly evolving landscape, Generative AI (GenAI) and large language models (LLMs) dominate the spotlight, sparking excitement and revolutionizing industries across the board. While these advancements capture imaginations, it’s easy to overlook the continued relevance of classical algorithmic tools that remain fundamental to solving real-world problems. Among these, submodular optimization stands as a cornerstone of algorithmic heuristics, providing a powerful and elegant abstraction for a wide range of practical scenarios. This tutorial highlights how submodular optimization offers efficient solutions to problems where greedy algorithms excel — thanks to its unique structure that allows ideas from convex continuous optimization to be applied in discrete scenarios. From sensor placement to data summarization through feature selection, submodular optimization elegantly balances simplicity and performance. Moreover, there are situations where sophisticated deep learning models are just impossible to apply due to the very nature of the problem itself. For example, when working with streaming data—where the full dataset isn’t available in advance and decisions must be made on the fly—submodular optimization becomes indispensable. Its adaptability to such constraints showcases why it remains a critical tool. The tutorial will be divided into three parts. It will begin with an introduction to the fundamental mathematics behind submodular functions, providing a clear and accessible overview. The second part will focus on practical results, drawing from selected ICML, NeurIPS, and AAAI papers to demonstrate how submodular optimization is applied across various domains. The final part will examine a detailed case study, illustrating how submodularity can be used to model dynamic pricing in ride-hailing platforms, connecting theory with a real-world application.
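The greedy heuristic at the heart of the tutorial fits in a dozen lines; the sensor-placement instance below is a made-up toy:

```python
# Greedy maximization of a monotone submodular function (here: set coverage)
# under a cardinality constraint. The classic result is that this simple loop
# achieves at least a (1 - 1/e) fraction of the optimal coverage.

def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the set with the largest marginal gain; submodularity is what
        # makes this greedy criterion come with a guarantee.
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:
            break                      # no remaining gain, stop early
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance: each candidate sensor location covers some area ids.
locations = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {5, 6},
    "D": {1, 6},
}
chosen, covered = greedy_max_coverage(locations, k=2)
print(chosen, sorted(covered))  # -> ['A', 'C'] [1, 2, 3, 4, 5, 6]
```

Note that the greedy pick after "A" is "C" (marginal gain 2) rather than "B" (marginal gain 1, since areas 3 and 4 are already covered); marginal gain, not raw size, drives the choice.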

Biography:

Marek Adamczyk works at the University of Wrocław’s Institute of Informatics. He is a theoretical computer scientist focused on algorithms that predict the future. His research addresses combinatorial problems, drawing from fields such as probability theory, stochastic optimization, online optimization, mechanism design, and algorithmic game theory. While his work has a strong foundational and theoretical focus, all of the problems he explores are motivated by natural business and practical applications. Modern internet environments require solutions at the intersection of machine learning, data science, and advanced algorithmics due to the vast amounts of data from which predictions must be inferred and the need for rapid decision-making in large data streams.