L2S Talks
L2S Talks is the lecture series of the Research Unit. It combines regular talks by international and national guests as well as internal speakers.
24 September 2024
Dr Sonali Agarwal (Big Data Analytics Lab, Indian Institute of Information Technology):
Addressing Uncertainties and Challenges in Big Data for Healthcare
This session explores strategies to address uncertainties and challenges in healthcare big data, focusing on concept drift, complex event processing (CEP), and multimodal data integration. Concept drift, where data properties change over time, can undermine predictive models; adaptive techniques are discussed to maintain model accuracy. CEP enables real-time analysis of data streams, supporting timely decision-making and improving patient outcomes. Multimodal data integration synthesizes diverse data types into a unified framework, essential for comprehensive patient understanding. Together, these approaches enhance the reliability and effectiveness of big data analytics in healthcare.
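The drift-handling techniques discussed in the talk are adaptive and far more sophisticated, but the core idea of concept drift, that a change in the data's properties over time can silently break a model, can be sketched with a minimal window-comparison detector. All names and thresholds below are illustrative, not from the talk:

```python
import random
from collections import deque

def detect_drift(stream, window=50, threshold=1.0):
    """Flag indices where the mean of the most recent window departs from
    the mean of an earlier reference window (a toy drift detector)."""
    ref, recent = deque(maxlen=window), deque(maxlen=window)
    alarms = []
    for i, x in enumerate(stream):
        if len(ref) < window:
            ref.append(x)                 # fill the reference window first
            continue
        recent.append(x)
        if len(recent) == window:
            if abs(sum(recent) / window - sum(ref) / window) > threshold:
                alarms.append(i)
                ref = deque(recent, maxlen=window)   # re-anchor after drift
                recent = deque(maxlen=window)
    return alarms

random.seed(0)
# A stream whose mean jumps from 0 to 2 halfway through: abrupt concept drift.
stream = [random.gauss(0, 1) for _ in range(500)] + \
         [random.gauss(2, 1) for _ in range(500)]
alarms = detect_drift(stream)
print(alarms)  # first alarm shortly after index 500
```

A healthcare model monitored this way would be retrained once an alarm fires, which is the "adaptive technique" idea in miniature.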
Dr Agarwal is presently an Associate Professor in the Information Technology Department of the Indian Institute of Information Technology, Allahabad. She received her PhD at IIIT Allahabad and has been teaching there as a faculty member since October 2009. Her main research interests are in the areas of Stream Analytics, Big Data, Stream Data Mining, Complex Event Processing Systems, Support Vector Machines, and Software Engineering.
17 September 2024
Dr Partha Das (University of Amsterdam):
Physics-based photometric invariance for image understanding
The physics of image formation is known: light from light sources gets reflected (and absorbed) by an object and reaches the sensor. Inverting this process is called intrinsic image decomposition (IID). Deep learning-based approaches have shown remarkably good performance in inverting this phenomenon. However, they often require very large datasets and complicated loss functions. Further, most approaches rely purely on image features, ignoring the underlying physics of image formation, which adds to the data requirement and causes failures in the presence of strong photometric effects. In this talk, I will discuss how the physics of image formation can be exploited to simplify the problem and allow the network to learn the underlying model rather than memorising the dataset. I will talk about my work on utilizing such physics-based modifications to simplify the IID problem, resulting in smaller networks that are trained on much smaller, purely synthetic datasets, while generalising to unseen real-world scenes.
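As a toy illustration of the physics at play (not Dr Das's method): under a simple Lambertian model, a pixel colour is the product of a surface reflectance and a scalar shading factor, so normalised rgb chromaticity divides the shading out. This is the kind of physics-based photometric invariant a network can exploit instead of learning it from data. All values below are made up:

```python
# Toy Lambertian image formation: pixel colour I = R * s, where R is the
# surface reflectance (albedo) and s a scalar shading factor.
def chromaticity(rgb):
    """Normalised rgb: a classic shading invariant (the scalar s cancels)."""
    total = sum(rgb)
    return tuple(c / total for c in rgb)

albedo = (0.6, 0.3, 0.1)                 # hypothetical reddish surface
lit    = tuple(c * 0.9 for c in albedo)  # brightly lit pixel
shadow = tuple(c * 0.2 for c in albedo)  # same surface in shadow

# Intensities differ by 4.5x, yet chromaticity is identical: the invariant
# removes shading, which is exactly what IID must separate from reflectance.
print(chromaticity(lit), chromaticity(shadow))
```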
Dr Das holds a PhD from the Computer Vision Lab at the Informatics Institute of the University of Amsterdam, where he was supervised by Prof Theo Gevers and Dr Sezer Karaoglu. His research interests are in physics-based image understanding using deep learning and traditional approaches, specifically tasks like image formation, light estimation, and colour correction. He is interested in leveraging these approaches to build efficient and stable AR/VR/mixed reality systems.
26 August 2024
Ilya Chugunov (Princeton University):
Neural Field Representations of Mobile Computational Photography
Burst imaging pipelines allow cellphones to compensate for less-than-ideal optical and sensor hardware by computationally merging multiple lower-quality images into a single high-quality output. The main challenge for these pipelines is compensating for pixel motion, estimating how to align and merge measurements across time while the user's natural hand tremor involuntarily shakes the camera. In this work, we explore continuous projective models of burst photography, backed by multi-resolution neural field representations, and fit to real in-the-wild mobile burst captures. These task-specific models not only estimate and compensate for pixel motion, but use it as a powerful source of geometric information to estimate scene depth, see behind occlusions, separate reflections, and erase photographer-cast shadows.
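The alignment step at the heart of a burst pipeline can be sketched with a toy 1D stand-in: given a reference frame and a frame displaced by hand tremor, find the shift that maximises their correlation. Real pipelines (and the neural field models in this talk) estimate dense, sub-pixel, projective motion; everything here is a deliberately minimal illustration:

```python
def estimate_shift(ref, moved, max_shift=5):
    """Find the integer shift s that maximises the correlation between
    ref and moved (a toy stand-in for burst-frame alignment)."""
    def score(s):
        return sum(ref[i] * moved[i + s]
                   for i in range(len(ref)) if 0 <= i + s < len(moved))
    return max(range(-max_shift, max_shift + 1), key=score)

# A 1D "frame" with a single bright bump.
signal = [0.0] * 40
signal[15:20] = [1.0, 3.0, 5.0, 3.0, 1.0]

# Hand tremor moves the bump two samples to the right in the next frame.
shifted = [0.0] * 2 + signal[:-2]

print(estimate_shift(signal, shifted))  # recovers the shift of 2
```

Once frames are aligned, merging them averages noise down; the estimated motion itself, as the abstract notes, doubles as a geometric cue for depth.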
Ilya Chugunov is a PhD candidate in the Princeton Computational Imaging Lab, advised by Professor Felix Heide. His work focuses on neural field representations for inverse imaging problems, depth reconstruction, and computational photography. He received his bachelor's in electrical engineering and computer science from UC Berkeley, where he worked on low-rank reconstruction methods for magnetic resonance imaging with Professors Moriel Vandsburger and Miki Lustig. Ilya is an NSF graduate research fellow.
29 April 2024
Prof Onofre Martorell (University of the Balearic Islands/University of Siegen):
Variational models for High Dynamic Range video
Sequences of images taken with different exposure times can be combined by either multi-exposure fusion (MEF) or high dynamic range (HDR) techniques. In both cases, information from all images of a sequence is combined in order to obtain an image with a greater amount of detail in all areas of the selected reference image. This combination of information can also be paired with denoising techniques to remove the noise present in the images. In this presentation we will present two different techniques for joint multi-exposure fusion and denoising which exploit the redundancy of similar patches. The first is a MEF technique for static images and the second is an HDR technique applied to RAW data which is able to deal with ghosting artifacts.
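The patch-based joint fusion and denoising presented in the talk is considerably more involved, but the basic MEF principle, weighting each exposure by how well exposed each pixel is, can be sketched with a classical well-exposedness weight (Mertens-style, not the speaker's method; all pixel values are invented):

```python
import math

def well_exposedness(p, sigma=0.2):
    """Weight favouring pixels near mid-grey (0.5), on a [0, 1] scale."""
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Per-pixel weighted average of an exposure stack."""
    fused = []
    for pixels in zip(*exposures):
        w = [well_exposedness(p) for p in pixels]
        fused.append(sum(wi * pi for wi, pi in zip(w, pixels)) / sum(w))
    return fused

under = [0.05, 0.10, 0.45]   # dark frame: shadows crushed, highlights fine
over  = [0.40, 0.60, 0.98]   # bright frame: shadows fine, highlights clipped
print(fuse([under, over]))
```

Each output pixel is pulled toward whichever frame exposed it well, which is how the fused result gains detail in all areas at once.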
Onofre Martorell belongs to the research group of Mathematical Analysis and Processing of Images (TAMI) at the University of the Balearic Islands and is a visiting Professor with L2S. His research areas are computer vision and mathematical digital image processing and, more specifically, on the detection of geometrical structures in images and registration of multi-exposure images.
4 March 2024
Prof Matthias Uhl (TH Ingolstadt):
Behavioral Ethics of Artificial Intelligence
Behavioral ethics deviates from armchair philosophy by systematically incorporating the moral concepts and intuitions of laypeople. In his talk, Prof Uhl will discuss behavioral approaches to questions from two different subfields of the ethics of AI: machine ethics and human-machine ethics. In machine ethics, he will consider the problem of algorithm aversion and folk attitudes regarding the use of AI. In human-machine ethics, he considers the influence that the use of AI has on people’s moral behavior. He will present some exemplary research projects from each subfield.
Matthias Uhl is Professor of Societal Implications and Ethical Aspects of Artificial Intelligence at Technische Hochschule Ingolstadt. Before this, he was a research associate at the Peter Löscher Chair of Business Ethics (TU München) and leader of the junior research group Ethics of Digitization (TUM School of Governance, TU München). His PhD thesis at the Max Planck Institute of Economics and Friedrich Schiller University Jena was on Multiple Selves in Economics – Four Studies on Intrapersonal Conflict. His research focuses on the ethical implications of digitization, especially human-machine interaction, moral intuitions regarding the use of algorithms, and polarization tendencies through social media. He regularly gives lectures in the ethics of technology, business ethics, and behavioral economics.
27 October 2023
Dr Onofre Martorell (University of the Balearic Islands/University of Siegen):
Variational models for High Dynamic Range video
High Dynamic Range (HDR) reconstruction for multi-exposure video sequences is a very challenging task. Consecutive frames are acquired with alternating exposure times, generally only two or three different values. Generally, HDR video methods aim at registering neighboring frames and fusing them using image HDR techniques. In this talk we will present variational models for both main steps of HDR video synthesis: on the one hand, a model for optical flow estimation which incorporates a comparison of non-consecutive images and temporal regularization; on the other hand, a model for HDR video synthesis that uses a nonlocal regularization term to combine pixel information from neighboring frames.
Onofre Martorell belongs to the research group of Mathematical Analysis and Processing of Images (TAMI) at the University of the Balearic Islands and is a visiting PostDoc with L2S. His research areas are computer vision and mathematical digital image processing and, more specifically, on the detection of geometrical structures in images and registration of multi-exposure images.
14 August 2023
Dr Rajiv Joshi (T. J. Watson Research Center, IBM):
Variability Aware Design in nm Era
As technology scales, process, voltage, and temperature (PVT) variations and model inaccuracies impact design yield. In this talk, a predictive analytical technique based on a statistical analysis methodology targeting both memory and custom logic design applications is highlighted. The methodology hinges on Mixture Importance Sampling (MIS) and is 5-6 orders of magnitude faster than Monte Carlo, and a few orders faster than recent techniques. For advanced technologies, we extend the methodology to enable key features such as Front End of Line (FEOL) and Back End of Line (BEOL) parasitic extraction and TCAD for manufacturability at 16nm and below. This increases the statistical confidence in the functionality and operability of the system-on-chip as a whole. The methodology is further extended to predict aging effects in memories, and the utility of this technique is demonstrated through hardware fabrication.
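Why importance sampling beats plain Monte Carlo for yield analysis can be seen in a minimal sketch: estimating a rare failure probability by drawing from a proposal centred on the failure region and reweighting. This uses a single shifted Gaussian proposal rather than the mixture of distributions that MIS employs, and the numbers are illustrative, not from the talk:

```python
import math
import random

random.seed(1)

def failure_prob_is(threshold=4.0, n=10000):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1),
    drawing from a proposal N(threshold, 1) centred on the failure region."""
    total = 0.0
    for _ in range(n):
        y = random.gauss(threshold, 1.0)
        if y > threshold:
            # likelihood ratio N(0,1) / N(threshold,1) evaluated at y
            total += math.exp(-threshold * y + threshold ** 2 / 2)
    return total / n

def failure_prob_mc(threshold=4.0, n=10000):
    """Plain Monte Carlo: almost never sees a failure at this budget."""
    return sum(random.gauss(0, 1) > threshold for _ in range(n)) / n

true_p = 0.5 * math.erfc(4.0 / math.sqrt(2))  # exact Gaussian tail
print(failure_prob_is(), failure_prob_mc(), true_p)
```

With 10,000 samples the reweighted estimate lands within a few percent of the exact tail probability of about 3.2e-5, while plain Monte Carlo usually records zero failures; this sample-efficiency gap is what the quoted 5-6 orders of magnitude speedup exploits at SRAM yield levels.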
Rajiv Joshi is a Mercator fellow at L2S. He holds a master's degree from MIT and a doctorate from Columbia, and has worked at IBM for almost 40 years. His primary research has been in memory and, more recently, big data analytics. He is also one of the drivers behind the AI in Circuits and Systems conference.
17 July 2023
Jérome Eertmans (UC Louvain):
Differentiable Ray Tracing for Telecommunications
Over the last few decades, ray tracing (RT) has established itself as the method of choice for modelling wave propagation. Telecoms being no exception, it is common practice to model the transmission channel using RT software. Although RT offers remarkable accuracy, its computational cost raises many questions, not least that of differentiability. This presentation discusses ray tracing and its applications in telecoms, as well as mathematical tools for making RT differentiable.
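What "differentiable" buys can be seen in a minimal example, assuming a single specular bounce off a flat ground plane (a standard textbook setup, not taken from the talk): the image method gives the path length in closed form, and its derivative with respect to the receiver position, which an autodiff-based ray tracer would produce automatically, matches a finite-difference check:

```python
import math

def path_length(rx, tx=(0.0, 1.0)):
    """Length of the single-bounce specular path off the plane y = 0,
    via the image method: reflect the transmitter, measure a straight line."""
    tx_image = (tx[0], -tx[1])
    return math.dist(tx_image, rx)

def d_path_d_rx_x(rx, tx=(0.0, 1.0)):
    """Analytic derivative of the path length w.r.t. the receiver's x
    coordinate; the closed form autodiff would recover."""
    tx_image = (tx[0], -tx[1])
    return (rx[0] - tx_image[0]) / math.dist(tx_image, rx)

rx = (3.0, 1.0)
h = 1e-6
fd = (path_length((rx[0] + h, rx[1])) -
      path_length((rx[0] - h, rx[1]))) / (2 * h)
print(d_path_d_rx_x(rx), fd)
```

Such gradients let channel parameters (antenna placement, material properties) be optimised directly, which is the motivation for making RT differentiable.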
Jérome Eertmans is a PhD student at UCLouvain within the COmmunication SYstems (COSY) group. His research topic is Differentiable Ray Tracing for Telecommunications.
5 July 2023
Jovita Lukasik (MPI for Informatics Saarbrücken):
Topology Learning for Prediction, Generation, and Robustness in Neural Architecture Search
Jovita Lukasik is a PhD candidate in the Computer Vision focus group of the Data and Web Science Group and in the computer vision and machine learning group at the Max Planck Institute for Informatics in Saarbrücken.
7 June 2023
Jack Naylor (Sydney University Robotic Imaging Lab):
Through the Looking Glass - Neural Fields for Robotics
Robotic imaging is an emerging field which seeks to synthesize concepts across computational imaging and robotics to create new cameras and algorithms that extend robotic capabilities. Unconventional camera technologies, including plenoptic, neuromorphic, and hyperspectral cameras, enable robotic platforms to deal with unique scenarios and environments; however, they provide information which requires additional interpretation and are not suited to all robotic tasks. This talk will provide an overview of recent work towards the automatic interpretation of new cameras on board robotic platforms using neural interpretation of scenes, and the regularisation of neural radiance fields to improve scene representation for robotics around complex visual phenomena.
Jack Naylor is a PhD candidate with the Australian Centre for Robotics at the University of Sydney where he also majored in Space Engineering and Physics. His research seeks to adapt neural representations of light, in the form of neural radiance fields (NeRFs), as a new robotic map representation to enable understanding of complex visual phenomena in unstructured environments.
31 May 2023
Dr John Meshreki (University of Siegen):
Modelling Vignetting in Fourier Ptychographic Microscopy
John Meshreki completed his master's and PhD in experimental particle physics with the ATLAS experiment at CERN. After his PhD, he began investigating a new research area with innovative applications in biology – bringing improvements to the quality of people's lives – namely Computational Microscopy.
24 May 2023
Prof Wolfgang Heidrich (KAUST):
Learned Optics — Improving Computational Imaging Systems through Deep Learning and Optimization
Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Historically, many such systems have employed simple transform-based reconstruction methods. Modern optimization methods and priors can drastically improve the reconstruction quality in computational imaging systems. Furthermore, learning-based methods can be used to design the optics along with the reconstruction method, yielding truly end-to-end optimized imaging systems that outperform classical solutions.
Wolfgang Heidrich is a Mercator fellow at L2S and Professor of Computer Science and Electrical and Computer Engineering at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, where he is a member of the KAUST Visual Computing Center and served as its director from 2014 to 2021. He received his PhD from the University of Erlangen in 1999, and then worked as a Research Associate in the Computer Graphics Group of the Max-Planck-Institute for Computer Science in Saarbrücken, Germany, before joining UBC in 2000. Prof. Heidrich's research interests lie at the intersection of imaging, optics, computer vision, computer graphics, and inverse problems. His more recent interest is in computational imaging, focusing on hardware-software co-design of the next generation of imaging systems, with applications such as high dynamic range imaging, compact computational cameras, and hyperspectral cameras. Prof. Heidrich's work on high dynamic range displays served as the basis for the technology behind Brightside Technologies, which was acquired by Dolby in 2007. Prof. Heidrich is a Fellow of the IEEE and Eurographics, and the recipient of a Humboldt Research Award.