Monika Henzinger. Differentially private dynamic data structures
The goal in the design of data structures has traditionally been to minimize the use of resources such as time and space. However, in recent years a variety of applications have shown the need for additional requirements such as differential privacy. In this talk we give an overview of the state of the art of differentially private dynamic data structures.
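As a minimal illustration of what a differentially private dynamic data structure can look like, here is a sketch of a counter under continual observation based on the classic dyadic (binary-tree) mechanism; the class and its parameters are an illustration of the setting, not an algorithm covered in the talk.

import math
import numpy as np

class PrivateCounter:
    """Differentially private counter under continual observation,
    using the dyadic (binary-tree) mechanism. Illustrative sketch only."""

    def __init__(self, epsilon, horizon):
        self.eps = epsilon
        self.levels = max(1, math.ceil(math.log2(horizon)))
        self.t = 0
        self.block = [0] * (self.levels + 1)    # exact sums of open dyadic blocks
        self.noisy = [0.0] * (self.levels + 1)  # their released, noisy versions

    def update(self, x):
        """Feed one stream element x in {0, 1}; return a noisy running count."""
        self.t += 1
        lvl = 0                                  # level of the block that closes at time t
        while lvl < self.levels and not (self.t >> lvl) & 1:
            lvl += 1
        self.block[lvl] = x + sum(self.block[:lvl])
        for j in range(lvl):                     # lower blocks are folded in and reset
            self.block[j], self.noisy[j] = 0, 0.0
        # Each stream element affects at most levels+1 blocks, hence the noise scale.
        self.noisy[lvl] = self.block[lvl] + np.random.laplace(0.0, (self.levels + 1) / self.eps)
        return sum(self.noisy)

counter = PrivateCounter(epsilon=1.0, horizon=1024)
print([round(counter.update(1), 1) for _ in range(8)])  # noisy prefix sums of a 0/1 stream

The released counts carry additive error that is only polylogarithmic in the stream length, which is the kind of privacy/accuracy/resource trade-off that recurs for richer dynamic data structures.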
Elisabeth Lex. Fairness and Bias in Information Access Systems
Stephan Weiss. Towards Online System Identification and Long-Duration Autonomy
Self-awareness is a crucial element in modern, adaptive, and versatile systems. Apart from regular state estimation, self-awareness extends to automated decisions about whether a sensor signal is valid, vanishes, appears, or reappears, and about how a sensor (or agent) generating such a signal is automatically incorporated into the overall system or swarm. While such elements can be achieved with state machines and simplifying assumptions, systems that tackle them in an inherent and probabilistic fashion tend to be more robust and versatile.
In this talk, we will focus on such probabilistic approaches, which can handle fluctuating sensors and intermittent sensor-signal availability in a seamless fashion. Based on this, we will present our new autonomy stack designed for long-duration autonomy missions. We then extend the estimation framework to use knowledge of the physical model to identify model parameters, sensor quality, and environmental conditions online.
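A minimal sketch of one generic ingredient of such probabilistic handling is an innovation-gated Kalman measurement update that silently rejects implausible measurements from a failed or re-appearing sensor; this is a textbook mechanism used purely for illustration, and the function and its parameters are hypothetical rather than part of the presented autonomy stack.

import numpy as np
from scipy.stats import chi2

def gated_update(x, P, z, H, R, gate_prob=0.997):
    """Kalman measurement update with a chi-squared innovation gate.

    Sketch: a measurement whose normalized innovation is implausibly large
    (e.g., from a failed or re-appearing sensor) is skipped, so the estimate
    is simply left untouched instead of being corrupted."""
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R                           # innovation covariance
    d2 = float(y @ np.linalg.solve(S, y))         # squared Mahalanobis distance
    if d2 > chi2.ppf(gate_prob, df=z.shape[0]):
        return x, P, False                        # reject, keep prior estimate
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new, True

Because a rejected measurement leaves the estimate untouched, dropouts and re-appearances need no hand-crafted state machine; the gate handles them probabilistically.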
Naturally, such systems, like many multi-sensor systems, have unobservable modes or require specific states to be (re-)initialized upon specific events. We will discuss the idea of quickly (re-)calibrating a specific sensor through observability-aware motions and decision making. To better leverage the sensor information, we then extend our discussion to a different formulation of estimators, the equivariant filter, showing the benefit of transforming the estimation problem onto a Lie group symmetry rather than keeping it in the original state space. The properties of this new formulation include linear error dynamics and, consequently, much faster and better state convergence.
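As a toy numerical illustration of why moving the error onto a Lie group is attractive (a self-contained sketch, not the speaker's equivariant filter): for attitude kinematics driven by a shared gyro signal, the group error between truth and estimate stays constant along the trajectory, while a naive parameter-space error does not; this state independence is what underlies the linear error dynamics mentioned above.

import numpy as np
from scipy.spatial.transform import Rotation as Rot

# Attitude kinematics R_dot = R * hat(omega); truth and estimate integrate
# the same gyro signal from different initial conditions.
R_true = Rot.from_rotvec([0.3, -0.1, 0.2])            # true attitude
R_est = Rot.from_rotvec([0.5, 0.1, 0.1])              # initial (wrong) estimate

group_err0 = (R_est * R_true.inv()).as_rotvec()       # error defined on the group
param_err0 = R_est.as_rotvec() - R_true.as_rotvec()   # naive vector-space error

rng = np.random.default_rng(0)
for _ in range(500):
    dR = Rot.from_rotvec(rng.normal(size=3) * 0.01)   # shared motion increment
    R_true, R_est = R_true * dR, R_est * dR

group_err = (R_est * R_true.inv()).as_rotvec()
param_err = R_est.as_rotvec() - R_true.as_rotvec()

print(np.allclose(group_err, group_err0))   # True: the group error ignores the trajectory
print(param_err0, param_err)                # the naive error drifts with the motion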
We close the discussion with an outlook on how such multi-sensor frameworks on single agents can be ported to multiple agents. With our approach, the elements of single-agent sensor fusion and collaborative state estimation can be seamlessly merged into a holistic swarm estimator.
Marc Streit. Using Embeddings for Exploring and Explaining High-Dimensional Data
Multivariate datasets are ubiquitous. The challenge of making high-dimensional data accessible to humans in two-dimensional space is typically addressed by dimensionality reduction (DR). To effectively explore such embeddings, users need to be able to relate visual patterns to the underlying structure and high-dimensional data. This is complicated by a general drawback of DR techniques: embedding the data in a space with reduced degrees of freedom naturally introduces distortions. In this talk, I will reflect on techniques that leverage the embedding space for interactive exploration, explanation, and storytelling purposes.
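As a small, generic illustration of such distortions (a scikit-learn sketch, not a tool from the talk): project a 64-dimensional dataset to 2D with two common DR techniques and quantify how faithfully local neighborhoods survive the projection.

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)          # 64-dimensional digit images

emb_pca = PCA(n_components=2).fit_transform(X)
emb_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

# Trustworthiness in [0, 1]: 1 means the 2D neighborhoods faithfully reflect
# the original high-dimensional neighborhoods; lower values signal distortion.
for name, emb in [("PCA", emb_pca), ("t-SNE", emb_tsne)]:
    print(name, trustworthiness(X, emb, n_neighbors=10))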
Laura Kovacs. Automated Reasoning for Trustworthy Software
We are living in a world that is increasingly run by software. Daily activities, such as online banking, mobile communications, and air traffic control, are controlled by software. This software is growing in size and functionality, but its reliability is hardly improving. We are getting used to the fact that computer systems are error-prone and insecure. To (re)gain the trust of end-users in software, formal automated reasoning is one of the main investments ICT companies make to prevent software errors.
In this talk I will present recent advances in automated reasoning, in particular computer-supported theorem proving, for generating and proving software properties that prevent programmers from introducing errors while making changes to the software. When testing programs that manipulate computer memory, our initial results show that our work is able to prove that over 80% of test cases are guaranteed to have the expected behavior. The work described in this talk, and its results, have been supported by an ERC Starting Grant 2014, an ERC Proof of Concept Grant 2018, an ERC Consolidator Grant 2020, and an Amazon Research Award 2020.
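As a small, generic taste of computer-supported proving about memory-manipulating code (using the Z3 SMT solver and its theory of arrays; this is a made-up toy verification condition, not the prover or benchmarks from the talk):

from z3 import Array, Ints, IntSort, Select, Store, Implies, prove

mem = Array("mem", IntSort(), IntSort())   # memory modelled as a map address -> value
i, j, x = Ints("i j x")

mem1 = Store(mem, i, x)                    # effect of the assignment "mem[i] = x"

# The written cell holds the new value ...
prove(Select(mem1, i) == x)
# ... and every other cell is untouched (the frame property).
prove(Implies(i != j, Select(mem1, j) == Select(mem, j)))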
Alexandra Ion. Interactive structures: creating (meta)materials that move, walk, compute
We propose unifying material and machine. We investigate and develop interactive computational design tools that enable digital fabrication of complex structures for novice users. Interactive structures embed functionality within their geometry such that they can react to simple input with complex behavior. Such structures enable materials that can, e.g., embed robotic movement, perform computations, or communicate with users. We focus on material discovery by broadening participation. We develop optimization-based interactive design tools that enable novices to contribute their creativity and experts to apply their intuition in order to foster the advancement of high-tech materials. We investigate the entire pipeline, i.e., the mechanical structures, the algorithms for efficient design, the fabrication methods, and unforeseen application areas.
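A deliberately simplified sketch of the "materials that compute" idea (a toy abstraction, not one of the presented designs): bistable cells propagate a mechanical pulse along a chain, and a junction cell behaves like an AND gate.

def propagate(chain, pushed):
    """Flip bistable cells one after another once the chain's input is pushed."""
    if pushed:
        for k in range(len(chain)):
            chain[k] = 1            # each flipped cell pushes its neighbour
    return chain

def and_junction(chain_a, chain_b):
    """A junction cell flips only when both arriving cells have flipped."""
    return int(chain_a[-1] and chain_b[-1])

a = propagate([0, 0, 0], pushed=True)
b = propagate([0, 0, 0], pushed=False)
print(and_junction(a, b))   # 0: only one input pulse has arrived
b = propagate(b, pushed=True)
print(and_junction(a, b))   # 1: both pulses reached the junction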
Eleftherios Kokoris-Kogias. Building Practical and Modular Consensus
David Lindlbauer. Automatic Adaptation of Mixed Reality User Interfaces
Mixed Reality (MR) has the potential to transform the way we interact with digital information, and promises a rich set of applications, ranging from manufacturing and architecture to interaction with smart devices. Current MR approaches, however, are static: users need to manually adjust the visibility, placement, and appearance of their user interface every time they change their task or environment. This is distracting and leads to information overload. To overcome these challenges, we aim to understand and predict how users perceive and interact with digital information, and to use this knowledge in context-aware MR systems that automatically adapt when, where, and how to display virtual elements. We create computational approaches that leverage aspects such as users’ cognitive load or the semantic connection between the virtual elements and the surrounding physical objects. Our systems increase the applicability of MR, with the goal of seamlessly blending the virtual and the physical world.
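A minimal sketch of the adaptation idea as a utility optimization (hypothetical element names, utilities, and costs; not the presented system): choose which virtual elements to display so that usefulness for the current task is maximized while the estimated cognitive load stays under a budget.

from itertools import combinations

# Each hypothetical element: (name, utility for the current task, cognitive cost).
elements = [("navigation", 0.9, 0.5), ("chat", 0.3, 0.4),
            ("music", 0.2, 0.3), ("machine-status", 0.8, 0.6)]

def adapt(elements, load_budget):
    """Pick the subset of virtual elements with maximal utility whose
    estimated cognitive load stays within the given budget."""
    best, best_utility = (), 0.0
    for r in range(len(elements) + 1):
        for subset in combinations(elements, r):
            cost = sum(e[2] for e in subset)
            utility = sum(e[1] for e in subset)
            if cost <= load_budget and utility > best_utility:
                best, best_utility = subset, utility
    return [e[0] for e in best]

print(adapt(elements, load_budget=0.8))   # busy user, low budget: show less
print(adapt(elements, load_budget=1.6))   # idle user, high budget: richer interface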