Speaker series

Spring 2021

An End-to-End Security and Privacy Framework for Big Data and Machine Learning

Abstract

Recent cyberattacks have shown that the leakage or theft of big data can result in enormous monetary losses, damage to organizational reputation, and increased identity-theft risks for individuals. Furthermore, in the age of big data, protecting the security and privacy of stored data is paramount for maintaining public trust and accountability and for getting the full value from the collected data. Therefore, we need to address security and privacy challenges ranging from allowing access to big data to building novel data analytics models using privacy-sensitive data. In this talk, we provide an overview of our end-to-end solution framework that addresses these challenges.

We start the talk by discussing the unique security and privacy challenges that arise from big data and the recent systems designed to analyze it. We then discuss how to add an additional security layer that protects big data using encryption techniques. In particular, we discuss our work on leveraging modern hardware-based trusted execution environments, such as Intel SGX, for secure processing of encrypted data. We focus on how to provide a simple, secure, high-level, language-based framework that enables generic data analytics for non-security experts.

We also discuss our work on addressing the security and privacy issues of the resulting data analytics and machine learning (ML) models. First, we discuss how these learned ML models can be attacked and how a game-theoretic solution concept can be used to learn more robust ML models that resist various attacks. In addition, we discuss how to build more robust models for federated learning systems. Finally, we discuss why the perceived fragility of ML models against certain attacks can be useful for enhancing individual privacy, showing how to appear smarter to an ML model by modifying your social media profile.

Bio

Dr. Murat Kantarcioglu is a Professor in the Computer Science Department and Director of the Data Security and Privacy Lab at The University of Texas at Dallas (UTD). He received his PhD in Computer Science from Purdue University in 2005, where he received the Purdue CERIAS Diamond Award for academic excellence. He is also a visiting scholar at the Harvard Data Privacy Lab. Dr. Kantarcioglu's research focuses on the integration of cyber security, data science, and blockchains to create technologies that can efficiently and securely process and share data.

His research has been supported by grants from the NSF, AFOSR, ARO, ONR, NSA, and NIH, among others. He has published over 170 peer-reviewed papers in top-tier venues such as ACM KDD, SIGMOD, ICDM, ICDE, PVLDB, NDSS, USENIX Security, and several IEEE/ACM Transactions, and has served as program co-chair for conferences such as IEEE ICDE, ACM SACMAT, IEEE Cloud, and ACM CODASPY. Some of his research has been covered by media outlets such as the Boston Globe, ABC News, PBS/KERA, and DFW Television, and has received multiple best paper awards. He is the recipient of various awards, including the NSF CAREER Award, the AMIA (American Medical Informatics Association) 2014 Homer R. Warner Award, and the IEEE ISI (Intelligence and Security Informatics) 2017 Technical Achievement Award, presented jointly by the IEEE SMC and IEEE ITS societies, for his research in data security and privacy. He is also a Fellow of AAAS and a Distinguished Scientist of the ACM.

March 22, 2021

Teresa Scantamburlo, Ca’ Foscari University of Venice, Italy

The road toward trustworthy AI

Abstract

One of the latest and most relevant trends in Artificial Intelligence (AI) research and industry is the proliferation of ethical principles and guidelines for the design and assessment of AI systems. An example of these efforts is the European Ethics Guidelines for Trustworthy AI, delivered in 2019 by a group of experts under the mandate of the European Commission. In this presentation, I will outline the key requirements proposed by these guidelines and discuss some challenges underlying their implementation, such as the development of meaningful interdisciplinary collaborations.

Bio

Teresa is a post-doc at the European Centre for Living Technology (ECLT), Ca’ Foscari University (Italy), working on the AI4EU project. Previously, she worked on the ThinkBIG project at the University of Bristol (UK). Her research interests lie at the intersection of philosophy and artificial intelligence. She is currently interested in the social and ethical impacts of AI, in particular on human decision-making and social regulation.

Teresa received her PhD in Computer Science from Ca’ Foscari University (Venice, Italy) under the supervision of Professor Marcello Pelillo. Her PhD thesis explored the philosophical foundations of machine learning and pattern recognition. Teresa is the co-editor of the forthcoming MIT Press book "Machines We Trust".

The human side of data science

Abstract

"The data is the data" relieves us from considering where most of our data comes from: people. This phrase abstracts away the complexities of how data are collected, and the biases in the structures generating those data. Instead, the focus of data science education is often placed on the technical data science pipeline and its successes: data are ingested and cleaned, and then modeled and visualized for prediction and decision-making. These data science efforts intersect with so many parts of our lives—both directly and indirectly. Some of these points of intersection are more obvious: when we shop online, stream a TV show or movie, or look up directions in an app. Some are less obvious: advertising and marketing, epidemiology, climate change, and health. Because these data come from (and are about) people—people with plans, hopes, fears, and concerns—it’s critical for compassion, ethics, and social education to be a core component of the data science pipeline. In this talk I explore the foundations of data science, how it's being leveraged in my research field of neuroscience, and how we approach undergraduate data science education at UC San Diego.

Bio

Bradley Voytek is an Associate Professor in the Department of Cognitive Science, the Halıcıoğlu Data Science Institute, and the Neurosciences Graduate Program at UC San Diego. He is an Alfred P. Sloan Neuroscience Research Fellow and a Kavli Fellow of the National Academies of Sciences, as well as a founding faculty member of the UC San Diego Halıcıoğlu Data Science Institute and the Undergraduate Data Science program, where he serves as Vice-Chair. After his PhD at UC Berkeley, he joined Uber as its first data scientist, when it was a 10-person startup, and helped build its data science strategy and team. His neuroscience research lab combines large-scale data science and machine learning to study how brain regions communicate with one another and how that communication changes with development, aging, and disease. He is an advocate for promoting science to the public and speaks extensively with students at all grade levels about the joys of scientific research and discovery. In addition to his academic publications, his outreach work has appeared in outlets ranging from Scientific American and NPR to San Diego Comic-Con. His most important contribution to science, though, is his book with fellow neuroscientist Tim Verstynen, "Do Zombies Dream of Undead Sheep?", published by Princeton University Press.

May 24, 2021

Olga Isupova, University of Bath

Conservation of elephants from AI on satellite images

Abstract

Coming soon.

Bio

Coming soon. Her work has been featured in BBC News.