Program

7:30-9:00 Breakfast
9:00-9:05 Welcome
9:05-9:50 Encode, Shuffle, Analyze Revisited: Strong Privacy despite a High-Epsilon
Abhradeep Guha Thakurta (Invited Speaker)
9:50-10:05 DPella: A Programming Framework for Differential Privacy with Accuracy
by Elisabet Lobo Vesga, Alejandro Russo, Marco Gaboardi and Gilles Barthe.
10:05-10:45 Coffee Break
10:45-12:00 Private Convex Optimization with Optimal Rates
by Abhradeep Thakurta, Raef Bassily, Kunal Talwar and Vitaly Feldman.

Lessons learned from the NIST DP Synthetic Data Competition
by Ryan McKenna and Gerome Miklau.

Full convergence of the iterative Bayesian update and applications to local differential privacy
by Ehab Elsalamouny and Catuscia Palamidessi.

Differentially Private Real Summation with Single- and Multi-Message Shuffling (this talk is based on two papers)
by Adria Gascon, Borja Balle, Kobbi Nissim and James Bell.

Differentially Private Release of Synthetic Graphs
by Marek Elias, Michael Kapralov, Janardhan Kulkarni and Yin Tat Lee.
12:00-2:00 Lunch Break
2:00-3:00 Private hypothesis testing via robustness
Audra McMillan (Invited Speaker)

Private Hypothesis Selection
by Mark Bun, Gautam Kamath, Thomas Steinke and Zhiwei Steven Wu.
3:00-3:45 Coffee Break
3:45-4:45 The search for anonymous data: attacks against privacy-preserving methods and systems
Yves-Alexandre de Montjoye (Invited Speaker)

Generalizable Deep Learning with Differential Privacy: Using Gradient Compression and De-noising
by Milad Nasr, Reza Shokri and Amir Houmansadr.
4:45-6:00 Poster Session

Posters

Context

Differential privacy is a promising approach to privacy-preserving data analysis. It provides strong worst-case guarantees about the harm a user could suffer from participating in a differentially private data analysis, yet it is flexible enough to allow a wide variety of analyses to be performed with high utility. After more than a decade of intense scientific study, it has now been deployed in products at government agencies such as the U.S. Census Bureau and at companies such as Apple and Google.
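For intuition, a minimal illustration (not drawn from any of the talks above) is the standard Laplace mechanism: to release a count, add noise calibrated to the query's sensitivity, which bounds how much any single user's participation can shift the result. The short Python sketch below assumes a counting query with sensitivity 1 and a user-chosen epsilon.

    import numpy as np

    def laplace_count(data, predicate, epsilon):
        # Adding or removing one record changes the true count by at most 1
        # (sensitivity 1), so Laplace noise with scale 1/epsilon gives
        # epsilon-differential privacy for this release.
        true_count = sum(1 for x in data if predicate(x))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: count records at or above a threshold with epsilon = 0.5.
    ages = [23, 35, 41, 29, 62, 57]
    print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))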

Researchers in differential privacy span many distinct research communities, including algorithms, computer security, cryptography, databases, data mining, machine learning, statistics, programming languages, social sciences, and law. This workshop will bring researchers from these communities together to discuss recent developments in both the theory and practice of differential privacy.

Specific topics of interest for the workshop include (but are not limited to):

Submission

The goal of TPDP is to stimulate discussion on the relevance of differentially private data analyses in practice. For this reason, we seek contributions from different research areas of computer science and statistics.

Authors are invited to submit a short abstract (4 pages maximum) of their work.

Submissions will undergo a lightweight review process and will be judged on originality, relevance, interest, and clarity. Submissions should describe novel work, or work that has already appeared elsewhere but that can stimulate discussion between the different communities at the workshop. Accepted abstracts will be presented at the workshop either as a talk or a poster.

The workshop will not have formal proceedings and is not intended to preclude later publication at another venue.

Authors of selected papers from the workshop will be invited to submit a full version of their work for publication in a special issue of the Journal of Privacy and Confidentiality.

Call for Papers: pdf

Invited Speakers

Important Dates

Abstract Submission
June 21 (anywhere on Earth)
Notification
August 9
Workshop
November 11

Organizing and Program Committee

Submission website


EasyChair TPDP 2019