Inquest
Leonardo DRS
Modern Technology Solutions, Inc.
University of Dayton Department of Computer Science
Gluware
Our LinkedIn Group: OISF at LinkedIn
The in-person portion will be hosted at the MTSI office in Beavercreek
(4141 Colonel Glenn Hwy #202, Beavercreek, OH 45431).
The monthly meeting will be held both in-person and online via Google Meet.
Pre-registration via Eventbrite is encouraged to help us plan for food and drink and to stay within capacity limits.
(You're still welcome to register at the door.)
When you register for the meeting, you will be asked whether you would like a CPE certificate to support certification requirements.
6:30pm (In-Person):
Doors open; food and drinks served.
6:50pm (Online via Google Meet):
Online portion of the meeting opens for participants to join.
7:00pm (Both):
A brief overview of the Ohio Information Security Forum.
Differential privacy (DP) is considered the de facto standard for protecting users' privacy in data analysis, machine learning, and deep learning. Existing DP-based privacy-preserving approaches in federated learning add noise to the clients' gradients before sharing them with the server. However, applying DP to the gradients is inefficient, because the privacy leakage grows with the number of synchronization training epochs as a consequence of the composition theorem.
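For context, the leakage growth mentioned above follows from basic sequential composition: if each synchronization round of noisy gradient sharing is (ε, δ)-differentially private on its own, then T rounds together are only guaranteed to be (Tε, Tδ)-differentially private, so the effective privacy budget degrades linearly in the number of training epochs (advanced composition tightens the constants but does not remove the growth). In symbols:

\[
\mathcal{M}_1, \dots, \mathcal{M}_T \ \text{each } (\varepsilon, \delta)\text{-DP}
\;\Longrightarrow\;
(\mathcal{M}_1, \dots, \mathcal{M}_T) \ \text{is } (T\varepsilon,\ T\delta)\text{-DP}.
\]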
Recently, researchers were able to recover images from the training dataset using a Generative Regression Neural Network (GRNN). In this work, we propose a novel approach that uses two layers of privacy protection to overcome the limitations of existing DP-based methods. The first layer leverages Hensel's Lemma to reduce the dimension of the training dataset. This new dimensionality reduction method compresses a dataset without losing information, since Hensel's Lemma guarantees uniqueness. The second layer applies DP to the compressed dataset produced by the first layer.
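For readers who do not have the number-theoretic statement at hand, the uniqueness claim refers to the classical form of Hensel's Lemma; how the talk maps dataset entries into this setting is not spelled out in the abstract:

\[
f \in \mathbb{Z}[x],\quad f(a) \equiv 0 \pmod{p},\quad f'(a) \not\equiv 0 \pmod{p}
\;\Longrightarrow\;
\text{for every } k \ge 1 \text{ there is a unique } a_k \bmod p^k \text{ with } f(a_k) \equiv 0 \pmod{p^k} \text{ and } a_k \equiv a \pmod{p}.
\]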
The proposed approach overcomes the problem of privacy leakage due to composition by applying DP only once, before training: clients train their local models on the privacy-preserving dataset generated by the second layer. Experimental results show that the proposed approach ensures strong privacy protection while achieving high accuracy. In particular, the new dimensionality reduction method achieves 97% accuracy using only 25% of the original dataset size.
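To make the one-shot idea concrete, here is a minimal Python sketch of the two-layer pipeline under stated assumptions: the compression step uses a plain random projection as a stand-in for the talk's Hensel's-Lemma-based reduction (which is not reproduced here), the sensitivity value is purely illustrative, and all names are hypothetical rather than taken from the authors' code.

import numpy as np

def laplace_mechanism(data, sensitivity, epsilon, rng):
    # Add Laplace noise with scale = sensitivity / epsilon to every entry.
    scale = sensitivity / epsilon
    return data + rng.laplace(loc=0.0, scale=scale, size=data.shape)

rng = np.random.default_rng(0)

# Toy stand-in for one client's local dataset (rows = samples, columns = features).
X = rng.random((100, 64))

# Layer 1 (placeholder): compress the dataset once before training.
# A random projection stands in for the Hensel's-Lemma-based reduction described in the talk.
k = 16
projection = rng.standard_normal((64, k)) / np.sqrt(k)
X_compressed = X @ projection

# Layer 2: apply the DP mechanism exactly once to the compressed data.
# The sensitivity here is illustrative; a real deployment would bound it from the data's actual range.
X_private = laplace_mechanism(X_compressed, sensitivity=1.0, epsilon=1.0, rng=rng)

# Clients then train their local models on X_private for any number of epochs
# without adding further per-round noise, so the privacy budget is spent only once.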