Privacy in Machine Learning is Hard, But This Framework Can Really Help
PySyft is an open source framework designed to enable private machine learning workflows.
Trust is a key factor in the implementation of deep learning applications. From training to optimization, the lifecycle of a deep learning model depends on trusted data exchanges between different parties. That dynamic works well in a lab environment, but it is vulnerable to many kinds of security attacks that manipulate the trusted relationships between the participants in a model.

Consider the example of a credit scoring model that uses financial transactions to classify the credit risk of a specific customer. The traditional mechanisms for training or optimizing a model assume that the entities performing those actions have full access to those financial datasets, which opens the door to all sorts of privacy risks. As deep learning evolves, mechanisms that enforce privacy constraints throughout the lifecycle of the datasets and the model are becoming increasingly important. Among the technologies trying to address this monumental challenge, PySyft is a recent framework that has been steadily gaining traction within the deep learning community.