Details of Each Step

Training local models

Local training starts by downloading the latest global model. Initialized with the global model, each participant's model is then trained independently on that participant's own data.
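A minimal sketch of this step is shown below. The names (`train_local_model`, the SGD settings, the data loader) are illustrative assumptions, not part of Solai's actual codebase:

```python
import copy

import torch
from torch.utils.data import DataLoader


def train_local_model(global_model: torch.nn.Module,
                      local_data: DataLoader,
                      epochs: int = 1,
                      lr: float = 0.01) -> torch.nn.Module:
    """Initialize from the latest global model and train only on local data."""
    local_model = copy.deepcopy(global_model)          # start from the global weights
    optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    local_model.train()
    for _ in range(epochs):
        for inputs, labels in local_data:
            optimizer.zero_grad()
            loss = loss_fn(local_model(inputs), labels)
            loss.backward()
            optimizer.step()
    return local_model
```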

Uploading local models

After training, local models are uploaded and made public for evaluation. Three kinds of malicious action are possible here:

  1. Uploading an untrained random model

  2. Uploading an intentionally badly trained model

  3. Copying and slightly modifying another participant's model and submitting it as one's own (i.e., stealing)

Actions 1 and 2 can be mitigated by correct peer evaluation, which is a regular process in Solai. To mitigate action 3, the evaluation score considers not only a model's performance but also how much it differs from the other submitted local models.
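A rough sketch of such a score follows. The weighting factor `alpha` and the L2 parameter distance are assumptions for illustration; Solai's actual scoring formula may differ. The point is that a copied model sits very close to an existing submission, so its novelty term, and therefore its score, stays low:

```python
from typing import List

import torch


def parameter_distance(model_a: torch.nn.Module, model_b: torch.nn.Module) -> float:
    """L2 distance between the flattened parameter vectors of two models."""
    vec_a = torch.cat([p.detach().flatten() for p in model_a.parameters()])
    vec_b = torch.cat([p.detach().flatten() for p in model_b.parameters()])
    return torch.norm(vec_a - vec_b).item()


def evaluation_score(candidate: torch.nn.Module,
                     existing: List[torch.nn.Module],
                     accuracy: float,
                     alpha: float = 0.5) -> float:
    """Combine test accuracy with how far the candidate is from prior submissions."""
    if not existing:
        return accuracy
    min_dist = min(parameter_distance(candidate, m) for m in existing)
    novelty = min_dist / (1.0 + min_dist)              # squash the distance into [0, 1)
    return (1 - alpha) * accuracy + alpha * novelty
```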

Peer Evaluation of local models

Submitted local models are evaluated in Solai through peer review. Here, preventing false evaluations is important. A false evaluation can be made in the following ways:

  1. Submitting an evaluation score without actually testing the model on proper data

  2. Submitting evaluation scores multiple times in order to manipulate the score

Both can be handled by allowing multiple branches of the global model. If a trainer is not satisfied with the aggregation result of the local models, they can create a blacklist and rule that result out.
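As a sketch of how a blacklist could be applied before re-aggregating on a separate branch (the text above does not specify whether the blacklist targets evaluators or models; this example assumes evaluators, and all names are illustrative):

```python
import statistics
from typing import Dict, List, Set


def filter_evaluations(evaluations: Dict[str, List[float]],
                       blacklist: Set[str]) -> Dict[str, List[float]]:
    """Drop every score submitted by a blacklisted evaluator.

    `evaluations` maps an evaluator id to its list of scores (one per local model).
    """
    return {evaluator: scores
            for evaluator, scores in evaluations.items()
            if evaluator not in blacklist}


def consensus_scores(evaluations: Dict[str, List[float]]) -> List[float]:
    """Median score across the remaining evaluators, one value per local model."""
    per_model = zip(*evaluations.values())
    return [statistics.median(scores) for scores in per_model]
```

A dissatisfied trainer would rerun the aggregation on their own branch with `filter_evaluations` applied first.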

Aggregation Scheme

Aggregation is done by taking a weighted average of the local models, using their evaluation scores as weights.
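A minimal sketch of this weighted averaging, assuming non-negative scores that are simply normalized to sum to one (the exact normalization used by Solai is not specified here):

```python
import copy
from typing import List

import torch


def aggregate(models: List[torch.nn.Module], scores: List[float]) -> torch.nn.Module:
    """Weighted average of model parameters, using evaluation scores as weights."""
    total = sum(scores)
    weights = [s / total for s in scores]              # normalize the scores to sum to 1

    global_model = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            param.copy_(sum(w * dict(m.named_parameters())[name].detach()
                            for w, m in zip(weights, models)))
    return global_model
```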

Safe Submission of Local Model

Participants may be afraid of model decoding, i.e., attacks that back-track from a submitted model to the training data behind it. Solai suggests that participants add Gaussian noise to their models, making use of Differential Privacy, a privacy technique that hides the local model behind noise and prevents such back-tracking. Even though the submitted models contain noise, Solai can still evaluate each model by calculating its Shapley value.
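The sketch below adds clipped Gaussian noise to a model's parameters before submission. The clipping bound and noise scale are placeholders, not Solai's exact mechanism; in practice they would be calibrated to a target (epsilon, delta) privacy budget:

```python
import torch


def privatize(model: torch.nn.Module,
              clip_norm: float = 1.0,
              sigma: float = 0.1) -> torch.nn.Module:
    """Clip the overall parameter norm, then add Gaussian noise in place."""
    with torch.no_grad():
        # Bound the parameter norm so the noise scale has a well-defined meaning.
        total_norm = torch.norm(torch.cat([p.flatten() for p in model.parameters()]))
        scale = min(1.0, clip_norm / (total_norm.item() + 1e-12))
        for p in model.parameters():
            p.mul_(scale)
            p.add_(torch.randn_like(p) * sigma)       # Gaussian noise for differential privacy
    return model
```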
