Federated Learning (Tensorflow Federated)

Author: 小胖子善轩 | Published 2020-04-22 22:40

    Writing the code is what matters...
    Github:https://github.com/shanxuanchen/FLStudy

    Installation

    pip install --upgrade tensorflow_federated
    
    // result
    Successfully installed absl-py-0.9.0 attrs-19.3.0 ....
    

    This step takes a very long time; even on fiber routed through a Hong Kong VPN, the download took me over two hours.
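
    Once the install finishes, a quick sanity check is to import the package and print its version (a small sketch; the exact version on your machine will differ):

    import tensorflow_federated as tff

    # Print the installed TFF version to confirm the package imports cleanly.
    print(tff.__version__)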

    Hello World

    import collections
    
    import numpy as np
    import tensorflow as tf
    import tensorflow_federated as tff
    
    tf.compat.v1.enable_v2_behavior()
    
    np.random.seed(0)
    
    tff.federated_computation(lambda: 'Hello, World!')()
    
    // result
    
    b'Hello, World!'
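
    TFF wraps the lambda into a typed federated computation. As a quick sketch using the same API, you can also inspect the computation's type signature before invoking it:

    # A federated computation carries an explicit TFF type signature;
    # for this zero-argument lambda it prints "( -> string)".
    hello = tff.federated_computation(lambda: 'Hello, World!')
    print(hello.type_signature)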
    

    Preparing the input data

    First we need to prepare the training data. In federated learning, training data comes from many users, and each user's data is allowed to be non-IID. Conveniently, the tensorflow_federated package ships a ready-made federated version of MNIST. Unlike the original dataset, this one has been re-partitioned so that each client corresponds to one writer (https://arxiv.org/pdf/1812.01097.pdf), turning the originally IID data into non-IID data that simulates real-world data silos with differing distributions.

    emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
    
    print(emnist_train)
    
    # 3383 clients in the federated EMNIST training split, one per writer
    print(len(emnist_train.client_ids))
    print(emnist_train.client_ids)
    
    # Build a tf.data.Dataset holding the examples of a single client.
    example_dataset = emnist_train.create_tf_dataset_for_client(
        emnist_train.client_ids[0])
    
    example_element = next(iter(example_dataset))
    
    print(example_element['label'].numpy())
    
    from matplotlib import pyplot as plt
    plt.imshow(example_element['pixels'].numpy(), cmap='gray', aspect='equal')
    plt.grid(False)
    
    _ = plt.show()
    
    

    Below is a rendering of one sample from this dataset:


    [Figure: a sample handwritten digit from the first client (6.png)]
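
    Since the whole point is that the split is non-IID, a quick way to see it (a sketch that uses only the APIs already shown) is to compare the label histograms of a couple of clients:

    # Sketch: per-client label histograms make the non-IID split visible.
    for client_id in emnist_train.client_ids[:2]:
      client_ds = emnist_train.create_tf_dataset_for_client(client_id)
      label_counts = collections.Counter(
          int(element['label'].numpy()) for element in client_ds)
      print(client_id, dict(sorted(label_counts.items())))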

    Emmm, since we need to flatten each image's pixels into a single row and split the data into batches, we still have to preprocess it.

    NUM_CLIENTS = 10
    NUM_EPOCHS = 5
    BATCH_SIZE = 20
    SHUFFLE_BUFFER = 100
    PREFETCH_BUFFER=10
    
    def preprocess(dataset):
    
      def batch_format_fn(element):
        """Flatten a batch of `pixels` and return the features as an `OrderedDict`."""
        return collections.OrderedDict(
            x=tf.reshape(element['pixels'], [-1, 784]),  # 28 * 28 = 784
            y=tf.reshape(element['label'], [-1, 1]))
    
      return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
          BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
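
    One catch: the model definition further down refers to preprocessed_example_dataset, which this post never defines. Following the official TFF image-classification tutorial, it is simply the example dataset run through preprocess, and its element_spec is what model_fn needs as its input_spec:

    # Needed later by model_fn's input_spec (per the official TFF tutorial).
    preprocessed_example_dataset = preprocess(example_dataset)
    print(preprocessed_example_dataset.element_spec)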
    

    Prepare Client Data

    Next, we prepare the per-client datasets that each node will train on.

    def make_federated_data(client_data, client_ids):
      return [
          preprocess(client_data.create_tf_dataset_for_client(x))
          for x in client_ids
      ]
    
    
    sample_clients = emnist_train.client_ids[0:NUM_CLIENTS]
    
    federated_train_data = make_federated_data(emnist_train, sample_clients)
    
    print('Number of client datasets: {l}'.format(l=len(federated_train_data)))
    print('First dataset: {d}'.format(d=federated_train_data[0]))
    

    Creating a model with Keras

    Unlike conventional training on IID data, federated learning needs two optimizers: a client optimizer and a server optimizer, each with its own learning rate, set according to the situation. The client optimizer takes gradient steps on each client's local data, so its learning rate is usually small; the server optimizer only applies the aggregated update to the global model, so its learning rate is usually 1.0.
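
    The model_fn below calls create_keras_model, which the post doesn't show. A minimal definition, following the official TFF tutorial (a single dense layer plus softmax over the 784 flattened pixels), looks like this:

    def create_keras_model():
      # One dense layer over the 784 flattened pixels, 10 digit classes.
      return tf.keras.models.Sequential([
          tf.keras.layers.InputLayer(input_shape=(784,)),
          tf.keras.layers.Dense(10, kernel_initializer='zeros'),
          tf.keras.layers.Softmax(),
      ])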

    def model_fn():
      # We _must_ create a new model here, and _not_ capture it from an external
      # scope. TFF will call this within different graph contexts.
      keras_model = create_keras_model()
      return tff.learning.from_keras_model(
          keras_model,
          input_spec=preprocessed_example_dataset.element_spec,
          loss=tf.keras.losses.SparseCategoricalCrossentropy(),
          metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
    
    
    iterative_process = tff.learning.build_federated_averaging_process(
        model_fn,
        client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
        server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
    
    

    Training the model on federated data

    print(str(iterative_process.initialize.type_signature))
    state = iterative_process.initialize()
    
    state, metrics = iterative_process.next(state, federated_train_data)
    print('round  1, metrics={}'.format(metrics))
    
    

    To help the model converge quickly, each node trains repeatedly on the same set of data across rounds.

    NUM_ROUNDS = 11
    for round_num in range(2, NUM_ROUNDS):
      state, metrics = iterative_process.next(state, federated_train_data)
      print('round {:2d}, metrics={}'.format(round_num, metrics))
    
    // result
    
    If using Keras pass *_constraint arguments to layers.
    round  1, metrics=<sparse_categorical_accuracy=0.11419752985239029,loss=3.1054441928863525,keras_training_time_client_sum_sec=0.0>
    round  2, metrics=<sparse_categorical_accuracy=0.13600823283195496,loss=2.933013439178467,keras_training_time_client_sum_sec=0.0>
    round  3, metrics=<sparse_categorical_accuracy=0.15164609253406525,loss=2.8726162910461426,keras_training_time_client_sum_sec=0.0>
    round  4, metrics=<sparse_categorical_accuracy=0.17942386865615845,loss=2.699212074279785,keras_training_time_client_sum_sec=0.0>
    round  5, metrics=<sparse_categorical_accuracy=0.2043209820985794,loss=2.5611214637756348,keras_training_time_client_sum_sec=0.0>
    round  6, metrics=<sparse_categorical_accuracy=0.20617283880710602,loss=2.5576889514923096,keras_training_time_client_sum_sec=0.0>
    round  7, metrics=<sparse_categorical_accuracy=0.24156378209590912,loss=2.408731698989868,keras_training_time_client_sum_sec=0.0>
    round  8, metrics=<sparse_categorical_accuracy=0.2781893014907837,loss=2.230600357055664,keras_training_time_client_sum_sec=0.0>
    round  9, metrics=<sparse_categorical_accuracy=0.3288065791130066,loss=2.0912210941314697,keras_training_time_client_sum_sec=0.0>
    round 10, metrics=<sparse_categorical_accuracy=0.33209875226020813,loss=1.9757834672927856,keras_training_time_client_sum_sec=0.0>
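
    These are training metrics, i.e. fit to the sampled training clients. TFF also provides a federated evaluation; here is a short sketch (reusing model_fn, make_federated_data, and the emnist_test split loaded earlier) for checking the server model on test-side data:

    # Sketch: evaluate the current server model weights on test-side data.
    evaluation = tff.learning.build_federated_evaluation(model_fn)
    federated_test_data = make_federated_data(emnist_test, sample_clients)
    test_metrics = evaluation(state.model, federated_test_data)
    print(test_metrics)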
    
    

    Displaying model metrics in TensorBoard

    
    logdir = "/tmp/logs/scalars/training/"
    summary_writer = tf.summary.create_file_writer(logdir)
    state = iterative_process.initialize()
    
    with summary_writer.as_default():
      for round_num in range(1, NUM_ROUNDS):
        state, metrics = iterative_process.next(state, federated_train_data)
        for name, value in metrics._asdict().items():
          tf.summary.scalar(name, value, step=round_num)
    
    

    Launch TensorBoard against the log directory (tensorboard --logdir /tmp/logs/scalars/) to watch how the loss changes.

    [Figure: TensorBoard loss curve over training rounds]

    Conclusion

    The tensorflow_federated package supports custom models and parameters. The next experiment is to run federated learning with a custom model, which I hope to finish the day after tomorrow.
