@TOC
1: Setting up the machine environment for PaddlePaddle
1: This series uses an Alibaba Cloud CentOS 7.3 server (Alibaba Cloud Elastic Compute Service)
2: Python environment: 3.7.2
3: PaddlePaddle 1.3
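If you later want to double-check these details from inside Python (for example when reporting a problem), a minimal sketch using only the standard library might look like this:
import platform
import sys

# Print basic machine and interpreter information.
print(platform.platform())   # OS / kernel string for the CentOS 7.3 host
print(sys.version)           # Python version, 3.7.2 in this setup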
1.1 Point the CentOS yum source at the Aliyun mirror
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
1.2 Preparing the Python environment
1.2.1 Install the dependencies needed to build Python
yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel
1.2.2 Install Python 3.7.2 from source
wget https://www.python.org/ftp/python/3.7.2/Python-3.7.2.tgz
tar xf Python-3.7.2.tgz
cd Python-3.7.2
./configure --prefix=/usr/local/python3
echo $?
make && make install
echo $?
ln -s /usr/local/python3/bin/python3.7 /usr/local/bin/python3.7
ln -s /usr/local/bin/python3.7 /usr/local/bin/python3
1.2.3 Verify the installation
Python 2 and Python 3 can coexist: Python 2.7 and Python 3.7 are installed on the system at the same time.
python -V
Python 2.7.5
python3.7 -V
Python 3.7.2
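To confirm from inside the interpreter which build you are actually running, a minimal sketch like the following works (the path in the comment is an assumption based on the install prefix used above):
import sys

# Confirm the interpreter version and the binary it was launched from.
print(sys.version)      # should report 3.7.2 when launched as python3
print(sys.executable)   # path reflects the install prefix or the symlink used, e.g. under /usr/local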
1.2.4 Set up pip3.7
ln -s /usr/local/python3/bin/pip3.7 /usr/local/bin/pip3.7
ln -s /usr/local/bin/pip3.7 /usr/local/bin/pip3
2: Installing PaddlePaddle
For this series we only install the CPU version for now. Note that pip3 install paddlepaddle pulls the latest release; pin the version explicitly if you need to match PaddlePaddle 1.3.
pip3 install paddlepaddle
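To double-check which release pip actually installed, a quick sketch like this should work (the paddle.__version__ attribute is assumed here; some builds expose the version string under the paddle.version module instead):
import paddle

# Print the version string of the installed PaddlePaddle package.
print(paddle.__version__)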
3: Testing PaddlePaddle
To check whether the installation succeeded, enter the Python 3 environment and run the code below; if no error appears, the installation works.
3.1 Testing in the terminal
import paddle.fluid
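Beyond a bare import, a slightly stronger smoke test is to build and run a trivial fluid program on the CPU; the following is only a sketch of that idea:
import paddle.fluid as fluid

# Build a trivial program: a constant 2x3 tensor, then fetch it through an executor.
data = fluid.layers.fill_constant(shape=[2, 3], dtype='float32', value=1.0)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
result, = exe.run(fluid.default_main_program(), fetch_list=[data])
print(result)  # a 2x3 array of 1.0 values means fluid is working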
3.2 Testing with a script
Create a Python file named test.py, then write and run the test code below. It does not matter if you cannot follow it yet; as this series goes on we will get familiar with PaddlePaddle:
3.2.1 Create a Python file, e.g. test.py
# Include libraries.
import paddle
import paddle.fluid as fluid
import numpy
import six

# Configure the neural network.
def net(x, y):
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_cost = fluid.layers.mean(cost)
    return y_predict, avg_cost

# Define train function.
def train(save_dirname):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    y_predict, avg_cost = net(x, y)
    sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
    sgd_optimizer.minimize(avg_cost)
    train_reader = paddle.batch(
        paddle.reader.shuffle(paddle.dataset.uci_housing.train(), buf_size=500),
        batch_size=20)
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)

    def train_loop(main_program):
        feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
        exe.run(fluid.default_startup_program())

        PASS_NUM = 1000
        for pass_id in range(PASS_NUM):
            total_loss_pass = 0
            for data in train_reader():
                avg_loss_value, = exe.run(
                    main_program, feed=feeder.feed(data), fetch_list=[avg_cost])
                total_loss_pass += avg_loss_value
                if avg_loss_value < 5.0:
                    if save_dirname is not None:
                        fluid.io.save_inference_model(
                            save_dirname, ['x'], [y_predict], exe)
                    return
            print("Pass %d, total avg cost = %f" % (pass_id, total_loss_pass))

    train_loop(fluid.default_main_program())

# Infer by using provided test data.
def infer(save_dirname=None):
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    inference_scope = fluid.core.Scope()
    with fluid.scope_guard(inference_scope):
        [inference_program, feed_target_names, fetch_targets] = (
            fluid.io.load_inference_model(save_dirname, exe))
        test_reader = paddle.batch(paddle.dataset.uci_housing.test(), batch_size=20)
        test_data = six.next(test_reader())
        test_feat = numpy.array(list(map(lambda x: x[0], test_data))).astype("float32")
        test_label = numpy.array(list(map(lambda x: x[1], test_data))).astype("float32")
        results = exe.run(inference_program,
                          feed={feed_target_names[0]: numpy.array(test_feat)},
                          fetch_list=fetch_targets)
        print("infer results: ", results[0])
        print("ground truth: ", test_label)

# Run train and infer.
if __name__ == "__main__":
    save_dirname = "fit_a_line.inference.model"
    train(save_dirname)
    infer(save_dirname)
3.2.2 Run test.py
The results are as follows:
- Run output: the average cost is printed for each pass until training stops early.
- Two directories are generated; one of them is the saved inference model directory, fit_a_line.inference.model.
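If you prefer to inspect the output from Python rather than a file browser, a minimal sketch like this lists the saved inference model directory (the name comes from save_dirname in test.py):
import os

# Show the files written by fluid.io.save_inference_model.
model_dir = "fit_a_line.inference.model"
for name in sorted(os.listdir(model_dir)):
    print(name)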
That is all for this installment.