0. Introduction
The Jetson TX2 [1] is an AI supercomputer on a single module based on the NVIDIA Pascal™ architecture. It delivers strong performance (1 TFLOPS) in a small form factor with high power efficiency (7.5 W), which makes it well suited to intelligent edge devices such as robots, drones, smart cameras, and portable medical equipment.
Compared with the TX1, the Jetson TX2 doubles the memory and eMMC storage, upgrades the CUDA architecture to Pascal, and doubles performance per watt. It supports all features of the Jetson TX1 module and can run larger, deeper, and more complex deep neural networks.
The internal structure of the TX2 is shown below:
1. Unboxing
I will skip the details of the unboxing; here is a photo of the board after powering it on:
2. Flashing
The TX2 ships with Ubuntu 16.04 pre-installed and boots out of the box. Even so, it is usually worth reflashing in order to move to the latest JetPack L4T release, which automatically installs the latest driver, CUDA Toolkit, cuDNN, and TensorRT.
Keep the following points in mind when flashing:
The host machine must run Ubuntu 14.04 and have at least 15 GB of free disk space. Do not run JetPack-${VERSION}.run as root; I used JetPack-L4T-3.1-linux-x64.run.
The TX2 must be put into Recovery Mode; follow the steps in the quick-start guide that ships with the board.
Flashing takes roughly 1 to 2 hours and formats the eMMC, so be sure to back up your data first.
3. Running the Video Object Detection Demo
After the flash succeeds, reboot the TX2, connect a keyboard, mouse, and display, and the demo is ready to run.
nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/backend$ ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
    --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
    --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
    --trt-forcefp32 0 --trt-proc-interval 1 -fps 10
A screenshot from the video is shown below:
4. Running the TensorRT Benchmark
TensorRT [3] is NVIDIA's deep learning inference optimization library for NVIDIA GPUs: it takes a trained model and runs it through an optimizer to produce an inference engine.
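As a rough illustration of what building such an engine looks like in code, here is a minimal C++ sketch using the Caffe-parser workflow from the TensorRT 2.x-era headers shipped with JetPack. The file paths and the "prob" output blob name are illustrative placeholders, not values taken from this post.

// Minimal sketch: build a TensorRT engine from a Caffe model (TensorRT 2.x-era C++ API).
// Paths and the output blob name ("prob") are illustrative placeholders.
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <iostream>

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT requires an ILogger implementation.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse the Caffe deploy file and weights into the TensorRT network.
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobs = parser->parse(
        "GoogleNet_modified_oneClass_halfHD.prototxt",   // placeholder path
        "GoogleNet_modified_oneClass_halfHD.caffemodel", // placeholder path
        *network, DataType::kFLOAT);                     // parse weights as FP32
    network->markOutput(*blobs->find("prob"));           // assumed output blob name

    // Build the optimized inference engine.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20); // 16 MB of scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // A real application would now serialize the engine or create an
    // IExecutionContext from it to run inference.
    engine->destroy();
    network->destroy();
    parser->destroy();
    builder->destroy();
    return 0;
}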
With the TX2 set to MAXP (maximum performance) mode, running TensorRT-accelerated GoogLeNet and VGG16 yields the following throughput:
5. Features Not Supported by the TX2
INT8 is not supported
Others remain to be discovered
References
[1] Embedded Systems Developer Kits & Modules | NVIDIA Jetson | NVIDIA
[2] Download and Install JetPack L4T
[3] TensorRT
Appendix
deviceQuery
nvidia@tegra-ubuntu:~/work/TensorRT/tmp/usr/src/tensorrt$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ls
deviceQuery  deviceQuery.cpp  deviceQuery.o  Makefile  NsightEclipse.xml  readme.txt
nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X2"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    6.2
  Total amount of global memory:                 7851 MBytes (8232062976 bytes)
  ( 2) Multiprocessors, (128) CUDA Cores/MP:     256 CUDA Cores
  GPU Max Clock rate:                            1301 MHz (1.30 GHz)
  Memory Clock rate:                             1600 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size   (x,y,z):   (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = NVIDIA Tegra X2
Result = PASS
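The same properties can also be read programmatically through the CUDA runtime API. The following small sketch (not part of the original post) prints a few of the fields shown above:

// Query GPU properties via the CUDA runtime API, mirroring a few
// of the fields reported by the deviceQuery sample above.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Detected %d CUDA capable device(s)\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: \"%s\"\n", dev, prop.name);
        printf("  Compute capability:       %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:          %d\n", prop.multiProcessorCount);
        printf("  Global memory:            %.0f MBytes\n",
               prop.totalGlobalMem / (1024.0 * 1024.0));
        printf("  GPU max clock rate:       %.0f MHz\n", prop.clockRate * 1e-3);
        printf("  Memory bus width:         %d-bit\n", prop.memoryBusWidth);
        printf("  Integrated (shares DRAM): %s\n", prop.integrated ? "Yes" : "No");
    }
    return 0;
}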
Memory Bandwidth Test
nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/bandwidthTest$ ./bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: NVIDIA Tegra X2
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     20215.8

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     20182.2

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     35742.8

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
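For comparison, the pinned-memory host-to-device figure can be approximated with a few lines of CUDA runtime code. This is an illustrative sketch only: the 32 MiB transfer size matches the sample above, while the repetition count is an arbitrary choice.

// Rough host-to-device bandwidth measurement with pinned memory,
// similar in spirit to the bandwidthTest sample (not a replacement for it).
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 32u << 20;   // 32 MiB, same transfer size as above
    const int    reps  = 100;         // arbitrary repetition count

    void *hostBuf = nullptr, *devBuf = nullptr;
    cudaMallocHost(&hostBuf, bytes);  // page-locked (pinned) host memory
    cudaMalloc(&devBuf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < reps; ++i)
        cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    double mibps = (double)bytes * reps / (ms / 1e3) / (1 << 20);
    printf("Host to Device (pinned): %.1f MiB/s\n", mibps);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(devBuf);
    cudaFreeHost(hostBuf);
    return 0;
}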
GEMM Test
nvidia@tegra-ubuntu:/usr/local/cuda/samples/7_CUDALibraries/batchCUBLAS$ ./batchCUBLAS -m1024 -n1024 -k1024
batchCUBLAS Starting...

GPU Device 0: "NVIDIA Tegra X2" with compute capability 6.2

 ==== Running single kernels ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbf800000, -1) beta = (0x40000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.00372291 sec  GFLOPS=576.83
@@@@ sgemm test OK
Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0x0000000000000000, 0) beta = (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.10940003 sec  GFLOPS=19.6296
@@@@ dgemm test OK

 ==== Running N=10 without streams ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbf800000, -1) beta = (0x00000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03462315 sec  GFLOPS=620.245
@@@@ sgemm test OK
Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbff0000000000000, -1) beta = (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09212208 sec  GFLOPS=19.6634
@@@@ dgemm test OK

 ==== Running N=10 with streams ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0x40000000, 2) beta = (0x40000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03504515 sec  GFLOPS=612.776
@@@@ sgemm test OK
Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbff0000000000000, -1) beta = (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09177494 sec  GFLOPS=19.6697
@@@@ dgemm test OK

 ==== Running N=10 batched ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0x3f800000, 1) beta = (0xbf800000, -1)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03766394 sec  GFLOPS=570.17
@@@@ sgemm test OK
Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbff0000000000000, -1) beta = (0x4000000000000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09389901 sec  GFLOPS=19.6315
@@@@ dgemm test OK

Test Summary
0 error(s)
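The GFLOPS figures above follow from the usual 2*m*n*k flop count for GEMM divided by the elapsed time. A stripped-down cuBLAS timing sketch along those lines (illustrative only; the batchCUBLAS sample does considerably more) might look like this:

// Time a single 1024x1024x1024 SGEMM with cuBLAS and report GFLOPS,
// using the same 2*m*n*k flop count as the batchCUBLAS sample.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int m = 1024, n = 1024, k = 1024;
    float *A, *B, *C;
    cudaMalloc(&A, sizeof(float) * m * k);
    cudaMalloc(&B, sizeof(float) * k * n);
    cudaMalloc(&C, sizeof(float) * m * n);
    cudaMemset(A, 0, sizeof(float) * m * k);  // contents do not affect timing
    cudaMemset(B, 0, sizeof(float) * k * n);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;

    // Warm-up call so the measurement excludes one-time initialization.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, A, m, B, k, &beta, C, m);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, A, m, B, k, &beta, C, m);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    double gflops = 2.0 * m * n * k / (ms * 1e-3) / 1e9;
    printf("SGEMM %dx%dx%d: %.3f ms, %.1f GFLOPS\n", m, n, k, ms, gflops);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}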