1. Python code (native)
import time

t1 = time.time()
total = 0  # named total to avoid shadowing the built-in sum()
for _ in range(1000000001):
    total += 1
print(f"Accumulated sum: {total}")
print(f"1 billion iterations took: {time.time() - t1}")
CPU usage stayed around 25%.
Output:
Accumulated sum: 1000000001
1 billion iterations took: 98.97694635391235
98 seconds. Good!
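As a side note on that 25% reading: you can sample CPU usage from the script itself instead of watching a system monitor. A minimal sketch using the third-party psutil package (an assumption on my part; the original post just reads the figure off a monitor):

import time
import psutil  # assumption: third-party package (pip install psutil), not used in the original post

def busy_loop(n):
    total = 0
    for _ in range(n):
        total += 1
    return total

psutil.cpu_percent(interval=None)  # prime the counter; the first call returns a meaningless 0.0
t1 = time.perf_counter()
busy_loop(100_000_000)  # 100 million iterations keeps the demo short
print(f"CPU usage during the loop: {psutil.cpu_percent(interval=None)}%")
print(f"elapsed: {time.perf_counter() - t1:.2f}s")

On a machine with four logical cores, one fully busy core shows up as roughly 25% total CPU, which would explain the figure above.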
2. Rust code
use std::time::Instant;

fn main() {
    let start = Instant::now();
    let mut sum: u64 = 0;
    for _i in 1..1000000000 {
        //println!("{}", _i);
        sum += 1;
    }
    let duration = start.elapsed();
    println!("Accumulated sum: {:?}", sum);
    println!("Loop time: {:?}", duration);
}
The red arrow (in the original screenshot, not reproduced here) marks the peak CPU usage.
Output:
Accumulated sum: 999999999
Loop time: 4.8233ms
This speed flat-out destroys Python! Scared this old man half to death! (4.8 ms is also far too fast for a billion individual adds, which suggests the release-mode optimizer collapsed most of the loop.)
3. Speeding Python up with numba
import numba as nb

@nb.jit
def add():
    num = 0
    for i in range(1000000001):
        num += 1
    return num

%timeit add()
Output:
86.3 ns ± 2.27 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
This speed scared the daylights out of me! Is numba's JIT compilation really this insane?! The CPU didn't even flinch, and my 40-year-old stomach was shocked into an ulcer!!
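86 ns for a nominal one billion iterations is a strong hint that the loop never actually executes. A quick way to check is to make the loop bound a parameter and compare timings; a minimal sketch with my own timing harness (the parameterized add(n) is a modification, not from the original post):

import time
import numba as nb

@nb.jit
def add(n):
    num = 0
    for i in range(n):
        num += 1
    return num

add(10)  # first call triggers JIT compilation; keep it out of the timing

for n in (1_000_000, 1_000_000_000):
    t1 = time.perf_counter()
    add(n)
    print(f"n={n}: {time.perf_counter() - t1:.9f}s")

If the timing barely moves while n grows a thousandfold, the compiled function is effectively returning a precomputed result (LLVM can reduce this counting loop to num = n) rather than iterating a billion times.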
4. Julia version
function fn()
    total = 0  # local name chosen to avoid shadowing Base.sum
    for i in 1:1000000000
        total += 1
    end
    print("sum: ", total)
end

@time fn()
@time fn()
@time fn()
Output:
sum: 1000000000 0.010965 seconds (21.06 k allocations: 1.183 MiB)
sum: 1000000000 0.000169 seconds (18 allocations: 640 bytes)
sum: 1000000000 0.000243 seconds (21 allocations: 704 bytes)
The first call includes Julia's JIT compilation, which is where the extra time and allocations go; the warm runs finish in a fraction of a millisecond.