Cache eviction algorithms
FIFO (First In, First Out)
Space-first
First in, first out: if a piece of data entered the cache earliest, assume it is unlikely to be accessed again. When the cache is full, the data that entered first is evicted first.
LRU (Least Recently Used)
Recency-first
Least recently used: if a piece of data has not been accessed for a while, assume it is unlikely to be accessed again. When the cache is full, the data that has gone unused the longest is evicted first.
LFU (Least Frequently Used)
Frequency-first
Least frequently used: if a piece of data has rarely been accessed recently, assume it is unlikely to be accessed again. When the cache is full, the data with the lowest access frequency is evicted first.
ARC (LFU + LRU)
Adaptive Replacement Cache
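As a quick illustration of the LRU policy described above (this sketch is not from Volley or Okio), Java's `LinkedHashMap` in access order already does the bookkeeping; the class name `LruCache` and the capacity are arbitrary:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap in access order evicts its eldest entry
// (the least recently used one) once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves entries to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("c", 3); // evicts "b", the least recently used entry
        System.out.println(cache.containsKey("a")); // true
        System.out.println(cache.containsKey("b")); // false
    }
}
```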
Volley analysis
ByteArrayPool
Uses an LRU policy to pool byte[] buffers, reducing GC overhead.
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class ByteArrayPool {
    /** Buffers ordered by last use, oldest first, which makes eviction easy. */
    private final List<byte[]> mBuffersByLastUse = new ArrayList<>();
    /** Buffers ordered by size, which makes lookup easy. */
    private final List<byte[]> mBuffersBySize = new ArrayList<>(64);
    /** Current total size of the pool in bytes. */
    private int mCurrentSize = 0;
    /** Upper bound on the pool size in bytes. */
    private final int mSizeLimit;
    /** Orders buffers by their length. */
    protected static final Comparator<byte[]> BUF_COMPARATOR =
            new Comparator<byte[]>() {
                @Override
                public int compare(byte[] lhs, byte[] rhs) {
                    return lhs.length - rhs.length;
                }
            };

    /** Creates a pool with the given size limit. */
    public ByteArrayPool(int sizeLimit) {
        mSizeLimit = sizeLimit;
    }

    /**
     * Gets a buffer of at least the requested length, reusing a pooled one if possible.
     */
    public synchronized byte[] getBuf(int len) {
        for (int i = 0; i < mBuffersBySize.size(); i++) {
            byte[] buf = mBuffersBySize.get(i);
            if (buf.length >= len) {
                mCurrentSize -= buf.length;
                mBuffersBySize.remove(i);
                mBuffersByLastUse.remove(buf);
                return buf;
            }
        }
        return new byte[len];
    }

    /**
     * Returns a buffer to the pool.
     */
    public synchronized void returnBuf(byte[] buf) {
        if (buf == null || buf.length > mSizeLimit) {
            return;
        }
        mBuffersByLastUse.add(buf);
        int pos = Collections.binarySearch(mBuffersBySize, buf, BUF_COMPARATOR);
        if (pos < 0) {
            pos = -pos - 1;
        }
        mBuffersBySize.add(pos, buf);
        mCurrentSize += buf.length;
        trim();
    }

    /** Evicts the least recently used buffers until the pool fits the size limit. */
    private synchronized void trim() {
        while (mCurrentSize > mSizeLimit) {
            byte[] buf = mBuffersByLastUse.remove(0);
            mBuffersBySize.remove(buf);
            mCurrentSize -= buf.length;
        }
    }
}
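A side note on `returnBuf`: when the key is absent, `Collections.binarySearch` returns `-(insertionPoint) - 1`, so `pos = -pos - 1` recovers the index that keeps the size-ordered list sorted. A minimal standalone demonstration (the list contents here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class InsertionPointDemo {
    // Same comparator shape as BUF_COMPARATOR: order byte arrays by length.
    static final Comparator<byte[]> BY_LENGTH = (lhs, rhs) -> lhs.length - rhs.length;

    public static void main(String[] args) {
        List<byte[]> buffersBySize = new ArrayList<>();
        buffersBySize.add(new byte[2]);
        buffersBySize.add(new byte[8]);

        byte[] returned = new byte[4]; // no length-4 buffer in the list yet
        int pos = Collections.binarySearch(buffersBySize, returned, BY_LENGTH);
        System.out.println(pos); // -2: insertion point 1, encoded as -(1) - 1

        if (pos < 0) pos = -pos - 1; // decode the insertion point
        buffersBySize.add(pos, returned);
        System.out.println(buffersBySize.get(1).length); // 4: list stays sorted 2, 4, 8
    }
}
```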
Okio analysis
Goals
- Faster I/O read/write performance (in certain scenarios)
- Less GC churn
- Lower memory usage
UML diagram
Sequence diagram
Sink sink = Okio.sink(file);
BufferedSink bufferedSink = Okio.buffer(sink);
bufferedSink.writeXXX(xxx);
sequenceDiagram
participant X as Client
participant A as Okio
participant B as Sink
participant C as RealBufferedSink
participant D as Buffer
participant E as Segment
participant F as Timeout
participant G as OutputStream
X->>A: 1. call sink
A->>B: 2. new
X->>A: 3. call buffer
A->>C: 4. new
C->>D: 5. new
D->>E: 6. new
X->>C: 7. writeUtf8
C->>D: 8. writeUtf8
D->>D: 9. writeUtf8
D->>D: 10. writableSegment
D->>D: 11. writeByte
D->>D: 12. writableSegment
C->>C: 13. emitCompleteSegments
C->>D: 14. completeSegmentByteCount
C-->>B: 15. write
B->>F: 16. throwIfReached
B->>G: 17. write
Buffering
- Segment
Properties:
- shared or not
- writable or not
Problems to solve:
- locating data
- splitting data
- merging data
final class Segment {
    /** The size of every segment's byte array. */
    static final int SIZE = 8192;
    /** Minimum number of bytes for which sharing a segment is worthwhile. */
    static final int SHARE_MINIMUM = 1024;
    /** The actual data. */
    final byte[] data;
    /** Position of the next byte to be read. */
    int pos;
    /** Position of the next byte to be written. */
    int limit;
    /** True if this segment's byte array is shared. */
    boolean shared;
    /** True if this segment privately owns its byte array and may write into it. */
    boolean owner;
    /** The next segment in the list. */
    Segment next;
    /** The previous segment in the list. */
    Segment prev;

    // Creates a brand-new segment.
    Segment() {
        this.data = new byte[SIZE];
        this.owner = true;
        this.shared = false;
    }

    /** Creates a segment from the given parameters. */
    Segment(byte[] data, int pos, int limit, boolean shared, boolean owner) {
        this.data = data;
        this.pos = pos;
        this.limit = limit;
        this.shared = shared;
        this.owner = owner;
    }

    /** Creates a shared copy: marks this segment shared so its array is not recycled; the copy is not an owner. */
    Segment sharedCopy() {
        shared = true;
        return new Segment(data, pos, limit, true, false);
    }

    /** Creates a private copy with its own byte array. */
    Segment unsharedCopy() {
        return new Segment(data.clone(), pos, limit, false, true);
    }

    /** Removes this segment from the list and returns its successor; reaching here implies the list has at least one segment. */
    public @Nullable Segment pop() {
        Segment result = next != this ? next : null;
        prev.next = next;
        next.prev = prev;
        next = null;
        prev = null;
        return result;
    }

    /** Inserts a new segment after this one and returns it. */
    public Segment push(Segment segment) {
        segment.prev = this;
        segment.next = next;
        next.prev = segment;
        next = segment;
        return segment;
    }

    /** Splits this segment into [pos, pos + byteCount) and [pos + byteCount, limit); returns the prefix. */
    public Segment split(int byteCount) {
        if (byteCount <= 0 || byteCount > limit - pos) throw new IllegalArgumentException();
        Segment prefix;
        // Whether the split shares the underlying array is decided by SHARE_MINIMUM.
        if (byteCount >= SHARE_MINIMUM) {
            prefix = sharedCopy();
        } else {
            prefix = SegmentPool.take();
            System.arraycopy(data, pos, prefix.data, 0, byteCount);
        }
        prefix.limit = prefix.pos + byteCount;
        pos += byteCount;
        prev.push(prefix);
        return prefix;
    }

    /** Merges this segment into its predecessor if the data fits. */
    public void compact() {
        if (prev == this) throw new IllegalStateException();
        if (!prev.owner) return; // The predecessor is not writable.
        // Number of bytes to move.
        int byteCount = limit - pos;
        // Space available in the predecessor (its leading bytes count only if it is not shared).
        int availableByteCount = SIZE - prev.limit + (prev.shared ? 0 : prev.pos);
        if (byteCount > availableByteCount) return; // Not enough room; give up.
        writeTo(prev, byteCount); // Move the data.
        pop(); // Unlink this segment and return it to the pool.
        SegmentPool.recycle(this);
    }

    /** Moves byteCount bytes from this segment to sink. */
    public void writeTo(Segment sink, int byteCount) {
        if (!sink.owner) throw new IllegalArgumentException();
        if (sink.limit + byteCount > SIZE) {
            // Not enough room at the tail: shift sink's live bytes to the front.
            if (sink.shared) throw new IllegalArgumentException();
            // Check there is enough room even after shifting.
            if (sink.limit + byteCount - sink.pos > SIZE) throw new IllegalArgumentException();
            System.arraycopy(sink.data, sink.pos, sink.data, 0, sink.limit - sink.pos);
            sink.limit -= sink.pos;
            sink.pos = 0;
        }
        // Copy the data.
        System.arraycopy(data, pos, sink.data, sink.limit, byteCount);
        sink.limit += byteCount;
        // Advance the read position.
        pos += byteCount;
    }
}
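Segment's `push`/`pop` operate on a circular doubly linked list. Stripping away the byte handling, the pointer manipulation alone can be sketched like this (the `Node` class here is a hypothetical stand-in for `Segment`):

```java
// Circular doubly linked list with the same push/pop shape as Segment.
public class Node {
    final String name;
    Node next, prev;

    Node(String name) {
        this.name = name;
        next = prev = this; // a single node forms a one-element ring
    }

    /** Inserts node after this one and returns it, as Segment.push does. */
    Node push(Node node) {
        node.prev = this;
        node.next = next;
        next.prev = node;
        next = node;
        return node;
    }

    /** Unlinks this node and returns its successor, or null if it was alone. */
    Node pop() {
        Node result = next != this ? next : null;
        prev.next = next;
        next.prev = prev;
        next = null;
        prev = null;
        return result;
    }

    public static void main(String[] args) {
        Node a = new Node("a");
        Node b = a.push(new Node("b")); // ring: a <-> b
        b.push(new Node("c"));          // ring: a <-> b <-> c
        System.out.println(a.next.name);  // b
        System.out.println(a.pop().name); // b: a unlinked, successor returned
        System.out.println(b.next.name);  // c
        System.out.println(b.prev.name);  // c (two-element ring remains)
    }
}
```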
- SegmentPool
Provides a recycling pool of segments.
final class SegmentPool {
    // Maximum pool size: 64 KiB, i.e. eight segments.
    static final long MAX_SIZE = 64 * 1024; // 64 KiB.
    /** Singly-linked list of segments. */
    static @Nullable Segment next;
    /** Total bytes in this pool. */
    static long byteCount;

    private SegmentPool() {
    }

    /** Takes a segment from the pool, or allocates a new one if the pool is empty. */
    static Segment take() {
        synchronized (SegmentPool.class) {
            if (next != null) {
                Segment result = next;
                next = result.next;
                result.next = null;
                byteCount -= Segment.SIZE;
                return result;
            }
        }
        return new Segment();
    }

    static void recycle(Segment segment) {
        if (segment.next != null || segment.prev != null) throw new IllegalArgumentException();
        if (segment.shared) return; // Shared segments must not be recycled.
        synchronized (SegmentPool.class) {
            if (byteCount + Segment.SIZE > MAX_SIZE) return; // Pool is full; drop the segment.
            byteCount += Segment.SIZE;
            segment.next = next;
            segment.pos = segment.limit = 0;
            next = segment;
        }
    }
}
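SegmentPool is essentially a bounded free list threaded through `Segment.next`. The same pattern in a standalone, simplified form (the names `ChunkPool`/`Chunk` are hypothetical; thread-safety is reduced to `synchronized` methods, in the spirit of the original's synchronized block):

```java
// Bounded free-list pool for fixed-size chunks, mirroring SegmentPool's
// take()/recycle() shape: pop from a singly linked list, refuse to grow past MAX_SIZE.
public class ChunkPool {
    static final int CHUNK_SIZE = 8192;
    static final long MAX_SIZE = 64 * 1024; // at most eight pooled chunks

    static class Chunk {
        final byte[] data = new byte[CHUNK_SIZE];
        Chunk next; // the free list is threaded through the chunks themselves
    }

    private Chunk next;     // head of the free list
    private long byteCount; // bytes currently pooled

    synchronized Chunk take() {
        if (next != null) {
            Chunk result = next;
            next = result.next;
            result.next = null;
            byteCount -= CHUNK_SIZE;
            return result;
        }
        return new Chunk(); // pool empty: allocate a fresh chunk
    }

    synchronized void recycle(Chunk chunk) {
        if (byteCount + CHUNK_SIZE > MAX_SIZE) return; // pool full: let GC take it
        byteCount += CHUNK_SIZE;
        chunk.next = next;
        next = chunk;
    }

    public static void main(String[] args) {
        ChunkPool pool = new ChunkPool();
        Chunk c = pool.take(); // freshly allocated
        pool.recycle(c);
        System.out.println(pool.take() == c); // true: the chunk was reused
    }
}
```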
- Buffer
Makes the segments look like one contiguous block of memory from the outside.
A closed (circular) doubly linked list.
//Implements the BufferedSource, BufferedSink, Cloneable and ByteChannel interfaces.
//Cloneable marks the buffer as a copyable object.
public final class Buffer implements BufferedSource, BufferedSink, Cloneable, ByteChannel {
    ....

    //Returns an InputStream that reads from this buffer.
    @Override
    public InputStream inputStream() {
        return new InputStream() {
            @Override public int read() {
                if (size > 0) return readByte() & 0xff;
                return -1;
            }
            @Override public int read(byte[] sink, int offset, int byteCount) {
                return Buffer.this.read(sink, offset, byteCount);
            }
            @Override public int available() {
                return (int) Math.min(size, Integer.MAX_VALUE);
            }
            @Override public void close() {
            }
            @Override public String toString() {
                return Buffer.this + ".inputStream()";
            }
        };
    }

    /** Copies a range of this buffer to an output stream. */
    public Buffer copyTo(OutputStream out, long offset, long byteCount) throws IOException {
        if (out == null) throw new IllegalArgumentException("out == null");
        checkOffsetAndCount(size, offset, byteCount);
        if (byteCount == 0) return this;
        // Skip segments that lie entirely before the offset.
        Segment s = head;
        for (; offset >= (s.limit - s.pos); s = s.next) {
            offset -= (s.limit - s.pos);
        }
        // Write the requested range to the output stream.
        for (; byteCount > 0; s = s.next) {
            int pos = (int) (s.pos + offset);
            int toCopy = (int) Math.min(s.limit - pos, byteCount);
            out.write(s.data, pos, toCopy);
            byteCount -= toCopy;
            offset = 0;
        }
        return this;
    }

    /** Copies a range of this buffer into another buffer. */
    public Buffer copyTo(Buffer out, long offset, long byteCount) {
        if (out == null) throw new IllegalArgumentException("out == null");
        checkOffsetAndCount(size, offset, byteCount);
        if (byteCount == 0) return this;
        out.size += byteCount;
        // Skip segments that lie entirely before the offset.
        Segment s = head;
        for (; offset >= (s.limit - s.pos); s = s.next) {
            offset -= (s.limit - s.pos);
        }
        // Shallow-copy the segments into out (shared copies, no byte copying).
        for (; byteCount > 0; s = s.next) {
            Segment copy = s.sharedCopy();
            copy.pos += offset;
            copy.limit = Math.min(copy.pos + (int) byteCount, copy.limit);
            if (out.head == null) {
                out.head = copy.next = copy.prev = copy;
            } else {
                out.head.prev.push(copy);
            }
            byteCount -= copy.limit - copy.pos;
            offset = 0;
        }
        return this;
    }

    //Writes data to out; this drains (consumes) the buffer as it goes.
    public Buffer writeTo(OutputStream out, long byteCount) throws IOException {
        if (out == null) throw new IllegalArgumentException("out == null");
        checkOffsetAndCount(size, 0, byteCount);
        Segment s = head;
        while (byteCount > 0) {
            int toCopy = (int) Math.min(byteCount, s.limit - s.pos);
            out.write(s.data, s.pos, toCopy);
            s.pos += toCopy;
            size -= toCopy;
            byteCount -= toCopy;
            if (s.pos == s.limit) {
                Segment toRecycle = s;
                head = s = toRecycle.pop();
                SegmentPool.recycle(toRecycle);
            }
        }
        return this;
    }

    //Reads from the input stream into writable tail segments obtained via
    //writableSegment; each write is bounded by the capacity [0, Segment.SIZE].
    private void readFrom(InputStream in, long byteCount, boolean forever) throws IOException {
        if (in == null) throw new IllegalArgumentException("in == null");
        while (byteCount > 0 || forever) {
            // Get a writable tail segment.
            Segment tail = writableSegment(1);
            int maxToCopy = (int) Math.min(byteCount, Segment.SIZE - tail.limit);
            int bytesRead = in.read(tail.data, tail.limit, maxToCopy);
            if (bytesRead == -1) {
                if (forever) return;
                throw new EOFException();
            }
            tail.limit += bytesRead;
            size += bytesRead;
            byteCount -= bytesRead;
        }
    }

    //Returns the number of bytes that can be flushed to the underlying sink.
    public long completeSegmentByteCount() {
        long result = size;
        if (result == 0) return 0;
        // Omit the tail segment if it is still writable.
        Segment tail = head.prev;
        if (tail.limit < Segment.SIZE && tail.owner) {
            result -= tail.limit - tail.pos;
        }
        return result;
    }

    //Reads one byte.
    @Override public byte readByte() {
        if (size == 0) throw new IllegalStateException("size == 0");
        Segment segment = head;
        int pos = segment.pos;
        int limit = segment.limit;
        byte[] data = segment.data;
        byte b = data[pos++];
        size -= 1;
        if (pos == limit) {
            head = segment.pop();
            SegmentPool.recycle(segment);
        } else {
            segment.pos = pos;
        }
        return b;
    }

    //Returns the byte at the given position.
    public byte getByte(long pos) {
        checkOffsetAndCount(size, pos, 1);
        if (size - pos > pos) {
            // Closer to the head: scan forward.
            for (Segment s = head; true; s = s.next) {
                int segmentByteCount = s.limit - s.pos;
                if (pos < segmentByteCount) return s.data[s.pos + (int) pos];
                pos -= segmentByteCount;
            }
        } else {
            // Closer to the tail: scan backward.
            pos -= size;
            for (Segment s = head.prev; true; s = s.prev) {
                pos += s.limit - s.pos;
                if (pos >= 0) return s.data[s.pos + (int) pos];
            }
        }
    }
}
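`getByte` picks its scan direction by comparing the target position with the midpoint: forward from the head when pos is in the first half, backward from the tail otherwise. The index arithmetic, extracted into a standalone sketch over a plain list of arrays (a hypothetical stand-in for the segment ring):

```java
import java.util.Arrays;
import java.util.List;

public class GetByteDemo {
    // Returns the byte at pos across a list of chunks, scanning from
    // whichever end is closer, like Buffer.getByte.
    static byte getByte(List<byte[]> chunks, long size, long pos) {
        if (size - pos > pos) {
            // Closer to the front: walk forward.
            for (byte[] c : chunks) {
                if (pos < c.length) return c[(int) pos];
                pos -= c.length;
            }
        } else {
            // Closer to the back: walk backward.
            pos -= size;
            for (int i = chunks.size() - 1; i >= 0; i--) {
                byte[] c = chunks.get(i);
                pos += c.length;
                if (pos >= 0) return c[(int) pos];
            }
        }
        throw new IndexOutOfBoundsException();
    }

    public static void main(String[] args) {
        List<byte[]> chunks = Arrays.asList(
                new byte[]{10, 11, 12},
                new byte[]{20, 21});
        System.out.println(getByte(chunks, 5, 1)); // 11 (forward scan)
        System.out.println(getByte(chunks, 5, 4)); // 21 (backward scan)
    }
}
```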
I/O read/write experiment
PC
- Experiment code
public class Client {
public static void main(String[] args) throws Exception {
//okioBigCopy();
biyCopy();
}
public static void okioBigCopy() throws Exception {
File file = new File("/tmp/big.dat");
Sink sink = Okio.sink(file);
BufferedSink bufferedSink = Okio.buffer(sink);
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < 1000; i ++) {
stringBuilder.append("Sometimes you have to trust your gut.");
}
long begin = System.currentTimeMillis();
for (int i = 0; i < 100000; i ++) {
bufferedSink.writeUtf8(stringBuilder.toString());
}
System.out.print(String.format("use time: %d" ,System.currentTimeMillis() - begin));
}
public static void biyCopy() throws Exception {
File file = new File("/tmp/big.dat");
FileWriter fileWriter = new FileWriter(file);
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < 1000; i ++) {
stringBuilder.append("Sometimes you have to trust your gut.");
}
long begin = System.currentTimeMillis();
for (int i = 0; i < 100000; i ++) {
bufferedWriter.write(stringBuilder.toString());
}
System.out.print(String.format("use time: %d" ,System.currentTimeMillis() - begin));
}
}
- Ample memory (32 GB)
Run (×100,000 writes) | BufferedSink time (ms) | BufferedWriter time (ms) |
---|---|---|
1 | 6157 | 5187 |
2 | 11490 | 10605 |
3 | 17301 | 15884 |
4 | 23998 | 20983 |
5 | 29643 | 26151 |
6 | 34201 | 32382 |
7 | 39522 | 37386 |
8 | 46359 | 42466 |
9 | 51699 | 47830 |
10 | 57496 | 51209 |
- Tight memory (1 GB)
Run (×100,000 writes) | BufferedSink time (ms) | BufferedWriter time (ms) |
---|---|---|
1 | 34654 | 34982 |
2 | 70728 | 70443 |
3 | 106639 | 105615 |
4 | 141046 | 140839 |
5 | 176432 | 176183 |
6 | 211856 | 211886 |
7 | 247113 | 246797 |
8 | 282475 | 283026 |
9 | 318444 | 319129 |
10 | 353292 | 353617 |
Both sets of results above were measured on a PC. When the PC is powerful enough, BufferedSink and BufferedWriter perform almost identically. On the high-spec machine BufferedWriter does slightly better; on the 1 GB machine BufferedSink is marginally ahead in runs 1-3, which suggests that Okio's doubly linked segment structure makes better use of fragmented memory. Next, let's test on mobile devices.
Mobile devices
Conditions: 1. all other app processes killed; 2. phone kept on the charger with power-saving mode off; 3. the test app in the foreground; 4. at least 2 GB of free storage (when flash storage is nearly exhausted, I/O performance drops off a cliff).
- Experiment code
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
new Thread(new CopyRunnable()).start();
}
class CopyRunnable implements Runnable {
private Handler handler = new Handler(Looper.getMainLooper());
@Override
public void run(){
try {
//final long consume = okioBigCopy();
final long consume = biyCopy();
handler.post(new Runnable() {
@Override
public void run() {
TextView textView = findViewById(R.id.tv_consume);
textView.setText(String.format("consume: %d", consume));
}
});
} catch (Exception e) {
handler.post(new Runnable() {
@Override
public void run() {
TextView textView = findViewById(R.id.tv_consume);
textView.setText("error");
}
});
}
}
}
public long okioBigCopy() throws Exception {
File file = new File( this.getCacheDir().getAbsoluteFile() + File.separator + "big.dat");
Sink sink = Okio.sink(file);
BufferedSink bufferedSink = Okio.buffer(sink);
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < 1000; i ++) {
stringBuilder.append("Sometimes you have to trust your gut.");
}
long begin = System.currentTimeMillis();
for (int i = 0; i < 600000; i ++) {
bufferedSink.writeUtf8(stringBuilder.toString());
}
return System.currentTimeMillis() - begin;
}
public long biyCopy() throws Exception {
File file = new File(this.getCacheDir().getAbsoluteFile() + File.separator + "big.dat");
FileWriter fileWriter = new FileWriter(file);
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < 1000; i ++) {
stringBuilder.append("Sometimes you have to trust your gut.");
}
long begin = System.currentTimeMillis();
for (int i = 0; i < 600000; i ++) {
bufferedWriter.write(stringBuilder.toString());
}
return System.currentTimeMillis() - begin;
}
}
- vivo y31a (3 GB RAM)
Storage was close to the limit the test allowed, so only a few runs were made.
Run (×100,000 writes) | BufferedSink time (ms) | BufferedWriter time (ms) |
---|---|---|
1 | 146974 | 159939 |
2 | 281275 | 314941 |
3 | 427492 | 475487 |
4 | 565977 | 662167 |
Overall, BufferedSink was about 14.5% faster by the fourth run.
- vivo x9 (4 GB RAM)
Run (×100,000 writes) | BufferedSink time (ms) | BufferedWriter time (ms) |
---|---|---|
1 | 66267 | 63834 |
2 | 131017 | 128798 |
3 | 195906 | 192876 |
4 | 261062 | 298457 |
5 | 315665 | 340880 |
6 | 381617 | 386513 |
From the data above we can conclude that Okio's doubly linked segment structure does make better use of fragmented memory, and that it is more consistent. In run 6 BufferedWriter's performance suddenly improved again, for reasons I cannot explain.
Continuing the experiment in this form seems pointless: the file written per run is now about 3.2 GB, over 19 GB after six runs, which is far from realistic. So next we test under a realistic load instead: five apps open at once with four of them locked in memory, repeatedly writing small files, to compare BufferedSink and BufferedWriter.
- vivo x9 (4 GB RAM)
Run (×10,000 writes) | BufferedSink time (ms) | BufferedWriter time (ms) |
---|---|---|
1 | 662 | 652 |
2 | 1204 | 1234 |
3 | 1792 | 1817 |
4 | 2611 | 2394 |
5 | 2949 | 2963 |
6 | 3718 | 3857 |
7 | 4219 | 4307 |
8 | 4852 | 5000 |
9 | 5445 | 5749 |
10 | 6121 | 6160 |
11 | 7091 | 6903 |
12 | 7411 | 7486 |
13 | 8046 | 8264 |
14 | 8488 | 9300 |
15 | 8709 | 9469 |
16 | 9383 | 9660 |
17 | 9859 | 10233 |
18 | 10517 | 10731 |
19 | 11135 | 11402 |
20 | 11458 | 12039 |
Summary: on mobile devices BufferedSink beats BufferedWriter. It is not spectacular: it performs very well on older devices and is merely average on devices with plenty of memory, but it is more consistent than BufferedWriter.