Overview
As most of us know, HashMap implements the Map interface to store key-value pairs; Objective-C has the corresponding NSDictionary. So how would we implement key-value storage ourselves? Most of us only have arrays and linked lists in mind as basic data structures, so in this post let's explore that question and walk through the implementation details of the HashMap source code.
Exploring the principle
Let's start with a simple piece of code.
class MyMap<K, V> {

    static class Entry<K, V> {
        K key;
        V value;

        Entry(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }

    Entry<K, V>[] table;
    int currentSize;

    V get(K k) {
        // Linear scan: compare every stored key until we find a match.
        for (int i = 0; i < currentSize; i++) {
            if (table[i].key.equals(k)) {
                return table[i].value;
            }
        }
        return null;
    }

    void put(K k, V v) {
        boolean keyExists = false;
        for (int i = 0; i < currentSize; i++) {
            Entry<K, V> node = table[i];
            if (node.key.equals(k)) {
                keyExists = true;
                node.value = v;
            }
        }
        if (!keyExists) {
            // Grow the array if it is full.
            if (currentSize == table.length) {
                expandCapacity();
            }
            // Append the new entry.
            table[currentSize++] = new Entry<>(k, v);
        }
    }
}
With the design above, the internal implementation of HashMap is still just an array, except that the array stores Entry objects. In theory, as long as the array is large enough it can hold any number of entries, but that creates a problem: if we don't actually need such a large array, allocating it up front wastes a huge amount of memory. It would be much better to resize the array dynamically, so let's add the resizing code.
Entry<K, V>[] expandCapacity() {
    // Allocate a new array twice as large and copy the existing entries over.
    Entry<K, V>[] newTable = (Entry<K, V>[]) new Entry[2 * currentSize];
    for (int i = 0; i < currentSize; i++) {
        newTable[i] = table[i];
    }
    table = newTable;
    return table;
}
With that in place we can grow the array on demand, which is good enough for most cases, since we rarely need to expand without bound.
But what if the amount of data to store is effectively unbounded?
Clearly an array alone is not enough, and lookup is also very inefficient: we compare entries one by one, running a for loop over the whole array to find the matching key. To solve these problems, let's put forward two ideas.
1. If we use the key to compute an array index through some algorithm, and guarantee that the index falls within the bounds of the array, we can completely avoid the cost of the linear scan.
2. Hash codes computed from different keys can still collide, which creates a new problem: two keys end up with the same array index. These hash collisions have to be resolved, and here we store the colliding entries in a linked list.
I won't go into collision-resolution strategies in detail here (they are easy to look up); the two most common ones are listed below, and a small sketch of the second follows the list.
1. Open addressing (linear probing)
2. Separate chaining
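The chaining idea is simple enough to sketch in a few lines. The snippet below is only a hypothetical illustration (the names Node and bucketIndex are made up for this example, not taken from the real source): the key's hashCode is masked down to a valid bucket index, and keys that land in the same bucket are linked together.
// Hypothetical sketch of separate chaining, not the real HashMap code.
class Node<K, V> {
    final K key;
    V value;
    Node<K, V> next; // next entry in the same bucket

    Node(K key, V value, Node<K, V> next) {
        this.key = key;
        this.value = value;
        this.next = next;
    }
}

// capacity is assumed to be a power of two, so (capacity - 1) works as a bit mask.
int bucketIndex(Object key, int capacity) {
    return key.hashCode() & (capacity - 1);
}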
Combining the two ideas above, the implementation needs an array, linked lists, and a hash function. Now back to the source code. The snippets below are taken from the Android 6.0 source; the JDK implementation may differ.
public HashMap(int capacity) {
....... less important code omitted ......
// As you can see, this simply allocates an empty array.
makeTable(capacity);
}
private HashMapEntry<K, V>[] makeTable(int newCapacity) {
@SuppressWarnings("unchecked") HashMapEntry<K, V>[] newTable
= (HashMapEntry<K, V>[]) new HashMapEntry[newCapacity];
table = newTable;
// threshold is the resize trigger: once size exceeds it, put doubles the table (see below).
threshold = (newCapacity >> 1) + (newCapacity >> 2); // 3/4 capacity
return newTable;
}
Next, let's look at the put method.
@Override public V put(K key, V value) {
if (key == null) {
return putValueForNullKey(value);
}
// The next few lines compute a secondary hash for the key and derive the bucket index from it.
int hash = Collections.secondaryHash(key);
HashMapEntry<K, V>[] tab = table;
int index = hash & (tab.length - 1);
// Walk the bucket's linked list to see whether the key is already present.
for (HashMapEntry<K, V> e = tab[index]; e != null; e = e.next) {
// If the key already exists, replace its value and return the old one.
if (e.hash == hash && key.equals(e.key)) {
preModify(e);
V oldValue = e.value;
e.value = value;
return oldValue;
}
}
// If the new size exceeds the threshold, double the capacity and recompute the bucket index.
modCount++;
if (size++ > threshold) {
tab = doubleCapacity();
index = hash & (tab.length - 1);
}
// The key was not found, so add a new entry.
addNewEntry(key, value, hash, index);
return null;
}
// Add a new entry; it becomes the head of its bucket's linked list.
void addNewEntry(K key, V value, int hash, int index) {
table[index] = new HashMapEntry<K, V>(key, value, hash, table[index]);
}
// The resize function: doubles the table and rehashes the existing entries.
private HashMapEntry<K, V>[] doubleCapacity() {
HashMapEntry<K, V>[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity == MAXIMUM_CAPACITY) {
return oldTable;
}
int newCapacity = oldCapacity * 2;
HashMapEntry<K, V>[] newTable = makeTable(newCapacity);
if (size == 0) {
return newTable;
}
for (int j = 0; j < oldCapacity; j++) {
/*
* Rehash the bucket using the minimum number of field writes.
* This is the most subtle and delicate code in the class.
*/
HashMapEntry<K, V> e = oldTable[j];
if (e == null) {
continue;
}
int highBit = e.hash & oldCapacity;
HashMapEntry<K, V> broken = null;
newTable[j | highBit] = e;
for (HashMapEntry<K, V> n = e.next; n != null; e = n, n = n.next) {
int nextHighBit = n.hash & oldCapacity;
if (nextHighBit != highBit) {
if (broken == null)
newTable[j | nextHighBit] = n;
else
broken.next = n;
broken = e;
highBit = nextHighBit;
}
}
if (broken != null)
broken.next = null;
}
return newTable;
}
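The rehash loop above looks delicate, but it rests on one simple fact: the capacity is always a power of two, so when it doubles, an entry in old bucket j either stays at index j or moves to index j + oldCapacity, depending on a single extra bit of its hash. A small numeric sketch with assumed values:
int oldCapacity = 8;                      // assumed old table size (a power of two)
int hash = 0b101101;                      // hypothetical hash value (decimal 45)
int oldIndex = hash & (oldCapacity - 1);  // 0b101  -> bucket 5 in the old table
int highBit  = hash & oldCapacity;        // 0b1000 -> the extra bit is set
int newIndex = oldIndex | highBit;        // 13, i.e. 5 + 8, the same as hash & (16 - 1)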
The three steps of put
1: Check whether the key already exists; if it does, update its value (and return the old one).
2: If the key does not exist, decide whether the table needs to grow before adding a new entry.
3: Add the new entry.
A quick behavioral sketch of the first step is shown below.
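This is a minimal usage sketch with the standard java.util.HashMap; the keys and values are made up for the example.
HashMap<String, Integer> map = new HashMap<>();
System.out.println(map.put("a", 1));  // null -> the key did not exist yet
System.out.println(map.put("a", 2));  // 1    -> the old value is returned, "a" now maps to 2
System.out.println(map.get("a"));     // 2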
The get function is simple enough that I'll just show the code:
public V get(Object key) {
if (key == null) {
HashMapEntry<K, V> e = entryForNullKey;
return e == null ? null : e.value;
}
int hash = Collections.secondaryHash(key);
HashMapEntry<K, V>[] tab = table;
for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)];
e != null; e = e.next) {
K eKey = e.key;
if (eKey == key || (e.hash == hash && key.equals(eKey))) {
return e.value;
}
}
return null;
}
So get first uses the index to locate the bucket in the array, then walks that bucket's linked list to find the matching entry.
That wraps up the basics of HashMap. A few takeaways:
- A HashMap is, at its core, an array plus linked lists.
- Iterating over a HashMap's keys and values is unordered, because the index derived from a key's hash carries no meaningful order to begin with; a quick sketch follows.
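A tiny sketch of the second point; the actual printed order depends on the keys' hash codes and the implementation, so treat the order in the comment as one possible result rather than a guarantee.
HashMap<String, Integer> m = new HashMap<>();
m.put("one", 1);
m.put("two", 2);
m.put("three", 3);
// Iteration follows bucket order, not insertion order; it might print
// "one two three" or "three one two" depending on the keys' hash codes.
for (String k : m.keySet()) {
    System.out.print(k + " ");
}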
LinkedHashMap
When people talk about an LRU cache, the first thing that comes up is LinkedHashMap for storing the cached objects; in Objective-C the NSCache class plays this role (according to some blog posts it also uses an LRU policy). The key property of this storage is that an entry that was just added or accessed is moved to the tail of the internal list, leaving the least recently used entry at the head. Let's look at the concrete code.
@Override void addNewEntry(K key, V value, int hash, int index) {
LinkedEntry<K, V> header = this.header;
// Remove eldest entry if instructed to do so.
LinkedEntry<K, V> eldest = header.nxt;
if (eldest != header && removeEldestEntry(eldest)) {
remove(eldest.key);
}
// Create new entry, link it on to list, and put it into table
LinkedEntry<K, V> oldTail = header.prv;
LinkedEntry<K, V> newTail = new LinkedEntry<K,V>(
key, value, hash, table[index], header, oldTail);
table[index] = oldTail.nxt = header.prv = newTail;
}
@Override public V get(Object key) {
/*
* This method is overridden to eliminate the need for a polymorphic
* invocation in superclass at the expense of code duplication.
*/
if (key == null) {
HashMapEntry<K, V> e = entryForNullKey;
if (e == null)
return null;
if (accessOrder)
makeTail((LinkedEntry<K, V>) e);
return e.value;
}
// Same lookup as HashMap.get, but move the entry to the tail of the list when accessOrder is true.
int hash = Collections.secondaryHash(key);
HashMapEntry<K, V>[] tab = table;
for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)];
e != null; e = e.next) {
K eKey = e.key;
if (eKey == key || (e.hash == hash && key.equals(eKey))) {
if (accessOrder)
makeTail((LinkedEntry<K, V>) e);
return e.value;
}
}
return null;
}
/**
* Relinks the given entry to the tail of the list. Under access ordering,
* this method is invoked whenever the value of a pre-existing entry is
* read by Map.get or modified by Map.put.
*/
private void makeTail(LinkedEntry<K, V> e) {
// Unlink e
e.prv.nxt = e.nxt;
e.nxt.prv = e.prv;
// Relink e as tail
LinkedEntry<K, V> header = this.header;
LinkedEntry<K, V> oldTail = header.prv;
e.nxt = header;
e.prv = oldTail;
oldTail.nxt = header.prv = e;
modCount++;
}
LinkedHashMap summary
- Unlike HashMap, LinkedHashMap maintains an additional doubly linked list that chains all entries together; in other words, LinkedHashMap keeps two sets of bookkeeping: the hash buckets and the linked list.
- When an entry is accessed via put or get, it is moved to the tail of that list, provided accessOrder (access ordering) is set to true; a small sketch follows.
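A minimal sketch of access ordering with java.util.LinkedHashMap; the constructor arguments mirror the ones used by the LRU cache below, and the keys are made up.
// initialCapacity = 16, loadFactor = 0.75f, accessOrder = true
LinkedHashMap<String, Integer> lru = new LinkedHashMap<>(16, 0.75f, true);
lru.put("a", 1);
lru.put("b", 2);
lru.put("c", 3);
lru.get("a");                     // accessing "a" moves it to the tail of the list
System.out.println(lru.keySet()); // [b, c, a] -> "b" at the head is the least recently used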
LRU cache
Below is a sample cache design.
/**
*
 * The core of this LRU implementation is LinkedHashMap: with accessOrder = true, every entry that is accessed (by get or put) is moved to the end of the list, so the entries evicted first are the least recently used ones.
*
* @since 1.8.1
*/
public class LruMemoryCache<K,V> implements MemoryCache<K,V> {
private final LinkedHashMap<String, V> map;
private final int maxSize;
/** Size of this cache in bytes */
private int size;
/** @param maxSize Maximum sum of the sizes of the Bitmaps in this cache */
public LruMemoryCache(int maxSize) {
if (maxSize <= 0) {
throw new IllegalArgumentException("maxSize <= 0");
}
this.maxSize = maxSize;
this.map = new LinkedHashMap<String, V>(0, 0.75f, true);
}
/**
* Returns the Bitmap for {@code key} if it exists in the cache. If a Bitmap was returned, it is moved to the head
* of the queue. This returns null if a Bitmap is not cached.
*/
@Override
public final V get(String key) {
if (key == null) {
throw new NullPointerException("key == null");
}
synchronized (this) {
return map.get(key);
}
}
/** Caches {@code Bitmap} for {@code key}. The Bitmap is moved to the head of the queue. */
public final boolean put(String key, V value) {
if (key == null || value == null) {
throw new NullPointerException("key == null || value == null");
}
synchronized (this) {
size += sizeOf(key, value);
V previous = map.put(key, value);
if (previous != null) {
size -= sizeOf(key, previous);
}
}
trimToSize(maxSize);
return true;
}
/**
* Remove the eldest entries until the total of remaining entries is at or below the requested size.
*
* @param maxSize the maximum size of the cache before returning. May be -1 to evict even 0-sized elements.
*/
private void trimToSize(int maxSize) {
while (true) {
String key;
V value;
synchronized (this) {
if (size < 0 || (map.isEmpty() && size != 0)) {
throw new IllegalStateException(getClass().getName() + ".sizeOf() is reporting inconsistent results!");
}
if (size <= maxSize || map.isEmpty()) {
break;
}
Map.Entry<String, V> toEvict = map.entrySet().iterator().next();
if (toEvict == null) {
break;
}
key = toEvict.getKey();
value = toEvict.getValue();
// This is the key line: evict the least recently used entry (the first one returned by the LinkedHashMap iterator).
map.remove(key);
size -= sizeOf(key, value);
}
}
}
/** Removes the entry for {@code key} if it exists. */
@Override
public final V remove(String key) {
if (key == null) {
throw new NullPointerException("key == null");
}
synchronized (this) {
V previous = map.remove(key);
if (previous != null) {
size -= sizeOf(key, previous);
}
return previous;
}
}
@Override
public Collection<String> keys() {
synchronized (this) {
return new HashSet<String>(map.keySet());
}
}
@Override
public void clear() {
trimToSize(-1); // -1 will evict 0-sized elements
}
/**
* Returns the size {@code Bitmap} in bytes.
* <p/>
* An entry's size must not change while it is in the cache.
*/
private int sizeOf(String key, V value) {
// In a real cache this should return the size of the value in bytes,
// e.g. for a Bitmap: value.getRowBytes() * value.getHeight().
// Returning 0 for every entry means trimToSize will never evict by size.
return 0;
}
@Override
public synchronized final String toString() {
return String.format("LruCache[maxSize=%d]", maxSize);
}
}
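As a closing note, if all you need is a count-bounded LRU cache (rather than one bounded by total byte size like the sample above), LinkedHashMap can do most of the work by itself through removeEldestEntry, the hook already visible in addNewEntry earlier. A minimal sketch, where MAX_ENTRIES is an assumed limit:
// A count-bounded LRU cache built directly on java.util.LinkedHashMap.
static final int MAX_ENTRIES = 100; // assumed limit for this sketch

Map<String, Object> lruCache = new LinkedHashMap<String, Object>(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        // Returning true evicts the least recently used entry once the limit is exceeded.
        return size() > MAX_ENTRIES;
    }
};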