I recently read through the ConcurrentHashMap source code, and the JDK 1.7 and JDK 1.8 implementations differ quite a bit. Let's start with how JDK 1.7 does it.
Hashing (hash)
First, a quick look at what a hash is (there are plenty of introductions online): a hash function is a compressive mapping that turns an input into a fixed-size value. In Java the most familiar example is Object.hashCode(), which computes such a value with a fixed algorithm.
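A quick illustration (a toy sketch of my own, just plain JDK calls): the same input always produces the same fixed-size int, and different inputs can collide on the same value.

public class HashDemo {
    public static void main(String[] args) {
        // the same input always maps to the same fixed-size int
        System.out.println("hello".hashCode());             // 99162322 on every run
        System.out.println(Integer.valueOf(42).hashCode()); // 42

        // different inputs may collide on the same hash value
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112
    }
}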
Data structure
ConcurrentHashMap is mainly composed of a Segment<K,V> array plus HashEntry<K,V> linked lists.
Let's first look at the main structure of HashEntry<K,V>, which is a singly linked list node:
static final class HashEntry<K,V> {
    final int hash;               // hash value
    final K key;                  // the key
    volatile V value;             // the stored value
    volatile HashEntry<K,V> next; // next node in the singly linked list

    HashEntry(int hash, K key, V value, HashEntry<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
    //......
}
Now let's look at the Segment<K,V> structure; its core is a HashEntry<K,V> array, and note that it extends ReentrantLock:
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    // the array that stores the entries
    transient volatile HashEntry<K,V>[] table;

    /**
     * The load factor for the hash table. Even though this value
     * is same for all segments, it is replicated to avoid needing
     * links to outer object.
     * @serial
     */
    // load factor: the segment's table is resized once its size exceeds capacity * loadFactor
    final float loadFactor;

    /**
     * The table is rehashed when its size exceeds this threshold.
     * (The value of this field is always <tt>(int)(capacity *
     * loadFactor)</tt>.)
     */
    // threshold: once exceeded, the table must be rehashed, i.e. resized
    transient int threshold;

    Segment(float lf, int threshold, HashEntry<K,V>[] tab) {
        this.loadFactor = lf;
        this.threshold = threshold;
        this.table = tab;
    }
    //.....
}
Next, the ConcurrentHashMap constructor and the related fields:
/**
 * The default initial capacity for this table,
 * used when not otherwise specified in a constructor.
 */
// default capacity of the container
static final int DEFAULT_INITIAL_CAPACITY = 16;

/**
 * The default load factor for this table, used when not
 * otherwise specified in a constructor.
 */
// controls when the table is resized
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/**
 * The default concurrency level for this table, used when not
 * otherwise specified in a constructor.
 */
// estimated number of concurrently updating threads
static final int DEFAULT_CONCURRENCY_LEVEL = 16;

final Segment<K,V>[] segments; // the array that holds the data

// maximum number of segments, must not exceed 1 << 16 = 65536
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative

// maximum capacity, must not exceed 2^30
static final int MAXIMUM_CAPACITY = 1 << 30;

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    this.segmentShift = 32 - sshift;
    this.segmentMask = ssize - 1;
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    while (cap < c)
        cap <<= 1;
    // create segments and segments[0]
    Segment<K,V> s0 =
        new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                         (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
As the constructor shows, what it really does is create a Segment array (length 16 by default, since concurrencyLevel is rounded up to a power of two) and eagerly write s0 into slot 0; the HashEntry table inside s0 has a default length of 2 (MIN_SEGMENT_TABLE_CAPACITY).
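To make the rounding concrete, here is a small standalone sketch of my own that mirrors the constructor's sizing logic (not JDK code, just the same arithmetic pulled out):

public class CtorSizing {
    public static void main(String[] args) {
        int initialCapacity = 16, concurrencyLevel = 16;
        int MIN_SEGMENT_TABLE_CAPACITY = 2;

        // round concurrencyLevel up to a power of two -> number of segments
        int ssize = 1;
        while (ssize < concurrencyLevel)
            ssize <<= 1;

        // spread the capacity across segments, rounding up to a power of two
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;

        // with the defaults: 16 segments, each table of length 2
        System.out.println(ssize + " segments, table length " + cap);
    }
}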
Next, let's look at the put() method we use all the time. The source is below:
First, the key's hash is computed with a fixed algorithm, and the high bits of the hash select a slot in the Segment array. If the segment at that slot has not been created yet, ensureSegment(j) creates it lazily, using segments[0] as a prototype for the table capacity and load factor. Despite what its name may suggest, this method only creates a missing segment; the actual resizing happens later in rehash(), which doubles a segment's HashEntry table once its count exceeds threshold = capacity * loadFactor. Only the HashEntry arrays ever grow, never the Segment array. With the segment in hand, its put() method is called. That method takes four parameters, and the last one distinguishes put() from putIfAbsent(): if the key already exists, put() overwrites the value while putIfAbsent() leaves it untouched, and both return the old value. The Segment put() method is walked through below.
@SuppressWarnings("unchecked")
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) // in ensureSegment
        s = ensureSegment(j);
    return s.put(key, hash, value, false);
}

private int hash(Object k) {
    int h = hashSeed;
    if ((0 != h) && (k instanceof String)) {
        return sun.misc.Hashing.stringHash32((String) k);
    }
    h ^= k.hashCode();
    // Spread bits to regularize both segment and index locations,
    // using variant of single-word Wang/Jenkins hash.
    h += (h << 15) ^ 0xffffcd7d;
    h ^= (h >>> 10);
    h += (h << 3);
    h ^= (h >>> 6);
    h += (h << 2) + (h << 14);
    return h ^ (h >>> 16);
}

// lazily create the Segment at index k (creation only, no resizing)
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        Segment<K,V> proto = ss[0]; // use segment 0 as prototype
        int cap = proto.table.length;
        float lf = proto.loadFactor;
        int threshold = (int)(cap * lf);
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
            == null) { // recheck
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                   == null) {
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                    break;
            }
        }
    }
    return seg;
}

// Segment's put method: note that it acquires the lock first
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) { // walk the nodes of this bucket's list
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value; // remember the old value to return
                    if (!onlyIfAbsent) {
                        e.value = value; // putIfAbsent() skips this overwrite
                        ++modCount;
                    }
                    break;
                }
                e = e.next; // next node in the list
            }
            else {
                // no match in this bucket: reuse the node prepared by
                // scanAndLockForPut, or create a new HashEntry at the head
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock(); // release the lock
    }
    return oldValue;
}
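A quick usage sketch of the put()/putIfAbsent() difference described above (plain public API, nothing assumed beyond the JDK):

import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        System.out.println(map.put("a", 1));         // null: no previous value
        System.out.println(map.put("a", 2));         // 1: overwritten, old value returned
        System.out.println(map.putIfAbsent("a", 3)); // 2: existing value kept
        System.out.println(map.get("a"));            // 2
    }
}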
Next, the get() method, which is comparatively simple: after locating the segment and then the table slot, it scans the linked list in that slot and either finds the element or returns null. How does a lock-free read see up-to-date data? In the HashEntry design the value field is declared volatile, which guarantees visibility. But since no lock is taken, get() and containsKey() are only weakly consistent: the buckets are linked lists, so if another thread is concurrently modifying a list (a structural change rewires next pointers), a read may return a value that is not the very latest one.
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}

// containsKey: the same scan as get(), but it only needs a boolean
public boolean containsKey(Object key) {
    Segment<K,V> s; // same as get() except no need for volatile value read
    HashEntry<K,V>[] tab;
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return true;
        }
    }
    return false;
}
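A small sketch of that lock-free read in action (my own example; the exact interleaving depends on the scheduler): the reader never blocks while the writer holds the segment lock, and because value is volatile it sees either the old or the new value, never a torn one.

import java.util.concurrent.ConcurrentHashMap;

public class WeakReadDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        map.put(1, 100);

        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000; i++)
                map.put(1, i); // repeatedly overwrite the same key under the segment lock
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000; i++)
                map.get(1); // lock-free: may briefly observe the previous value
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        System.out.println("final value: " + map.get(1)); // 999
    }
}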
One practical consequence: when you use your own Object type as the key, remember to override equals() and hashCode().
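For example, a minimal key class (my own illustration, hypothetical names) that is safe to use as a map key:

import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class KeyDemo {
    static final class UserKey {
        final String name;
        final int id;
        UserKey(String name, int id) { this.name = name; this.id = id; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof UserKey)) return false;
            UserKey k = (UserKey) o;
            return id == k.id && name.equals(k.name);
        }
        @Override public int hashCode() {
            return Objects.hash(name, id); // must agree with equals()
        }
    }

    public static void main(String[] args) {
        ConcurrentHashMap<UserKey, String> map = new ConcurrentHashMap<>();
        map.put(new UserKey("tom", 1), "v1");
        // without the overrides above, this lookup would return null
        System.out.println(map.get(new UserKey("tom", 1))); // v1
    }
}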
When size() is called, it first counts without locking: each pass sums every segment's count while also summing their modCounts. If two consecutive passes see the same modCount sum, the result is returned directly; if the passes keep disagreeing (after RETRIES_BEFORE_LOCK attempts), it locks every segment and counts one more time.
public int size() {
    // Try a few times to get accurate count. On failure due to
    // continuous async changes in table, resort to locking.
    final Segment<K,V>[] segments = this.segments;
    int size;
    boolean overflow; // true if size overflows 32 bits
    long sum;         // sum of modCounts
    long last = 0L;   // previous sum
    int retries = -1; // first iteration isn't retry
    try {
        for (;;) {
            // after RETRIES_BEFORE_LOCK unlocked attempts, lock all segments
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    ensureSegment(j).lock(); // force creation
            }
            sum = 0L;
            size = 0;
            overflow = false;
            // one counting pass over all segments
            for (int j = 0; j < segments.length; ++j) {
                Segment<K,V> seg = segmentAt(segments, j);
                if (seg != null) {
                    sum += seg.modCount;
                    int c = seg.count;
                    if (c < 0 || (size += c) < 0)
                        overflow = true;
                }
            }
            if (sum == last) // two consecutive passes agree: done
                break;
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int j = 0; j < segments.length; ++j)
                segmentAt(segments, j).unlock();
        }
    }
    return overflow ? Integer.MAX_VALUE : size;
}
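A quick sanity check of size() under concurrent writes (a sketch of mine, with made-up thread and key counts): once all writers have finished, the count is exact.

import java.util.concurrent.ConcurrentHashMap;

public class SizeDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        int threads = 4, perThread = 10_000;
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                    map.put(base + i, i); // each thread writes distinct keys
            });
            ts[t].start();
        }
        for (Thread t : ts)
            t.join();
        // all writers done, so the two counting passes agree: prints 40000
        System.out.println(map.size());
    }
}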
I'll skip the other methods here; next time I'll look at some of the JDK 1.8 ConcurrentHashMap source. This writeup isn't great, so please bear with me!