Preface
This article takes a look at Storm's PartialKeyGrouping.
Example
@Test
public void testPartialKeyGrouping() throws InvalidTopologyException, AuthorizationException, AlreadyAliveException {
    String spoutId = "wordGenerator";
    String counterId = "counter";
    String aggId = "aggregator";
    String intermediateRankerId = "intermediateRanker";
    String totalRankerId = "finalRanker";
    int TOP_N = 5;
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout(spoutId, new TestWordSpout(), 5);
    //NOTE use partialKeyGrouping instead of fieldsGrouping to spread the load to the count bolt more evenly
    builder.setBolt(counterId, new RollingCountBolt(9, 3), 4).partialKeyGrouping(spoutId, new Fields("word"));
    builder.setBolt(aggId, new RollingCountAggBolt(), 4).fieldsGrouping(counterId, new Fields("obj"));
    builder.setBolt(intermediateRankerId, new IntermediateRankingsBolt(TOP_N), 4).fieldsGrouping(aggId, new Fields("obj"));
    builder.setBolt(totalRankerId, new TotalRankingsBolt(TOP_N)).globalGrouping(intermediateRankerId);
    submitRemote(builder);
}
- Note that because the word-count bolt uses PartialKeyGrouping, the same word is no longer always sent to the same task, so a RollingCountAggBolt is needed downstream to merge the partial counts via fieldsGrouping (a minimal aggregation sketch follows below).
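A minimal sketch of what such an aggregation bolt can look like, assuming the counter emits ("obj", "count") tuples; this is not the storm-starter RollingCountAggBolt, just an illustration of why the merge step has to group the partial counts by word again:

// Keeps the latest partial count per upstream counter task and emits the merged total.
// Field names "obj"/"count" are assumptions for illustration.
import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class PartialCountAggBolt extends BaseRichBolt {
    // word -> (source task id -> latest partial count from that task)
    private Map<String, Map<Integer, Long>> partials = new HashMap<>();
    private OutputCollector collector;

    @Override
    public void prepare(Map topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String word = tuple.getStringByField("obj");
        long count = tuple.getLongByField("count");
        // remember the newest partial count reported by this particular counter task
        partials.computeIfAbsent(word, w -> new HashMap<>()).put(tuple.getSourceTask(), count);
        // the merged count is the sum of the partial counts from all counter tasks
        long total = partials.get(word).values().stream().mapToLong(Long::longValue).sum();
        collector.emit(new Values(word, total));
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("obj", "count"));
    }
}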
PartialKeyGrouping (1.2.2)
storm-core-1.2.2-sources.jar!/org/apache/storm/grouping/PartialKeyGrouping.java
public class PartialKeyGrouping implements CustomStreamGrouping, Serializable {
    private static final long serialVersionUID = -447379837314000353L;
    private List<Integer> targetTasks;
    private long[] targetTaskStats;
    private HashFunction h1 = Hashing.murmur3_128(13);
    private HashFunction h2 = Hashing.murmur3_128(17);
    private Fields fields = null;
    private Fields outFields = null;

    public PartialKeyGrouping() {
        //Empty
    }

    public PartialKeyGrouping(Fields fields) {
        this.fields = fields;
    }

    @Override
    public void prepare(WorkerTopologyContext context, GlobalStreamId stream, List<Integer> targetTasks) {
        this.targetTasks = targetTasks;
        targetTaskStats = new long[this.targetTasks.size()];
        if (this.fields != null) {
            this.outFields = context.getComponentOutputFields(stream);
        }
    }

    @Override
    public List<Integer> chooseTasks(int taskId, List<Object> values) {
        List<Integer> boltIds = new ArrayList<>(1);
        if (values.size() > 0) {
            byte[] raw;
            if (fields != null) {
                List<Object> selectedFields = outFields.select(fields, values);
                ByteBuffer out = ByteBuffer.allocate(selectedFields.size() * 4);
                for (Object o: selectedFields) {
                    if (o instanceof List) {
                        out.putInt(Arrays.deepHashCode(((List)o).toArray()));
                    } else if (o instanceof Object[]) {
                        out.putInt(Arrays.deepHashCode((Object[])o));
                    } else if (o instanceof byte[]) {
                        out.putInt(Arrays.hashCode((byte[]) o));
                    } else if (o instanceof short[]) {
                        out.putInt(Arrays.hashCode((short[]) o));
                    } else if (o instanceof int[]) {
                        out.putInt(Arrays.hashCode((int[]) o));
                    } else if (o instanceof long[]) {
                        out.putInt(Arrays.hashCode((long[]) o));
                    } else if (o instanceof char[]) {
                        out.putInt(Arrays.hashCode((char[]) o));
                    } else if (o instanceof float[]) {
                        out.putInt(Arrays.hashCode((float[]) o));
                    } else if (o instanceof double[]) {
                        out.putInt(Arrays.hashCode((double[]) o));
                    } else if (o instanceof boolean[]) {
                        out.putInt(Arrays.hashCode((boolean[]) o));
                    } else if (o != null) {
                        out.putInt(o.hashCode());
                    } else {
                        out.putInt(0);
                    }
                }
                raw = out.array();
            } else {
                raw = values.get(0).toString().getBytes(); // assume key is the first field
            }
            int firstChoice = (int) (Math.abs(h1.hashBytes(raw).asLong()) % this.targetTasks.size());
            int secondChoice = (int) (Math.abs(h2.hashBytes(raw).asLong()) % this.targetTasks.size());
            int selected = targetTaskStats[firstChoice] > targetTaskStats[secondChoice] ? secondChoice : firstChoice;
            boltIds.add(targetTasks.get(selected));
            targetTaskStats[selected]++;
        }
        return boltIds;
    }
}
- As you can see, PartialKeyGrouping is a CustomStreamGrouping; in prepare it initializes long[] targetTaskStats to track how many tuples each target task has received.
- If no fields are specified, partialKeyGrouping defaults to computing the key from the first field of the outputFields.
- Two HashFunctions are built with Guava's Hashing.murmur3_128; taking the absolute value of each hash modulo targetTasks.size() yields two candidate task indexes.
- Of those two candidates, the one with the smaller count in targetTaskStats is chosen, and the selected task's count is then incremented, as the sketch below illustrates.
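A standalone sketch of that selection logic, using the same Guava hash functions as the 1.2.2 source; the task ids and words are made up for illustration:

// "Power of two choices": two murmur3_128 hashes give two candidate slots,
// and the candidate that has received fewer tuples so far wins.
import java.util.Arrays;
import java.util.List;

import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;

public class TwoChoicesDemo {
    public static void main(String[] args) {
        List<Integer> targetTasks = Arrays.asList(101, 102, 103, 104);
        long[] targetTaskStats = new long[targetTasks.size()];
        HashFunction h1 = Hashing.murmur3_128(13);
        HashFunction h2 = Hashing.murmur3_128(17);

        for (String word : new String[]{"storm", "storm", "storm", "flink"}) {
            byte[] raw = word.getBytes();
            int firstChoice = (int) (Math.abs(h1.hashBytes(raw).asLong()) % targetTasks.size());
            int secondChoice = (int) (Math.abs(h2.hashBytes(raw).asLong()) % targetTasks.size());
            // pick the candidate with the smaller usage count, then bump its counter
            int selected = targetTaskStats[firstChoice] > targetTaskStats[secondChoice] ? secondChoice : firstChoice;
            targetTaskStats[selected]++;
            System.out.println(word + " -> task " + targetTasks.get(selected));
        }
    }
}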
PartialKeyGrouping (2.0.0)
storm-2.0.0/storm-client/src/jvm/org/apache/storm/grouping/PartialKeyGrouping.java
/**
 * A variation on FieldGrouping. This grouping operates on a partitioning of the incoming tuples (like a FieldGrouping), but it can send
 * Tuples from a given partition to multiple downstream tasks.
 *
 * Given a total pool of target tasks, this grouping will always send Tuples with a given key to one member of a subset of those tasks. Each
 * key is assigned a subset of tasks. Each tuple is then sent to one task from that subset.
 *
 * Notes: - the default TaskSelector ensures each task gets as close to a balanced number of Tuples as possible - the default
 * AssignmentCreator hashes the key and produces an assignment of two tasks
 */
public class PartialKeyGrouping implements CustomStreamGrouping, Serializable {
    private static final long serialVersionUID = -1672360572274911808L;
    private List<Integer> targetTasks;
    private Fields fields = null;
    private Fields outFields = null;
    private AssignmentCreator assignmentCreator;
    private TargetSelector targetSelector;

    public PartialKeyGrouping() {
        this(null);
    }

    public PartialKeyGrouping(Fields fields) {
        this(fields, new RandomTwoTaskAssignmentCreator(), new BalancedTargetSelector());
    }

    public PartialKeyGrouping(Fields fields, AssignmentCreator assignmentCreator) {
        this(fields, assignmentCreator, new BalancedTargetSelector());
    }

    public PartialKeyGrouping(Fields fields, AssignmentCreator assignmentCreator, TargetSelector targetSelector) {
        this.fields = fields;
        this.assignmentCreator = assignmentCreator;
        this.targetSelector = targetSelector;
    }

    @Override
    public void prepare(WorkerTopologyContext context, GlobalStreamId stream, List<Integer> targetTasks) {
        this.targetTasks = targetTasks;
        if (this.fields != null) {
            this.outFields = context.getComponentOutputFields(stream);
        }
    }

    @Override
    public List<Integer> chooseTasks(int taskId, List<Object> values) {
        List<Integer> boltIds = new ArrayList<>(1);
        if (values.size() > 0) {
            final byte[] rawKeyBytes = getKeyBytes(values);
            final int[] taskAssignmentForKey = assignmentCreator.createAssignment(this.targetTasks, rawKeyBytes);
            final int selectedTask = targetSelector.chooseTask(taskAssignmentForKey);
            boltIds.add(selectedTask);
        }
        return boltIds;
    }

    /**
     * Extract the key from the input Tuple.
     */
    private byte[] getKeyBytes(List<Object> values) {
        byte[] raw;
        if (fields != null) {
            List<Object> selectedFields = outFields.select(fields, values);
            ByteBuffer out = ByteBuffer.allocate(selectedFields.size() * 4);
            for (Object o : selectedFields) {
                if (o instanceof List) {
                    out.putInt(Arrays.deepHashCode(((List) o).toArray()));
                } else if (o instanceof Object[]) {
                    out.putInt(Arrays.deepHashCode((Object[]) o));
                } else if (o instanceof byte[]) {
                    out.putInt(Arrays.hashCode((byte[]) o));
                } else if (o instanceof short[]) {
                    out.putInt(Arrays.hashCode((short[]) o));
                } else if (o instanceof int[]) {
                    out.putInt(Arrays.hashCode((int[]) o));
                } else if (o instanceof long[]) {
                    out.putInt(Arrays.hashCode((long[]) o));
                } else if (o instanceof char[]) {
                    out.putInt(Arrays.hashCode((char[]) o));
                } else if (o instanceof float[]) {
                    out.putInt(Arrays.hashCode((float[]) o));
                } else if (o instanceof double[]) {
                    out.putInt(Arrays.hashCode((double[]) o));
                } else if (o instanceof boolean[]) {
                    out.putInt(Arrays.hashCode((boolean[]) o));
                } else if (o != null) {
                    out.putInt(o.hashCode());
                } else {
                    out.putInt(0);
                }
            }
            raw = out.array();
        } else {
            raw = values.get(0).toString().getBytes(); // assume key is the first field
        }
        return raw;
    }

    //......
}
- Version 2.0.0 moves this logic into RandomTwoTaskAssignmentCreator and BalancedTargetSelector; a usage sketch follows below.
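Because the three-argument constructor is public, a topology can pass its own AssignmentCreator/TargetSelector through customGrouping. The fragment below is a sketch reusing the builder and component ids from the test at the top; passing the two defaults explicitly should be equivalent to partialKeyGrouping(spoutId, new Fields("word")):

// swap in a custom AssignmentCreator/TargetSelector here instead of the defaults if needed
builder.setBolt(counterId, new RollingCountBolt(9, 3), 4)
       .customGrouping(spoutId, new PartialKeyGrouping(
           new Fields("word"),
           new PartialKeyGrouping.RandomTwoTaskAssignmentCreator(),
           new PartialKeyGrouping.BalancedTargetSelector()));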
RandomTwoTaskAssignmentCreator
storm-2.0.0/storm-client/src/jvm/org/apache/storm/grouping/PartialKeyGrouping.java
/**
 * This interface is responsible for choosing a subset of the target tasks to use for a given key.
 *
 * NOTE: whatever scheme you use to create the assignment should be deterministic. This may be executed on multiple Storm Workers, thus
 * each of them needs to come up with the same assignment for a given key.
 */
public interface AssignmentCreator extends Serializable {
    int[] createAssignment(List<Integer> targetTasks, byte[] key);
}

/*========== Implementations ==========*/

/**
 * This implementation of AssignmentCreator chooses two arbitrary tasks.
 */
public static class RandomTwoTaskAssignmentCreator implements AssignmentCreator {
    /**
     * Creates a two task assignment by selecting random tasks.
     */
    public int[] createAssignment(List<Integer> tasks, byte[] key) {
        // It is necessary that this produce a deterministic assignment based on the key, so seed the Random from the key
        final long seedForRandom = Arrays.hashCode(key);
        final Random random = new Random(seedForRandom);
        final int choice1 = random.nextInt(tasks.size());
        int choice2 = random.nextInt(tasks.size());
        // ensure that choice1 and choice2 are not the same task
        choice2 = choice1 == choice2 ? (choice2 + 1) % tasks.size() : choice2;
        return new int[]{ tasks.get(choice1), tasks.get(choice2) };
    }
}
- Version 2.0.0 no longer uses Guava's Hashing.murmur3_128; instead it uses the key's hash as the seed of a Random and draws the two task indexes from it. As before, two candidates are returned so the grouping can load-balance between them (see the sketch below).
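A small sketch showing why seeding the Random from the key keeps the assignment deterministic across workers: the same key always yields the same pair of candidate tasks (the task ids here are made up):

import java.util.Arrays;
import java.util.List;

import org.apache.storm.grouping.PartialKeyGrouping;

public class DeterministicAssignmentDemo {
    public static void main(String[] args) {
        List<Integer> tasks = Arrays.asList(7, 8, 9, 10);
        PartialKeyGrouping.RandomTwoTaskAssignmentCreator creator =
            new PartialKeyGrouping.RandomTwoTaskAssignmentCreator();
        byte[] key = "storm".getBytes();
        // both calls print the same pair, no matter which worker runs them
        System.out.println(Arrays.toString(creator.createAssignment(tasks, key)));
        System.out.println(Arrays.toString(creator.createAssignment(tasks, key)));
    }
}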
BalancedTargetSelector
storm-2.0.0/storm-client/src/jvm/org/apache/storm/grouping/PartialKeyGrouping.java
/**
 * This interface chooses one element from a task assignment to send a specific Tuple to.
 */
public interface TargetSelector extends Serializable {
    Integer chooseTask(int[] assignedTasks);
}

/**
 * A basic implementation of target selection. This strategy chooses the task within the assignment that has received the fewest Tuples
 * overall from this instance of the grouping.
 */
public static class BalancedTargetSelector implements TargetSelector {
    private Map<Integer, Long> targetTaskStats = Maps.newHashMap();

    /**
     * Chooses one of the incoming tasks and selects the one that has been selected the fewest times so far.
     */
    public Integer chooseTask(int[] assignedTasks) {
        Integer taskIdWithMinLoad = null;
        Long minTaskLoad = Long.MAX_VALUE;
        for (Integer currentTaskId : assignedTasks) {
            final Long currentTaskLoad = targetTaskStats.getOrDefault(currentTaskId, 0L);
            if (currentTaskLoad < minTaskLoad) {
                minTaskLoad = currentTaskLoad;
                taskIdWithMinLoad = currentTaskId;
            }
        }
        targetTaskStats.put(taskIdWithMinLoad, targetTaskStats.getOrDefault(taskIdWithMinLoad, 0L) + 1);
        return taskIdWithMinLoad;
    }
}
- BalancedTargetSelector walks the assigned taskIds, looks up their counts in targetTaskStats, returns the taskIdWithMinLoad, and increments its count afterwards (a quick sketch follows).
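A quick sketch of that behaviour: with a fixed two-task assignment, BalancedTargetSelector simply alternates, because every pick bumps the chosen task's count:

import org.apache.storm.grouping.PartialKeyGrouping;

public class BalancedSelectorDemo {
    public static void main(String[] args) {
        PartialKeyGrouping.BalancedTargetSelector selector =
            new PartialKeyGrouping.BalancedTargetSelector();
        int[] assignment = {3, 7};
        for (int i = 0; i < 4; i++) {
            // prints 3, 7, 3, 7 - always the least-used task of the pair
            System.out.println(selector.chooseTask(assignment));
        }
    }
}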
FieldsGrouper
storm-2.0.0/storm-client/src/jvm/org/apache/storm/daemon/GrouperFactory.java
public static class FieldsGrouper implements CustomStreamGrouping {
    private Fields outFields;
    private List<List<Integer>> targetTasks;
    private Fields groupFields;
    private int numTasks;

    public FieldsGrouper(Fields outFields, Grouping thriftGrouping) {
        this.outFields = outFields;
        this.groupFields = new Fields(Thrift.fieldGrouping(thriftGrouping));
    }

    @Override
    public void prepare(WorkerTopologyContext context, GlobalStreamId stream, List<Integer> targetTasks) {
        this.targetTasks = new ArrayList<List<Integer>>();
        for (Integer targetTask : targetTasks) {
            this.targetTasks.add(Collections.singletonList(targetTask));
        }
        this.numTasks = targetTasks.size();
    }

    @Override
    public List<Integer> chooseTasks(int taskId, List<Object> values) {
        int targetTaskIndex = TupleUtils.chooseTaskIndex(outFields.select(groupFields, values), numTasks);
        return targetTasks.get(targetTaskIndex);
    }
}
- Here you can see that FieldsGrouper's chooseTasks method uses TupleUtils.chooseTaskIndex to pick the task index.
TupleUtils.chooseTaskIndex
storm-2.0.0/storm-client/src/jvm/org/apache/storm/utils/TupleUtils.java
public static <T> int chooseTaskIndex(List<T> keys, int numTasks) {
    return Math.floorMod(listHashCode(keys), numTasks);
}

private static <T> int listHashCode(List<T> alist) {
    if (alist == null) {
        return 1;
    } else {
        return Arrays.deepHashCode(alist.toArray());
    }
}
- The keys are first hashed via listHashCode, then reduced with Math.floorMod(hash, numTasks); the floor modulus always yields a non-negative index for a positive numTasks, unlike the % operator (the sketch after this list shows the difference).
- listHashCode computes the hash with Arrays.deepHashCode(alist.toArray()).
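A tiny sketch of why Math.floorMod is used rather than %: Java hash codes can be negative, and floorMod keeps the index inside [0, numTasks):

import java.util.Arrays;
import java.util.List;

public class ChooseTaskIndexDemo {
    public static void main(String[] args) {
        int numTasks = 4;
        int negativeHash = -7;                                      // stand-in for a negative list hash code
        System.out.println(negativeHash % numTasks);                // -3: not a valid task index
        System.out.println(Math.floorMod(negativeHash, numTasks));  // 1: always in [0, numTasks)

        // the same computation chooseTaskIndex performs on the selected grouping fields
        List<String> keys = Arrays.asList("word");
        System.out.println(Math.floorMod(Arrays.deepHashCode(keys.toArray()), numTasks));
    }
}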
Summary
- Storm's PartialKeyGrouping addresses the skewed load that fieldsGrouping can cause on bolt tasks.
- fieldsGrouping picks the task index by hashing the selected fields and taking the floor modulus with the number of tasks.
- In 1.2.2, PartialKeyGrouping computes two hashes with Guava's Hashing.murmur3_128 and takes each absolute value modulo the number of tasks to get two candidate task indexes; in 2.0.0 it instead seeds a Random with the key's hash and draws the two indexes from it. Either way, two candidates are produced for load balancing, which is the key difference from fieldsGrouping. PartialKeyGrouping additionally tracks how many tuples each task has received, sends each tuple to the less-used of the two candidates, and updates that count.
- Note again that with PartialKeyGrouping on the word-count bolt, the same word is no longer pinned to a single task, which is why RollingCountAggBolt has to merge the partial counts via fieldsGrouping.
doc
- Common Topology Patterns
- The Power of Both Choices: Practical Load Balancing for Distributed Stream Processing Engines
- Storm Source Code Analysis: Streaming Grouping (backtype.storm.daemon.executor)