call
Both call and function are units of code that are sent to distributed nodes for execution.
A call is an operator that implements the IgniteCallable interface and is dispatched to nodes via ignite.compute().
Both call and function can be executed synchronously or asynchronously; in most cases we use asynchronous execution.
this.compute.broadcast(() -> System.out.println("Hello Node: " + ignite.cluster().localNode().id()));
private Collection<IgniteCallable<Integer>> createCalls() {
    Collection<IgniteCallable<Integer>> calls = new ArrayList<>();
    for (String word : "How many characters".split(" ")) {
        calls.add(() -> word.length());
    }
    return calls;
}
public boolean call() {
    Collection<Integer> res = this.compute.call(createCalls());
    int total = res.stream().mapToInt(Integer::intValue).sum();
    logger.info("call: the total length of all words = " + total);
    return true;
}
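Before the callables ever reach the cluster, they are just lambdas over local data. As a sanity check, the same word-length reduction can be run in plain Java with no Ignite at all (class name `LocalCallSketch` is hypothetical, chosen for this sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

public class LocalCallSketch {
    public static void main(String[] args) throws Exception {
        // Build one Callable per word, mirroring createCalls() above
        List<Callable<Integer>> calls = new ArrayList<>();
        for (String word : "How many characters".split(" ")) {
            calls.add(() -> word.length());
        }
        // Locally, the reduction is just a sum of the per-word results
        int total = 0;
        for (Callable<Integer> c : calls) {
            total += c.call();
        }
        System.out.println(total); // 3 + 4 + 10 = 17
    }
}
```

On the cluster, compute.call() performs exactly this reduction, except that each Callable may execute on a different node.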
public boolean asyncCall() {
    IgniteFuture<Collection<Integer>> future = this.compute.callAsync(createCalls());
    future.listen(fut -> {
        int total = fut.get().stream().mapToInt(Integer::intValue).sum();
        logger.info("asyncCall: total number of characters = " + total);
    });
    return true;
}
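IgniteFuture.listen() attaches a callback that fires when the result arrives, much like CompletableFuture.thenAccept() in the JDK. A plain-Java sketch of the same pattern (class name `AsyncListenSketch` is hypothetical; CompletableFuture stands in for IgniteFuture here):

```java
import java.util.Arrays;
import java.util.concurrent.CompletableFuture;

public class AsyncListenSketch {
    public static void main(String[] args) {
        // Analogue of compute.callAsync(): the sum is produced on another thread
        CompletableFuture<Integer> future = CompletableFuture.supplyAsync(() ->
            Arrays.stream("How many characters".split(" "))
                  .mapToInt(String::length)
                  .sum());
        // Analogue of IgniteFuture.listen(): a callback fired on completion
        future.thenAccept(total -> System.out.println("total = " + total))
              .join(); // block only so the demo JVM waits for the callback
    }
}
```

The caller thread is never blocked by the computation itself, which is why asyncCall() can return immediately while the log line appears later.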
map-reduce
call and map-reduce fit the replicated cache mode particularly well: once every node has received a full copy of the data via replication, call and map-reduce can read it locally and return results quickly.
That said, the official guidance that "Replicated caches are ideal when data sets are small and updates are infrequent." is a bit ironic here.
And what happens when, due to clock-synchronization differences, the data on different nodes becomes inconsistent?
public boolean mapReduce() {
    String text = "Hello Ignite Enable World!";
    int cnt = ignite.compute().execute(MapExampleCharacterCountTask.class, text);
    logger.info("mapReduce: text length without spaces = " + cnt);
    return true;
}
private static class MapExampleCharacterCountTask extends ComputeTaskAdapter<String, Integer> {
    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> nodes, String arg) throws IgniteException {
        Map<ComputeJob, ClusterNode> map = new HashMap<>();
        Iterator<ClusterNode> it = nodes.iterator();
        for (final String word : arg.split(" ")) {
            if (!it.hasNext()) {
                it = nodes.iterator(); // wrap around: reuse nodes round-robin
            }
            ClusterNode node = it.next();
            map.put(new ComputeJobAdapter() {
                @Override
                public Object execute() throws IgniteException {
                    System.out.println("** node map reduce call **> " + word);
                    return word.length();
                }
            }, node);
        }
        return map;
    }

    @Override
    public Integer reduce(List<ComputeJobResult> results) throws IgniteException {
        int sum = 0;
        for (ComputeJobResult res : results) {
            sum += res.<Integer>getData();
        }
        return sum;
    }
}
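The map() phase above distributes one job per word across the node list round-robin, and reduce() sums the per-job results regardless of which node produced them. A minimal plain-Java sketch of that assignment and reduction (no Ignite types; the node names and class name `RoundRobinSketch` are made up for illustration):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RoundRobinSketch {
    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-1", "node-2"); // pretend cluster
        Map<String, String> jobToNode = new LinkedHashMap<>(); // word -> node
        Iterator<String> it = nodes.iterator();
        for (String word : "Hello Ignite Enable World!".split(" ")) {
            if (!it.hasNext()) {
                it = nodes.iterator(); // wrap around, as in map() above
            }
            jobToNode.put(word, it.next());
        }
        // reduce(): sum the per-word lengths, regardless of which node ran each job
        int total = jobToNode.keySet().stream().mapToInt(String::length).sum();
        System.out.println(jobToNode);
        System.out.println(total); // 5 + 6 + 6 + 6 = 23
    }
}
```

This matches the mapReduce() log output above: "Hello Ignite Enable World!" has 23 characters excluding spaces.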
affinity compute
When a cache is deployed in partitioned mode, affinity compute routes the compute task using the same algorithm the cache uses to place data, so the operator executes on the node that owns the data and enjoys local read speed. Suppose each object in the cache contains an array of 1000 random numbers, and our computation sums that array.
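The key idea is that key-to-partition mapping is deterministic: the same key always resolves to the same partition, so affinityCall(cacheName, key, task) and cache.get(key) land on the same node. Ignite's real mapping is done by RendezvousAffinityFunction; the modulo scheme below is a deliberately simplified stand-in for illustration only (class name `AffinityRoutingSketch` is hypothetical):

```java
public class AffinityRoutingSketch {
    static final int PARTITIONS = 1024; // Ignite's default partition count

    // Simplified stand-in for an affinity function: same key -> same partition.
    // NOT Ignite's actual algorithm (RendezvousAffinityFunction), just the idea.
    static int partition(Object key) {
        return Math.abs(key.hashCode() % PARTITIONS);
    }

    public static void main(String[] args) {
        Long key = 42L;
        // Routing is deterministic: a task sent with affinityCall(cacheName, key, task)
        // always runs on the node that owns this partition, where the data lives
        int p1 = partition(key);
        int p2 = partition(key);
        System.out.println(p1 == p2); // true: both lookups route identically
    }
}
```

Because the task and the data share a partition, SumTask below can read its entry from the local node without any network hop.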
import org.apache.ignite.*;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;

public class AffinityComputeExample {
    final static int COUNT_ORG = 1000;
    final static String cacheName = "organization";
    final static int QUERY_TIMES = 10;

    Ignite ignite;
    IgniteCache<Long, Organization> cache;
    Logger logger = LoggerFactory.getLogger(getClass());
    Long[] idxes;

    public AffinityComputeExample() {
        idxes = new Long[QUERY_TIMES];
        for (int i = 0; i < QUERY_TIMES; i++) {
            idxes[i] = Long.valueOf(i + 1);
        }
    }
    public void setUp() {
        String path = AffinityComputeExample.class.getResource("/example-affinitykey.xml").getFile();
        this.ignite = Ignition.start(path);
        this.cache = this.ignite.getOrCreateCache(cacheName);
        this.cache.clear();

        IgniteDataStreamer<Long, Organization> streamerOrg = ignite.dataStreamer(cacheName);
        logger.info("load data ...");
        for (int i = 1; i <= COUNT_ORG; i++) {
            Organization r = new Organization("org_" + i);
            streamerOrg.addData(r.id, r);
        }
        streamerOrg.flush();
        streamerOrg.close();
    }
    public void run() {
        setUp();
        IgniteCompute compute = this.ignite.compute();
        for (Long k : idxes) {
            Long sum = compute.affinityCall(cacheName, k, new SumTask(k));
            logger.info(k + " sum = " + sum);
        }
    }

    private static class SumTask implements IgniteCallable<Long> {
        Long key;

        @IgniteInstanceResource
        private Ignite ignite;

        public SumTask(Long k) {
            this.key = k;
        }

        @Override
        public Long call() throws Exception {
            IgniteCache<Long, BinaryObject> cache = ignite.cache(cacheName).withKeepBinary();
            System.out.println(this.key);
            BinaryObject obj = cache.get(this.key);
            if (obj != null) {
                List<Long> data = obj.field("data");
                return data.stream().mapToLong(Long::longValue).sum();
            }
            return null;
        }
    }
}
https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation