Intro to Storm

Author: Crazy_Maomao | Published 2017-02-04 03:52

    This tutorial introduces the basic concepts of Apache Storm, its installation, and some working examples.

    Core Concepts

    1. What is Storm?
      Apache Storm is a distributed real-time stream processing system.
    2. Core Concepts
    [Figures: Topology.png, Architecture.png]
    | Component | Description |
    | --- | --- |
    | Tuple | The main data structure in Storm: an ordered list of values. By default, a tuple can hold values of any type. Generally it is modelled as a set of comma-separated values and passed through a Storm cluster. |
    | Stream | An unbounded sequence of tuples. |
    | Spouts | Sources of streams. Generally, Storm accepts input data from raw data sources like the Twitter Streaming API, an Apache Kafka queue, a Kestrel queue, etc. You can also write your own spouts to read data from other data sources. "ISpout" is the core interface for implementing spouts; some specific interfaces and base classes are IRichSpout, BaseRichSpout, KafkaSpout, etc. |
    | Bolts | Logical processing units. Spouts pass data to bolts, and bolts process it and produce new output streams. Bolts can perform filtering, aggregation, joining, and interaction with data sources and databases. A bolt receives tuples and emits to one or more other bolts. "IBolt" is the core interface for implementing bolts; some common interfaces are IRichBolt, IBasicBolt, etc. |
    | Topology | A directed graph whose vertices are computations and whose edges are streams of data. Storm keeps a topology running until you kill it. |
    | Tasks | A task is one execution of a spout or bolt. A spout or bolt can run as multiple tasks. |
    | Workers | Worker processes run on the worker nodes and execute the tasks of spouts and bolts. Storm manages them for you, so you usually don't need to care about them directly. |
    | Nimbus | The master node of Storm, responsible for distributing code and assigning tasks to the worker nodes. |
    | Supervisor | Runs on each worker node and runs the tasks assigned by Nimbus. |
    | ZooKeeper | Monitors the status of Nimbus and the supervisors, and transfers data and messages between them. |
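
    As a tiny illustration of how these pieces fit together (a sketch; the field name "word" and the variable names are assumptions), a spout or bolt declares the fields of the tuples it emits, and a downstream bolt reads values by index or by field name:

    // Declaring the schema of an output stream (in declareOutputFields):
    declarer.declare(new Fields("word"));

    // Emitting a tuple into that stream:
    collector.emit(new Values("hello"));

    // Reading the tuple in a downstream bolt, by index or by field name:
    String byIndex = tuple.getString(0);
    String byName = tuple.getStringByField("word");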

    **Features**:

    • Stateless
    • Fault tolerant
    • Efficient, very fast, and extensible

    Installation

    1. Install Storm on macOS
    brew install zookeeper
    brew install zeromq
    brew install storm
    

    Edit the Storm config file storm.yaml in storm/libexec/conf:

    storm.zookeeper.servers:
    - "localhost"
    # - "server2"
    nimbus.host: "localhost"
    nimbus.thrift.port: 6627
    ui.port: 8772
    storm.local.dir: "/Users/yourowndirectory/storm/data"
    supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
    

    Then start each service in its own console:

    zkServer start
    storm nimbus
    storm supervisor
    storm ui
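
    Once all four services are up, the Storm UI should be reachable in a browser at http://localhost:8772 (the ui.port set above), and the `storm list` command will show the topologies running on the cluster.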
    

    Install in development mode with IntelliJ

    1. Download the Storm source code
    2. Unzip it and download IntelliJ IDEA
    3. Follow the instructions in the "Using storm-starter with IntelliJ IDEA" section
    4. Change the dependency:
      <dependency>
          <groupId>org.apache.storm</groupId>
          <artifactId>storm-core</artifactId>
          <version>${project.version}</version>
          <!--
            Use "provided" scope to keep storm out of the jar-with-dependencies.
            For development in IntelliJ, leave the scope commented out so the
            IDE loads storm-core onto the classpath.
          -->
          <!-- <scope>${provided.scope}</scope> -->
        </dependency>
    

    Define Spout Action

    A spout class defines how the data is generated or read from a source. We implement the IRichSpout interface, which has the following methods:

    • open : sets up the environment for the spout, including its data source
    • nextTuple : emits the generated data, one tuple at a time
    • close : shuts down the data source
    public void nextTuple() {
        Utils.sleep(100);  // avoid busy-spinning when there is nothing to emit
        final String[] words = new String[] {"nathan", "mike", "jackson", "golda", "bertels"};
        final Random rand = new Random();
        final String word = words[rand.nextInt(words.length)];
        _collector.emit(new Values(word));  // emit a one-field tuple into the stream
    }
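
    For context, here is a minimal complete spout built around that nextTuple method (a sketch modelled on storm-starter's TestWordSpout; the class name RandomWordSpout and the imports are assumptions):

    import java.util.Map;
    import java.util.Random;
    import org.apache.storm.spout.SpoutOutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichSpout;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Values;
    import org.apache.storm.utils.Utils;

    // Hypothetical spout that emits a random word every 100 ms.
    public class RandomWordSpout extends BaseRichSpout {
        private SpoutOutputCollector _collector;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            _collector = collector;  // keep the collector so nextTuple can emit
        }

        @Override
        public void nextTuple() {
            Utils.sleep(100);
            final String[] words = new String[] {"nathan", "mike", "jackson", "golda", "bertels"};
            _collector.emit(new Values(words[new Random().nextInt(words.length)]));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));  // one output field named "word"
        }
    }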
    

    Define Bolt

    A bolt subscribes to streams from spouts or other bolts and processes the tuples it receives. It implements the IRichBolt interface.

    • prepare − Provides the bolt with an environment to execute. The executors run this method to initialize the bolt.
    • execute − Processes a single tuple of input.
    • cleanup − Called when a bolt is about to shut down.
    • declareOutputFields − Declares the output schema of the tuple.
    public static class ExclamationBolt extends BaseRichBolt {
        OutputCollector _collector;
    
        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
          _collector = collector;
        }
    
        @Override
        public void execute(Tuple tuple) {
          _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
          _collector.ack(tuple);
        }
    
        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("word"));
        }
      }
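
    Note that emit(tuple, new Values(...)) anchors the new tuple to the input tuple, so Storm's reliability mechanism can replay the input if downstream processing fails, and ack(tuple) tells Storm that this input has been fully processed.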
    

    Define Topology

    The TopologyBuilder class provides simple and easy methods to create complex topologies:

        TopologyBuilder builder = new TopologyBuilder();
    
        builder.setSpout("word", new TestWordSpout(), 10);
        builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");
        builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");
    

    Stream Grouping

    When a tuple is sent from Bolt A to Bolt B, which task of Bolt B should receive it?
    The stream grouping defines this.

    TopologyBuilder builder = new TopologyBuilder();
    
    builder.setSpout("sentences", new RandomSentenceSpout(), 5);        
    builder.setBolt("split", new SplitSentence(), 8)
            .shuffleGrouping("sentences");
    builder.setBolt("count", new WordCount(), 12)
            .fieldsGrouping("split", new Fields("word"));
    

    You can refer to the other grouping methods in the docs.
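
    For example, two of the other built-in groupings (a sketch; LoggerBolt and TotalCountBolt are hypothetical bolts, and the component names reuse the snippet above):

    // allGrouping: every task of the bolt receives a copy of each tuple.
    builder.setBolt("logger", new LoggerBolt(), 4).allGrouping("sentences");

    // globalGrouping: all tuples go to a single task of the target bolt.
    builder.setBolt("total", new TotalCountBolt()).globalGrouping("count");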

    Trident on Apache Storm

    Trident is a high-level abstraction for stream processing on Storm, somewhat like SQL.
    The Trident API exposes an easy way to create a Trident topology using the "TridentTopology" class. Basically, a Trident topology receives an input stream from a spout and performs an ordered sequence of operations (filter, aggregation, grouping, etc.) on the stream. Storm's Tuple is replaced by the Trident Tuple, and bolts are replaced by operations.
    It includes these important operations:

    • Filter : keeps a subset of the tuples in a stream
    public class MyFilter extends BaseFilter {
       public boolean isKeep(TridentTuple tuple) {
          // keep only tuples whose second field is even
          return tuple.getInteger(1) % 2 == 0;
       }
    }
    
    Input:

    [1, 2]
    [1, 3]
    [1, 4]

    Output:

    [1, 2]
    [1, 4]


    In Storm you can use it like this:

    TridentTopology topology = new TridentTopology();
    topology.newStream("spout", spout)
            .each(new Fields("a", "b"), new MyFilter());
    
    • Function: performs a simple operation on a tuple and emits new fields.
    public class MyFunction extends BaseFunction {
       public void execute(TridentTuple tuple, TridentCollector collector) {
          int a = tuple.getInteger(0);
          int b = tuple.getInteger(1);
          collector.emit(new Values(a + b));  // append the sum as a new field
       }
    }
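
    Wiring a function into a stream works like the filter above, except you also name the fields it emits (the output field name "sum" is an assumption):

    topology.newStream("spout", spout)
            .each(new Fields("a", "b"), new MyFunction(), new Fields("sum"));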
    
    • Aggregation : performs aggregation operations such as count or sum over a batch, a partition, or the whole stream

    • Grouping : groups the stream by the given fields, so that aggregations run per group (see the sketch after this list)

    • Merging and Joining : combine several streams into one
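
    Grouping and aggregation together give the classic Trident word count (a sketch modelled on the storm-starter TridentWordCount example; the spout, the Split function, and the field names are assumptions):

    import org.apache.storm.trident.TridentTopology;
    import org.apache.storm.trident.operation.builtin.Count;
    import org.apache.storm.trident.testing.MemoryMapState;
    import org.apache.storm.tuple.Fields;

    TridentTopology topology = new TridentTopology();
    topology.newStream("sentences", spout)
            .each(new Fields("sentence"), new Split(), new Fields("word"))  // Split is a Function like MyFunction
            .groupBy(new Fields("word"))                                    // one group per distinct word
            .persistentAggregate(new MemoryMapState.Factory(),              // in-memory state, for testing
                                 new Count(), new Fields("count"));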

    What's next? More practical examples.
