HBase Pseudo-Distributed Installation and Basic Java Operations

Author: 像鸣人 | Published 2018-06-30 16:59

    Single-machine Installation on CentOS

    Reference: https://www.yiibai.com/hadoop/

    Configure SSH

    Reference: https://blog.csdn.net/liaoguolingxian/article/details/70233910

    Download & install Hadoop

    Download: http://www-eu.apache.org/dist/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz

    Install: tar -xzvf hadoop-2.8.4.tar.gz -C /usr/local/hadoop

    Configure environment variables:

    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$HADOOP_HOME/bin:$PATH
    

    Hadoop configuration files

    Directory: hadoop/etc/hadoop

    • core-site.xml
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value> 
    </property>
    
    • hdfs-site.xml
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.name.dir</name>
      <value>file:///home/docker/hadoop/hdfs/namenode</value>
    </property>
    <property>
      <name>dfs.data.dir</name> 
      <value>file:///home/docker/hadoop/hdfs/datanode</value> 
    </property>
    
    • yarn-site.xml
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value> 
    </property>
    
    • mapred-site.xml (copy from mapred-site.xml.template if it does not exist)
    <property> 
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    

    Start Hadoop

    • Change permissions on /home/docker/hadoop/hdfs
      chmod -R 777 /home/docker/hadoop/hdfs

    • Format the NameNode
      hdfs namenode -format

    • Start
      hadoop/sbin/start-all.sh

    Common commands

    List files
    hadoop fs -ls /

    Create a directory
    hadoop fs -mkdir -p /user/input

    Upload a file
    hadoop fs -put file.txt /user/input

    View file contents
    hadoop fs -cat /user/input/file.txt

    Download files
    hadoop fs -get /user/input/ tmp
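
    The same operations can also be driven from Java through the HDFS FileSystem API (the same hadoop-client dependency used in the Java section below). A minimal sketch, assuming the fs.default.name value from core-site.xml above; the class name and paths are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsQuickDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // same value as fs.default.name in core-site.xml
            conf.set("fs.defaultFS", "hdfs://localhost:9000");
            try (FileSystem fs = FileSystem.get(conf)) {
                fs.mkdirs(new Path("/user/input"));                     // hadoop fs -mkdir -p
                fs.copyFromLocalFile(new Path("file.txt"),
                        new Path("/user/input/file.txt"));              // hadoop fs -put
                for (FileStatus status : fs.listStatus(new Path("/user/input"))) {
                    System.out.println(status.getPath() + "  " + status.getLen());  // hadoop fs -ls
                }
                fs.copyToLocalFile(new Path("/user/input/file.txt"),
                        new Path("tmp/file.txt"));                      // hadoop fs -get
            }
        }
    }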

    Enable / disable Hadoop debug logging

    export HADOOP_ROOT_LOGGER=DEBUG,console
    export HADOOP_ROOT_LOGGER=INFO,console
    

    Install HBase

    1. Download: http://archive.apache.org/dist/hbase/2.0.0/hbase-2.0.0-bin.tar.gz

    2. Extract: tar -xzvf hbase-2.0.0-bin.tar.gz

    3. Move: mv hbase-2.0.0 /usr/local/hbase

    4. Configure the hostname mapping. Reference: https://blog.csdn.net/qq_31570685/article/details/51757604

    • Add the mapping to /etc/hosts
    192.168.1.199   zk_master
    

    192.168.1.199 is this machine's IP; zk_master is the mapped hostname and can be anything you like

    In my testing, step 5 below can actually be skipped.

    5. Set a static IP
    • Use ifconfig to check the machine's current IP configuration
    • cd /etc/sysconfig/network-scripts and ls to find the interface config file
    • Change DEVICE=zk_master, then save and exit
    6. Add an environment variable:
    export HBASE_HOME=/usr/local/hbase
    
    7. Modify the HBase configuration
    • hbase-env.sh
    export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64
    export HBASE_MANAGES_ZK=true
    
    • hbase-site.xml
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://192.168.1.199:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zk_master</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/docker/hadoop/hbase/zk/data</value>
    </property>
    <property>  
        <name>hbase.tmp.dir</name>  
        <value>/home/docker/hadoop/hbase/tmp</value>  
    </property>
    
    8. Start it
      /usr/local/hbase/bin/start-hbase.sh

    9. Stop it

    • /usr/local/hbase/bin/stop-hbase.sh

    This command occasionally blocks for a long time and never finishes; in that case, shut down with the commands below

    • Use ls -1 /tmp | grep hbase- to list all HBase-related files under /tmp, find the files with a *.pid suffix, and run:
    cat /tmp/hbase-user-zookeeper.pid |xargs kill -9
    cat /tmp/hbase-user-master.pid |xargs kill -9
    cat /tmp/hbase-user-1-regionserver.pid |xargs kill -9
    

    Steps 4 and 5 were only sorted out after reading the article referenced in step 4. Without them, command-line operations on the local machine work fine, but remote clients cannot connect. This single issue took three days to resolve!
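
    Why the mapping matters for remote clients: the client looks up the master and region server locations in ZooKeeper and gets back the hostname zk_master, which it must then resolve itself. A minimal sketch for checking that from the remote client machine (the class name is illustrative; the same hosts entry must exist on that machine too):

    import java.net.InetAddress;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // expect 192.168.1.199 if the hosts mapping is in place on this machine
            System.out.println(InetAddress.getByName("zk_master").getHostAddress());
        }
    }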

    Java Operations

    Implemented with the plain Java API plus a commons-pool2 connection pool; the class structure is shown below:


    (class structure diagram)
    1. pom.xml
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <artifactId>logback-core</artifactId>
                    <groupId>ch.qos.logback</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>logback-classic</artifactId>
                    <groupId>ch.qos.logback</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-hadoop</artifactId>
            <version>2.5.0.RELEASE</version>
        </dependency>
    
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.3</version>
            <exclusions>
                <exclusion>
                    <artifactId>log4j</artifactId>
                    <groupId>log4j</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.2.6.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>log4j</artifactId>
                    <groupId>log4j</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
    
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger2</artifactId>
            <version>2.6.1</version>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger-ui</artifactId>
            <version>2.6.1</version>
        </dependency>
    
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.2.3</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>
    
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>16.0.1</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.16</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.31</version>
        </dependency>
    
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
            <version>2.4.2</version>
        </dependency>
    
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
    

    The original plan was to build this with spring-boot/spring-data-hadoop, but for various reasons I went with the plain Java API first. Many of the jars above are therefore unused; you can try removing them or simply ignore them, it does not affect normal operation.

    2. Add the configuration file hbase-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    /**
     *
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    -->
    <configuration>
    
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://192.168.1.199:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zk_master</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/docker/hadoop/hbase/zk/data</value>
    </property>
    <property>  
        <name>hbase.tmp.dir</name>  
        <value>/home/docker/hadoop/hbase/tmp</value>  
    </property>
    
    </configuration>
    

    Copied straight from hbase/conf!

    3. Implement the connection pool (PooledObjectFactory)
    package com.frinder.hadoop.original;
    
    import org.apache.commons.pool2.PooledObject;
    import org.apache.commons.pool2.PooledObjectFactory;
    import org.apache.commons.pool2.impl.DefaultPooledObject;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    
    /**
     * @author frinder
     * @date 2018/6/30
     * @Description: requires the hbase-site.xml file in the resources directory
     */
    public class HBasePooledObjectFactory implements PooledObjectFactory<Connection> {
    
        protected static Configuration configuration;
    
        static {
            configuration = HBaseConfiguration.create();
            configuration.set("hbase.zookeeper.quorum", "zk_master");
            configuration.set("hbase.zookeeper.property.clientPort", "2181");
        }
    
        @Override
        public PooledObject<Connection> makeObject() throws Exception {
            Connection connection = ConnectionFactory.createConnection(configuration);
            return new DefaultPooledObject<>(connection);
        }
    
        @Override
        public void destroyObject(PooledObject<Connection> pooledObject) throws Exception {
            Connection connection;
            if (null != (connection = pooledObject.getObject()) && !connection.isClosed()) {
                connection.close();
            }
        }
    
        @Override
        public boolean validateObject(PooledObject<Connection> pooledObject) {
            Connection connection;
            return null != (connection = pooledObject.getObject()) && !connection.isClosed();
        }
    
        @Override
        public void activateObject(PooledObject<Connection> pooledObject) throws Exception {
        }
    
        @Override
        public void passivateObject(PooledObject<Connection> pooledObject) throws Exception {
        }
    }
    

    For commons-pool2 usage, see: https://www.jianshu.com/p/dea5ba90971e

    4. Interface and implementation for borrowing & returning connections:

    Interface

    package com.frinder.hadoop.original;
    
    import org.apache.hadoop.hbase.client.Connection;
    
    /**
     * @author frinder
     * @date 2018/6/30
     * @Description: interface for borrowing and returning pooled connections
     */
    public interface ConnectionProvider {
    
        /**
         * Borrow a conn from the pool
         *
         * @return
         */
        Connection getConn();
    
        /**
         * Return a conn to the pool
         *
         * @param conn
         */
        void returnConn(Connection conn);
    }
    

    Implementation

    package com.frinder.hadoop.original;
    
    import lombok.extern.slf4j.Slf4j;
    import org.apache.commons.pool2.impl.GenericObjectPool;
    import org.apache.hadoop.hbase.client.Connection;
    
    /**
     * @author frinder
     * @date 2018/6/30
     * @Description: implementation for borrowing & returning connections
     */
    @Slf4j
    public class MyConnectionProvider implements ConnectionProvider {
    
        private static GenericObjectPool<Connection> pool;
    
        static {
            HBasePooledObjectFactory factory = new HBasePooledObjectFactory();
            try {
                pool = new GenericObjectPool<>(factory);
            } catch (Exception e) {
                log.error(e.getMessage(), e);
            }
        }
    
        @Override
        public Connection getConn() {
            try {
                log.info("*** 从连接池获取连接!");
                return pool.borrowObject();
            } catch (Exception e) {
                throw new RuntimeException("getConn 失败,请检查配置", e);
            }
        }
    
        @Override
        public void returnConn(Connection conn) {
            try {
                if (null != conn) {
                    log.info("*** 归还连接池连接!");
                    pool.returnObject(conn);
                }
            } catch (Exception e) {
                throw new RuntimeException("returnConn 失败,请检查配置", e);
            }
        }
    }
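
    MyConnectionProvider above uses the pool's default settings. As a hedged sketch (the numbers are illustrative, not part of the original project), the static block could instead pass a GenericObjectPoolConfig, which the commons-pool2 2.4.2 in the pom supports, to tune the pool:

    // requires: import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
    static {
        GenericObjectPoolConfig config = new GenericObjectPoolConfig();
        config.setMaxTotal(10);         // cap concurrent HBase connections
        config.setMaxIdle(5);           // keep at most 5 idle connections
        config.setMinIdle(1);           // keep one connection warm
        config.setTestOnBorrow(true);   // run validateObject() before every borrow
        config.setMaxWaitMillis(5000);  // fail fast when the pool is exhausted
        pool = new GenericObjectPool<>(new HBasePooledObjectFactory(), config);
    }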
    
    5. HBase business class
    package com.frinder.hadoop.original;
    
    import com.google.common.base.Joiner;
    import com.google.common.collect.Lists;
    import lombok.extern.slf4j.Slf4j;
    import org.apache.hadoop.hbase.*;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;
    
    import java.util.Iterator;
    import java.util.List;
    
    /**
     * @author frinder
     * @date 2018/6/30
     * @Description: the actual business class that implements the various HBase operations
     */
    @Slf4j
    public class HBaseHelper {
    
        private Admin getAdmin(Connection conn) throws Exception {
            return conn.getAdmin();
        }
    
        /**
         * Create a table
         *
         * @param conn
         * @param tableName
         * @param cols
         * @throws Exception
         */
        public void createTable(Connection conn, String tableName, String... cols) throws Exception {
            log.info("*** begin create table: {}, cols: {}", tableName, Joiner.on(",").join(cols));
            try (Admin admin = getAdmin(conn)) {
                TableName table = TableName.valueOf(tableName);
                if (admin.tableExists(table)) {
                    log.warn("*** table {} already exists!", tableName);
                } else {
                    HTableDescriptor tableDescriptor = new HTableDescriptor(table);
                    for (String col : cols) {
                        tableDescriptor.addFamily(new HColumnDescriptor(col));
                    }
                    admin.createTable(tableDescriptor);
                    log.info("*** finish create table: {}, cols: {}", tableName, Joiner.on(",").join(cols));
                }
            }
        }
    
        /**
         * Delete a table
         *
         * @param conn
         * @param tableName
         * @throws Exception
         */
        public void deleteTable(Connection conn, String tableName) throws Exception {
            log.info("*** begin delete table: {}", tableName);
            try (Admin admin = getAdmin(conn)) {
                TableName table = TableName.valueOf(tableName);
                if (admin.tableExists(table)) {
                    admin.disableTable(table);
                    admin.deleteTable(table);
                    log.info("*** finish delete table: {}", tableName);
                }
            }
        }
    
        /**
         * List all tables
         *
         * @param conn
         * @return
         * @throws Exception
         */
        public List<TableName> getTables(Connection conn) throws Exception {
            log.info("*** begin get tables");
            try (Admin admin = getAdmin(conn)) {
                TableName[] tableNames = admin.listTableNames();
                log.info("*** finish get tables: {}", Joiner.on(",").join(tableNames));
                return Lists.newArrayList(tableNames);
            }
        }
    
        /**
         * @param conn
         * @param tableName table name
         * @param rowKey    row key
         * @param colFamily column family
         * @param col       column qualifier
         * @param val       value
         */
        public void insertRow(Connection conn, String tableName, String rowKey, String colFamily, String col, String val) throws Exception {
            log.info("*** begin insert table: {}, rowKey: {}, colFamily: {}, col: {}, val: {}",
                    tableName,
                    rowKey,
                    colFamily,
                    col,
                    val
            );
            try (Table table = conn.getTable(TableName.valueOf(tableName))) {
                Put put = new Put(Bytes.toBytes(rowKey));
                put.addColumn(Bytes.toBytes(colFamily), Bytes.toBytes(col), Bytes.toBytes(val));
                table.put(put);
                log.info("*** finish insert table: {}, rowKey: {}, colFamily: {}, col: {}, val: {}",
                        tableName,
                        rowKey,
                        colFamily,
                        col,
                        val
                );
            }
        }
    
        /**
         * Delete a row
         *
         * @param conn
         * @param tableName
         * @param rowKey
         * @throws Exception
         */
        public void deleteRow(Connection conn, String tableName, String rowKey) throws Exception {
            log.info("*** begin delete table: {}, rowKey: {}",
                    tableName,
                    rowKey
            );
            try (Table table = conn.getTable(TableName.valueOf(tableName))) {
                Delete delete = new Delete(Bytes.toBytes(rowKey));
                table.delete(delete);
                log.info("*** finish delete table: {}, rowKey: {}",
                        tableName,
                        rowKey
                );
            }
        }
    
        /**
         * Delete a column (family + qualifier)
         *
         * @param conn
         * @param tableName
         * @param rowKey
         * @param colFamily
         * @param col
         * @throws Exception
         */
        public void deleteCol(Connection conn, String tableName, String rowKey, String colFamily, String col) throws Exception {
            log.info("*** begin delete table: {}, rowKey: {}, colFamily: {}, col: {}",
                    tableName,
                    rowKey,
                    colFamily,
                    col
            );
            try (Table table = conn.getTable(TableName.valueOf(tableName))) {
                Delete delete = new Delete(Bytes.toBytes(rowKey));
                delete.addFamily(Bytes.toBytes(colFamily));
                delete.addColumn(Bytes.toBytes(colFamily), Bytes.toBytes(col));
                table.delete(delete);
                log.info("*** finish delete table: {}, rowKey: {}, colFamily: {}, col: {}",
                        tableName,
                        rowKey,
                        colFamily,
                        col
                );
            }
        }
    
        /**
         * Get a row
         *
         * @param conn
         * @param tableName
         * @param rowKey
         * @return
         * @throws Exception
         */
        public Result getRowData(Connection conn, String tableName, String rowKey) throws Exception {
            log.info("*** begin get table: {}, rowKey: {}",
                    tableName,
                    rowKey
            );
            try (Table table = conn.getTable(TableName.valueOf(tableName))) {
                Get get = new Get(Bytes.toBytes(rowKey));
                Result result = table.get(get);
                formatResult(result);
                return result;
            }
        }
    
        /**
         * Get a column (family + qualifier)
         *
         * @param conn
         * @param tableName
         * @param rowKey
         * @param colFamily
         * @param col
         * @return
         * @throws Exception
         */
        public Result getColData(Connection conn, String tableName, String rowKey, String colFamily, String col) throws Exception {
            log.info("*** begin get table: {}, rowKey: {}, colFamily: {}, col: {}",
                    tableName,
                    rowKey,
                    colFamily,
                    col
            );
            try (Table table = conn.getTable(TableName.valueOf(tableName))) {
                Get get = new Get(Bytes.toBytes(rowKey));
                get.addFamily(Bytes.toBytes(colFamily));
                get.addColumn(Bytes.toBytes(colFamily), Bytes.toBytes(col));
                Result result = table.get(get);
                formatResult(result);
                return result;
            }
        }
    
        /**
         * Format and log a result
         *
         * @param result
         */
        public void formatResult(Result result) {
            Cell[] cells = result.rawCells();
            for (Cell cell : cells) {
                log.info("*** RowName: {}, Timestamp: {}, ColFamily: {}, Col: {}, Val: {}",
                        new String(CellUtil.cloneRow(cell)),
                        cell.getTimestamp(),
                        new String(CellUtil.cloneFamily(cell)),
                        new String(CellUtil.cloneQualifier(cell)),
                        new String(CellUtil.cloneValue(cell))
                );
            }
        }
    
        /**
         * Scan rows in a range
         *
         * @param conn
         * @param tableName
         * @param startRow
         * @param stopRow
         * @return
         * @throws Exception
         */
        public List<Result> scanData(Connection conn, String tableName, String startRow, String stopRow) throws Exception {
            try (Table table = conn.getTable(TableName.valueOf(tableName))) {
                Scan scan = new Scan();
                scan.setStartRow(Bytes.toBytes(startRow));
                scan.setStopRow(Bytes.toBytes(stopRow));
                ResultScanner scanner = table.getScanner(scan);
                Iterator<Result> it = scanner.iterator();
                List<Result> resultList = Lists.newArrayList();
                while (it.hasNext()) {
                    Result result = it.next();
                    formatResult(result);
                    resultList.add(result);
                }
                return resultList;
            }
        }
    
    }
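
    HTableDescriptor and HColumnDescriptor work with the hbase-client 1.2.x dependency declared in the pom, but they are deprecated in the 2.x client. As a hedged sketch, if the dependency is later upgraded to hbase-client 2.x, createTable could be rewritten against the builder API (the method name createTable2x is illustrative):

    // 2.x client only; requires org.apache.hadoop.hbase.client.TableDescriptorBuilder
    // and org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
    public void createTable2x(Connection conn, String tableName, String... cols) throws Exception {
        try (Admin admin = conn.getAdmin()) {
            TableName table = TableName.valueOf(tableName);
            if (!admin.tableExists(table)) {
                TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(table);
                for (String col : cols) {
                    // one column family per name, default settings
                    builder.setColumnFamily(ColumnFamilyDescriptorBuilder.of(col));
                }
                admin.createTable(builder.build());
            }
        }
    }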
    
    6. HBase business-class proxy; mainly handles borrowing & returning connections from the pool
    package com.frinder.hadoop.original;
    
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    
    import java.util.List;
    
    /**
     * @author frinder
     * @date 2018/6/30
     * @Description: proxy class that manages connections (borrow & return)
     */
    public class HBaseHelperProxy {
    
        protected static ConnectionProvider provider = new MyConnectionProvider();
    
        private HBaseHelper helper;
    
        public HBaseHelperProxy() {
            helper = new HBaseHelper();
        }
    
        public Connection getConn() throws Exception {
            return provider.getConn();
        }
    
        /**
         * Create a table
         *
         * @param tableName
         * @param cols
         * @throws Exception
         */
        public void createTable(String tableName, String... cols) throws Exception {
            Connection conn = getConn();
            try {
                helper.createTable(conn, tableName, cols);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Delete a table
         *
         * @param tableName
         * @throws Exception
         */
        public void deleteTable(String tableName) throws Exception {
            Connection conn = getConn();
            try {
                helper.deleteTable(conn, tableName);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * List tables
         *
         * @return
         * @throws Exception
         */
        public List<TableName> getTables() throws Exception {
            Connection conn = getConn();
            try {
                return helper.getTables(conn);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Insert a record
         *
         * @param tableName
         * @param rowKey
         * @param colFamily
         * @param col
         * @param val
         * @throws Exception
         */
        public void insertRow(String tableName, String rowKey, String colFamily, String col, String val) throws Exception {
            Connection conn = getConn();
            try {
                helper.insertRow(conn, tableName, rowKey, colFamily, col, val);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Delete a row
         *
         * @param tableName
         * @param rowKey
         * @throws Exception
         */
        public void deleteRow(String tableName, String rowKey) throws Exception {
            Connection conn = getConn();
            try {
                helper.deleteRow(conn, tableName, rowKey);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Delete a column (family + qualifier)
         *
         * @param tableName
         * @param rowKey
         * @param colFamily
         * @param col
         * @throws Exception
         */
        public void deleteCol(String tableName, String rowKey, String colFamily, String col) throws Exception {
            Connection conn = getConn();
            try {
                helper.deleteCol(conn, tableName, rowKey, colFamily, col);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Get a row
         *
         * @param tableName
         * @param rowKey
         * @return
         * @throws Exception
         */
        public Result getRowData(String tableName, String rowKey) throws Exception {
            Connection conn = getConn();
            try {
                return helper.getRowData(conn, tableName, rowKey);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Get a column (family + qualifier)
         *
         * @param tableName
         * @param rowKey
         * @param colFamily
         * @param col
         * @return
         * @throws Exception
         */
        public Result getColData(String tableName, String rowKey, String colFamily, String col) throws Exception {
            Connection conn = getConn();
            try {
                return helper.getColData(conn, tableName, rowKey, colFamily, col);
            } finally {
                provider.returnConn(conn);
            }
        }
    
        /**
         * Scan rows in a range
         *
         * @param tableName
         * @param startRow
         * @param stopRow
         * @return
         * @throws Exception
         */
        public List<Result> scanData(String tableName, String startRow, String stopRow) throws Exception {
            Connection conn = getConn();
            try {
                return helper.scanData(conn, tableName, startRow, stopRow);
            } finally {
                provider.returnConn(conn);
            }
        }
    }
    
    7. Test class
    package com.frinder.hadoop.original;
    
    import lombok.extern.slf4j.Slf4j;
    
    /**
     * @author frinder
     * @date 2018/6/30
     * @Description: this example requires the hbase-site.xml file in the resources directory
     */
    @Slf4j
    public class HBaseTest {
    
        public static void main(String[] args) throws Exception {
            HBaseHelperProxy helper = new HBaseHelperProxy();
            helper.createTable("t_user", "col1", "col2");
            helper.getTables();
            helper.insertRow("t_user", "1", "col1", "name", "Jack");
            helper.insertRow("t_user", "1", "col2", "age", "28");
            helper.scanData("t_user", "1", "10");
            helper.getRowData("t_user", "1");
            helper.getColData("t_user", "1", "col1", "name");
    //        helper.deleteCol("t_user", "1", "col1", "name");
    //        helper.deleteRow("t_user", "1");
    //        helper.getRowData("t_user", "1");
        }
    
    }
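
    formatResult only logs the cells; callers usually also want the values programmatically. A minimal sketch of pulling individual values out of the Result returned above (the families and qualifiers match the test data inserted in main):

    // requires org.apache.hadoop.hbase.client.Result and org.apache.hadoop.hbase.util.Bytes
    Result result = helper.getRowData("t_user", "1");
    String name = Bytes.toString(result.getValue(Bytes.toBytes("col1"), Bytes.toBytes("name")));
    String age  = Bytes.toString(result.getValue(Bytes.toBytes("col2"), Bytes.toBytes("age")));
    System.out.println(name + " / " + age);   // expected: Jack / 28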
    

    Test output:

    ……
    16:43:49.480 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** begin create table: t_user, cols: col1,col2
    16:43:49.554 [main-SendThread(zk_master:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1644ed237e9001b, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,222,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3136323031ffffffc114ffffffbeffffffc6ffffff9cffffffec7b5250425546a15a97a6b5f6d617374657210ffffffc97e18ffffff98ffffff93ffffffc9fffffff6ffffffc42c100183,s{157,157,1530330772647,1530330772647,0,0,0,0,62,0,157} 
    16:43:49.571 [main-SendThread(zk_master:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1644ed237e9001b, packet:: clientPath:null serverPath:null finished:false header:: 4,8  replyHeader:: 4,222,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'recovering-regions,'draining,'namespace,'hbaseid,'table} 
    16:43:49.997 [hconnection-0x192cfe-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service ClientService, sasl=false
    16:43:50.074 [hconnection-0x192cfe-shared--pool1-t1] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to zk_master/192.168.1.199:16201
    16:43:50.186 [main] WARN com.frinder.hadoop.original.HBaseHelper - *** table t_user already exists!
    16:43:50.187 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.187 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Borrowing connection from the pool
    16:43:50.187 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** begin get tables
    16:43:50.190 [main-SendThread(zk_master:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1644ed237e9001b, packet:: clientPath:null serverPath:null finished:false header:: 5,3  replyHeader:: 5,222,0  request:: '/hbase,F  response:: s{2,2,1530329550609,1530329550609,0,28,0,0,0,16,157} 
    16:43:50.195 [main-SendThread(zk_master:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1644ed237e9001b, packet:: clientPath:null serverPath:null finished:false header:: 6,4  replyHeader:: 6,222,0  request:: '/hbase/master,F  response:: #ffffffff000146d61737465723a31363030306affffffe870ffffff8affffffea186f4550425546a15a97a6b5f6d617374657210ffffff807d18ffffffacffffff86ffffffc9fffffff6ffffffc42c10018ffffff8a7d,s{137,137,1530330764267,1530330764267,0,0,0,100291756497108992,57,0,137} 
    16:43:50.222 [main] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl - Use SIMPLE authentication for service MasterService, sasl=false
    16:43:50.222 [main] DEBUG org.apache.hadoop.hbase.ipc.RpcClientImpl - Connecting to zk_master/192.168.1.199:16000
    16:43:50.257 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** finish get tables: t_user
    16:43:50.268 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.268 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Borrowing connection from the pool
    16:43:50.268 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** begin insert table: t_user, rowKey: 1, colFamily: col1, col: name, val: Jack
    16:43:50.280 [main-SendThread(zk_master:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1644ed237e9001b, packet:: clientPath:null serverPath:null finished:false header:: 7,4  replyHeader:: 7,222,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3136323031ffffffc114ffffffbeffffffc6ffffff9cffffffec7b5250425546a15a97a6b5f6d617374657210ffffffc97e18ffffff98ffffff93ffffffc9fffffff6ffffffc42c100183,s{157,157,1530330772647,1530330772647,0,0,0,0,62,0,157} 
    16:43:50.282 [main-SendThread(zk_master:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1644ed237e9001b, packet:: clientPath:null serverPath:null finished:false header:: 8,8  replyHeader:: 8,222,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'recovering-regions,'draining,'namespace,'hbaseid,'table} 
    16:43:50.453 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** finish insert table: t_user, rowKey: 1, colFamily: col1, col: name, val: Jack
    16:43:50.453 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.453 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Borrowing connection from the pool
    16:43:50.453 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** begin insert table: t_user, rowKey: 1, colFamily: col2, col: age, val: 28
    16:43:50.461 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** finish insert table: t_user, rowKey: 1, colFamily: col2, col: age, val: 28
    16:43:50.461 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.461 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Borrowing connection from the pool
    16:43:50.593 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** RowName: 1, Timestamp: 1530348230074, ColFamily: col1, Col: name, Val: Jack
    16:43:50.593 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** RowName: 1, Timestamp: 1530348230113, ColFamily: col2, Col: age, Val: 28
    16:43:50.593 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.593 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Borrowing connection from the pool
    16:43:50.593 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** begin get table: t_user, rowKey: 1
    16:43:50.669 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** RowName: 1, Timestamp: 1530348230074, ColFamily: col1, Col: name, Val: Jack
    16:43:50.670 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** RowName: 1, Timestamp: 1530348230113, ColFamily: col2, Col: age, Val: 28
    16:43:50.670 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.670 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Borrowing connection from the pool
    16:43:50.670 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** begin get table: t_user, rowKey: 1, colFamily: col1, col: name
    16:43:50.689 [main] INFO com.frinder.hadoop.original.HBaseHelper - *** RowName: 1, Timestamp: 1530348230074, ColFamily: col1, Col: name, Val: Jack
    16:43:50.689 [main] INFO com.frinder.hadoop.original.MyConnectionProvider - *** Returning connection to the pool
    16:43:50.706 [Thread-2] DEBUG org.apache.hadoop.ipc.Client - stopping client from cache: org.apache.hadoop.ipc.Client@12e19ca
    16:43:50.706 [Thread-2] DEBUG org.apache.hadoop.ipc.Client - removing client from cache: org.apache.hadoop.ipc.Client@12e19ca
    16:43:50.706 [Thread-2] DEBUG org.apache.hadoop.ipc.Client - stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@12e19ca
    16:43:50.706 [Thread-2] DEBUG org.apache.hadoop.ipc.Client - Stopping client
    
    8. Project repository: https://gitee.com/frinder/hadoop-learning
