8. HBase Bulk Load (Example 3: writing tens of millions of rows by converting MySQL data directly into the format HBase needs, then importing it into HBase)

Apache HBase series articles

1. hbase-2.1.0 introduction; distributed cluster deployment, HA cluster deployment, verification, and hardware recommendations
2. hbase-2.1.0 shell basics
3. HBase Java API basics (creating and deleting tables; adding, deleting, and querying data, including multi-condition queries)
4. HBase usage (namespaces, data partitioning, rowkey design, native API access)
5. Apache Phoenix (5.0.0-5.1.2) introduction, deployment, and usage (basic usage, integrated usage, secondary index example); data partitioning example
6. HBase Bulk Load (Example 1: basic usage)
7. HBase Bulk Load (Example 2: writing tens of millions of rows, MySQL data written to HDFS as ORC files and then imported into HBase)
8. HBase Bulk Load (Example 3: writing tens of millions of rows, MySQL data written directly in the format HBase needs and then imported into HBase)




This article, the third in the Bulk Load series, shows how to use bulk load together with MapReduce. It assumes working HBase, Phoenix, and MySQL installations and a Hadoop HA environment. It has two parts: the implementation steps and the implementation itself.

I. Implementation steps

Example 3: 12,606,948 rows. It demonstrates the basic use of bulk load, including how to write the mapper, how to run the bulk load import, and how to query through the Phoenix client. The Hadoop environment is HA. The steps:
1. Read the 12,606,948 rows from MySQL and write them to HDFS in the format HBase requires
2. Load the data into HBase
3. Query the data through the Phoenix client

II. Implementation

1. Create the HBase table

create_namespace "USER"
create "USER:DBUSER", { NAME => "C1", COMPRESSION => "GZ"}, { NUMREGIONS => 6, SPLITALGO => "HexStringSplit"}


2. Write the code and run it

1) pom.xml

<?xml version="1.0"?>
<project
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
	xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.okcard</groupId>
		<artifactId>bigdata-component</artifactId>
		<version>0.0.1-SNAPSHOT</version>
	</parent>
	<groupId>com.okcard</groupId>
	<artifactId>hbase</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>hbase</name>
	<url>http://maven.apache.org</url>
	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	</properties>
	<dependencies>

		<dependency>
			<groupId>org.apache.hbase</groupId>
			<artifactId>hbase-client</artifactId>
			<version>2.1.0</version>
		</dependency>
		<!-- Hadoop common libraries -->
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-common</artifactId>
			<version>2.7.5</version>
		</dependency>

		<!-- XML handling -->
		<dependency>
			<groupId>com.github.cloudecho</groupId>
			<artifactId>xmlbean</artifactId>
			<version>1.5.5</version>
		</dependency>
		<!-- Office document libraries (POI) -->
		<dependency>
			<groupId>org.apache.poi</groupId>
			<artifactId>poi</artifactId>
			<version>4.0.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.poi</groupId>
			<artifactId>poi-ooxml</artifactId>
			<version>4.0.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.poi</groupId>
			<artifactId>poi-ooxml-schemas</artifactId>
			<version>4.0.1</version>
		</dependency>
		<!-- JSON handling -->
		<dependency>
			<groupId>com.alibaba</groupId>
			<artifactId>fastjson</artifactId>
			<version>1.2.62</version>
		</dependency>

		<!-- phoenix core -->
		<dependency>
			<groupId>org.apache.phoenix</groupId>
			<artifactId>phoenix-core</artifactId>
			<version>5.0.0-HBase-2.0</version>
			<!-- Excludes org.glassfish:javax.el to work around the packaging error "Failure to find org.glassfish:javax.el:pom:3.0.1-b08-SNAPSHOT in xxx" -->
			<exclusions>
                <exclusion>
                    <groupId>org.glassfish</groupId>
                    <artifactId>javax.el</artifactId>
                </exclusion>
            </exclusions>
		</dependency>
		<!-- Phoenix client -->
		<dependency>
			<groupId>org.apache.phoenix</groupId>
			<artifactId>phoenix-queryserver-client</artifactId>
			<version>5.0.0-HBase-2.0</version>
			<!-- Excludes org.glassfish:javax.el to work around the packaging error "Failure to find org.glassfish:javax.el:pom:3.0.1-b08-SNAPSHOT in xxx" -->
			<exclusions>
                <exclusion>
                    <groupId>org.glassfish</groupId>
                    <artifactId>javax.el</artifactId>
                </exclusion>
            </exclusions>
		</dependency>

		<!-- HBase MapReduce support -->
		<dependency>
			<groupId>org.apache.hbase</groupId>
			<artifactId>hbase-mapreduce</artifactId>
			<version>2.1.0</version>
		</dependency>
		<!-- Hadoop MapReduce job client -->
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-mapreduce-client-jobclient</artifactId>
			<version>2.7.5</version>
		</dependency>
		<!-- Hadoop MapReduce client core -->
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-mapreduce-client-core</artifactId>
			<version>2.7.5</version>
		</dependency>
		<!-- Hadoop authentication -->
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-auth</artifactId>
			<version>2.7.5</version>
		</dependency>
		<!-- Apache Commons IO file utilities -->
		<dependency>
			<groupId>commons-io</groupId>
			<artifactId>commons-io</artifactId>
			<version>2.6</version>
		</dependency>
		<dependency>
			<groupId>org.apache.htrace</groupId>
			<artifactId>htrace-core</artifactId>
			<version>3.2.0-incubating</version>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-core</artifactId>
			<version>2.5.6</version>
		</dependency>

		<dependency>
			<groupId>mysql</groupId>
			<artifactId>mysql-connector-java</artifactId>
			<version>5.1.46</version>
		</dependency>

		<!-- ORC file dependencies -->
		<dependency>
			<groupId>org.apache.orc</groupId>
			<artifactId>orc-shims</artifactId>
			<version>1.6.3</version>
		</dependency>
		<dependency>
			<groupId>org.apache.orc</groupId>
			<artifactId>orc-core</artifactId>
			<version>1.6.3</version>
		</dependency>
		<dependency>
			<groupId>org.apache.orc</groupId>
			<artifactId>orc-mapreduce</artifactId>
			<version>1.6.3</version>
		</dependency>
		<dependency>
			<groupId>jdk.tools</groupId>
			<artifactId>jdk.tools</artifactId>
			<version>1.8</version>
			<scope>system</scope>
			<systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
		</dependency>
	</dependencies>
</project>

2) JavaBean

package org.hbase.mr.largedata;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import lombok.Data;

/**
 * Implements Hadoop's Writable serialization interface; objects read from or
 * written to a database must also implement DBWritable.
 * 
 * @author chenw
 */
@Data
public class User implements Writable, DBWritable {
	private String id;
	private String userName;
	private String password;
	private String phone;
	private String email;
	private String createDay;

	@Override
	public void write(PreparedStatement ps) throws SQLException {
		ps.setString(1, id);
		ps.setString(2, userName);
		ps.setString(3, password);
		ps.setString(4, phone);
		ps.setString(5, email);
		ps.setString(6, createDay);
	}

	@Override
	public void readFields(ResultSet rs) throws SQLException {
		this.id = rs.getString(1);
		this.userName = rs.getString(2);
		this.password = rs.getString(3);
		this.phone = rs.getString(4);
		this.email = rs.getString(5);
		this.createDay = rs.getString(6);
	}

	@Override
	public void write(DataOutput out) throws IOException {
		out.writeUTF(id);
		out.writeUTF(userName);
		out.writeUTF(password);
		out.writeUTF(phone);
		out.writeUTF(email);
		out.writeUTF(createDay);
	}

	@Override
	public void readFields(DataInput in) throws IOException {
		id = in.readUTF();
		userName = in.readUTF();
		password = in.readUTF();
		phone = in.readUTF();
		email = in.readUTF();
		createDay = in.readUTF();
	}

	public String toString() {
		return id + "\t" + userName + "\t" + password + "\t" + phone + "\t" + email + "\t" + createDay;
	}

}
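
Note that the field order in readFields(ResultSet) and write(PreparedStatement) must match the column order of the SELECT statement later passed to DBInputFormat.setInput: rs.getString(1) reads the first selected column (id), and so on.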

3) Mapper and Driver

package org.hbase.mr.largedata.db;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;
import org.apache.hadoop.hbase.util.MapReduceExtendedCell;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.hbase.mr.largedata.User;
import org.springframework.util.StopWatch;

/**
 * Parses the MySQL data into the file format the HBase import needs, i.e. the
 * map output key-values must match what the HBase import file expects.
 * 
 * @author chenw
 *
 */
public class UserBulkLoadDBToHbase extends Configured implements Tool {
//	static String in = "hdfs://HadoopHAcluster/hbasetest/user/textuser/out";
	static String out = "hdfs://HadoopHAcluster/hbasetest/user/dbuser/data";
	TableName tableName = TableName.valueOf("USER:DBUSER");

	public static void main(String[] args) throws Exception {
		StopWatch clock = new StopWatch();
		clock.start(UserBulkLoadDBToHbase.class.getSimpleName());

		// 1. Load the configuration with HBaseConfiguration.create()
		System.setProperty("HADOOP_USER_NAME", "alanchan");

		Configuration configuration = HBaseConfiguration.create();
		configuration.set("fs.defaultFS", "hdfs://HadoopHAcluster");
		configuration.set("dfs.nameservices", "HadoopHAcluster");
		configuration.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
		configuration.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
		configuration.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
		configuration.set("dfs.client.failover.proxy.provider.HadoopHAcluster","org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

		int status = ToolRunner.run(configuration, new UserBulkLoadDBToHbase(), args);

		// Stop the timer and print before exiting; statements placed after System.exit would never run
		clock.stop();
		System.out.println(clock.prettyPrint());
		System.exit(status);
	}

	@Override
	public int run(String[] args) throws Exception {
		// 1. Create the HBase connection
		Connection connection = ConnectionFactory.createConnection(getConf());
		// 2. Get the Table for USER:DBUSER
		Table table = connection.getTable(tableName);

		DBConfiguration.configureDB(getConf(), "com.mysql.jdbc.Driver", "jdbc:mysql://192.168.10.44:3306/test", "root","root");

		Job job = Job.getInstance(getConf(), this.getClass().getName());
		job.setJarByClass(this.getClass());

		job.setMapperClass(UserBulkLoadDBToHbaseMapper.class);
		job.setMapOutputKeyClass(ImmutableBytesWritable.class);
		job.setMapOutputValueClass(MapReduceExtendedCell.class);

		job.setInputFormatClass(DBInputFormat.class);
		
		DBInputFormat.setInput(job, User.class, "select id, user_Name,pass_word,phone,email,create_day from dx_user",
				// 12,606,948 rows
				"select count(*) from dx_user ");

//		DBInputFormat.setInput(job, User.class,
//				"select id, user_Name,pass_word,phone,email,create_day from dx_user where user_name = 'alan2452'",
//				"select count(*) from dx_user where user_name = 'alan2452'");

		Path outputDir = new Path(out);
		outputDir.getFileSystem(this.getConf()).delete(outputDir, true);
		FileOutputFormat.setOutputPath(job, outputDir);

		RegionLocator regionLocator = connection.getRegionLocator(tableName);
		HFileOutputFormat2.configureIncrementalLoad(job, table, regionLocator);

		return job.waitForCompletion(true) ? 0 : 1;
	}

	static class UserBulkLoadDBToHbaseMapper
			extends Mapper<LongWritable, User, ImmutableBytesWritable, MapReduceExtendedCell> {

		// The LongWritable key is the sequence number of each record read by the query, not an original row number in the database
		// The User value carries the record's fields, populated through User's DBWritable implementation
		protected void map(LongWritable key, User value, Context context)
				throws IOException, InterruptedException {
			Counter counter = context.getCounter("mysql_records_counters", "User Records");
			counter.increment(1);
			
			// Record format: 1145643347	alan2452	837132	13977776789	alan.chan.chn@163.com	2021-12-28 00:00:00.0
			// 1. Build the rowkey from the entity fields; column names are kept uppercase in HBase to lay the groundwork for Phoenix indexing
			byte[] rowkeyByteArray = getRowkey(value);
			
			byte[] columnFamily = Bytes.toBytes("C1");
			byte[] colId = Bytes.toBytes("ID");
			byte[] colUserName = Bytes.toBytes("USERNAME");
			byte[] colPassword = Bytes.toBytes("PASSWORD");
			byte[] colPhone = Bytes.toBytes("PHONE");
			byte[] colEmail = Bytes.toBytes("EMAIL");
			byte[] colCreateDay = Bytes.toBytes("CREATEDAY");

			// 2. Build the output key: new ImmutableBytesWritable(rowkey)
			ImmutableBytesWritable immutableBytesWritable = new ImmutableBytesWritable(rowkeyByteArray);
			
			// 3. Use KeyValue to build a cell for every column that needs to be written to the table
			KeyValue kvId = new KeyValue(rowkeyByteArray, columnFamily, colId, Bytes.toBytes(value.getId()));
			KeyValue kvUserName = new KeyValue(rowkeyByteArray, columnFamily, colUserName,Bytes.toBytes(value.getUserName()));
			KeyValue kvPassword = new KeyValue(rowkeyByteArray, columnFamily, colPassword,Bytes.toBytes(value.getPassword()));
			KeyValue kvPhone = new KeyValue(rowkeyByteArray, columnFamily, colPhone, Bytes.toBytes(value.getPhone()));
			KeyValue kvEmail = new KeyValue(rowkeyByteArray, columnFamily, colEmail, Bytes.toBytes(value.getEmail()));
			KeyValue kvCreateDay = new KeyValue(rowkeyByteArray, columnFamily, colCreateDay,Bytes.toBytes(value.getCreateDay()));

			// 4. Emit the output with context.write
			// The output value is new MapReduceExtendedCell(keyValue)
			context.write(immutableBytesWritable, new MapReduceExtendedCell(kvId));
			context.write(immutableBytesWritable, new MapReduceExtendedCell(kvUserName));
			context.write(immutableBytesWritable, new MapReduceExtendedCell(kvPassword));
			context.write(immutableBytesWritable, new MapReduceExtendedCell(kvPhone));
			context.write(immutableBytesWritable, new MapReduceExtendedCell(kvEmail));
			context.write(immutableBytesWritable, new MapReduceExtendedCell(kvCreateDay));
		}
		protected static byte[] getRowkey(User user) {
			// rowkey = MD5 prefix + "_" + id + "_" + userName + "_" + phone + "_" + createDay
			StringBuilder rowkeyBuilder = new StringBuilder();
			rowkeyBuilder.append(user.getId()).append("_").append(user.getUserName()).append("_").append(user.getPhone()).append("_").append(user.getCreateDay());

			// Generate the MD5 of the concatenated string with MD5Hash.getMD5AsHex and keep its first 8 hex characters
			String md5AsHex = MD5Hash.getMD5AsHex(rowkeyBuilder.toString().getBytes());
			String md5Hex8bit = md5AsHex.substring(0, 8);

			return Bytes.toBytes(md5Hex8bit + "_" + rowkeyBuilder.toString());
		}
	}

}
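
Note that no reducer is configured explicitly: HFileOutputFormat2.configureIncrementalLoad reads the table's region boundaries through the RegionLocator and wires up a TotalOrderPartitioner plus a cell-sorting reducer to match them, so the generated HFiles line up with the target regions and can be bulk-loaded without splitting.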

4) Run

Run the driver; the MapReduce job reads the 12,606,948 rows from MySQL and writes the HFiles to hdfs://HadoopHAcluster/hbasetest/user/dbuser/data.

3. Load the HDFS data into HBase

hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles /hbasetest/user/dbuser/data USER:DBUSER
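With hbase-2.1.0 this is the standard entry point; in HBase 2.2 and later, LoadIncrementalHFiles is deprecated in favor of org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.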
Then verify in hbase shell (the FORMATTER => 'toString' option renders cell values as readable strings instead of hex):
hbase(main):021:0> get "USER:DBUSER","26cfac1e_728232503_alan2452_13977776789_2021-12-28 00:00:00.0",{FORMATTER=>'toString'}
COLUMN                                                                    CELL                                                                                                                                                                                                                   
 C1:CREATEDAY                                                             timestamp=1665536417451, value=2021-12-28 00:00:00.0                                                                                                                                                                   
 C1:EMAIL                                                                 timestamp=1665536417451, value=alan.chan.chn@163.com                                                                                                                                                                   
 C1:ID                                                                    timestamp=1665536417451, value=728232503                                                                                                                                                                               
 C1:PASSWORD                                                              timestamp=1665536417451, value=337518                                                                                                                                                                                  
 C1:PHONE                                                                 timestamp=1665536417451, value=13977776789                                                                                                                                                                             
 C1:USERNAME                                                              timestamp=1665536417451, value=alan2452                                                                                                                                                                                
1 row(s)
Took 0.0233 seconds

After the load completes, the table data is distributed evenly across the regions, confirming that the rowkey design is sound.

4. Create the table in Phoenix and verify

# 1. Create the namespace in Phoenix
CREATE SCHEMA IF NOT EXISTS USER;

# 2. Create the table in Phoenix (this simply maps onto the existing HBase table)
CREATE TABLE IF NOT EXISTS USER.DBUSER(
rk varchar primary key,
C1.id varchar,
C1.username varchar,
C1.password varchar,
C1.phone varchar,
C1.email varchar,
C1.createday varchar
);
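
Note: as the verification below shows, a table created with these defaults cannot read the columns that were bulk-loaded into HBase; the corrected definition, with column encoding disabled, follows further down.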

0: jdbc:phoenix:server2:2118> CREATE SCHEMA IF NOT EXISTS USER;
No rows affected (0.011 seconds)
0: jdbc:phoenix:server2:2118> CREATE TABLE IF NOT EXISTS USER.DBUSER(
. . . . . . . . . . . . . . > rk varchar primary key,
. . . . . . . . . . . . . . > C1.id varchar,
. . . . . . . . . . . . . . > C1.username varchar,
. . . . . . . . . . . . . . > C1.password varchar,
. . . . . . . . . . . . . . > C1.phone varchar,
. . . . . . . . . . . . . . > C1.email varchar,
. . . . . . . . . . . . . . > C1.createday varchar
. . . . . . . . . . . . . . > );
No rows affected (24.487 seconds)
0: jdbc:phoenix:server2:2118> !desc user.dbuser
+------------+--------------+-------------+--------------+------------+------------+--------------+----------------+-----------------+-----------------+-----------+----------+-------------+----------------+-------------------+--------------------+-------------------+--------------+------+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  | COLUMN_NAME  | DATA_TYPE  | TYPE_NAME  | COLUMN_SIZE  | BUFFER_LENGTH  | DECIMAL_DIGITS  | NUM_PREC_RADIX  | NULLABLE  | REMARKS  | COLUMN_DEF  | SQL_DATA_TYPE  | SQL_DATETIME_SUB  | CHAR_OCTET_LENGTH  | ORDINAL_POSITION  | IS_NULLABLE  | SCOP |
+------------+--------------+-------------+--------------+------------+------------+--------------+----------------+-----------------+-----------------+-----------+----------+-------------+----------------+-------------------+--------------------+-------------------+--------------+------+
|            | USER         | DBUSER      | RK           | 12         | VARCHAR    | null         | null           | null            | null            | 0         |          |             | null           | null              | null               | 1                 | false        |      |
|            | USER         | DBUSER      | ID           | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null           | null              | null               | 2                 | true         |      |
|            | USER         | DBUSER      | USERNAME     | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null           | null              | null               | 3                 | true         |      |
|            | USER         | DBUSER      | PASSWORD     | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null           | null              | null               | 4                 | true         |      |
|            | USER         | DBUSER      | PHONE        | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null           | null              | null               | 5                 | true         |      |
|            | USER         | DBUSER      | EMAIL        | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null           | null              | null               | 6                 | true         |      |
|            | USER         | DBUSER      | CREATEDAY    | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null           | null              | null               | 7                 | true         |      |
+------------+--------------+-------------+--------------+------------+------------+--------------+----------------+-----------------+-----------------+-----------+----------+-------------+----------------+-------------------+--------------------+-------------------+--------------+------+
0: jdbc:phoenix:server2:2118>

0: jdbc:phoenix:server2:2118> select count(*) from user.dbuser;
+-----------+
| COUNT(1)  |
+-----------+
| 12606948  |
+-----------+
1 row selected (12.978 seconds)

A table created this way returns data only for the rowkey; every other column comes back empty. The newly created Phoenix table cannot see the data already present in HBase, although incremental data, i.e. rows newly written to the HBase table or modified through the Phoenix table, is visible. The cause is that Phoenix changed its column mapping encoding rules as of version 4.10, so the encoded column qualifiers Phoenix expects no longer match the plain-text qualifiers written by the bulk load.

0: jdbc:phoenix:server2:2118> drop table user.dbuser;
No rows affected (2.683 seconds)
0: jdbc:phoenix:server2:2118> !table
+------------+--------------+------------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---------------+-----------------+------------+-------------+----------------+----------------------+---+
| TABLE_CAT  | TABLE_SCHEM  |    TABLE_NAME    |  TABLE_TYPE   | REMARKS  | TYPE_NAME  | SELF_REFERENCING_COL_NAME  | REF_GENERATION  | INDEX_STATE  | IMMUTABLE_ROWS  | SALT_BUCKETS  | MULTI_TENANT  | VIEW_STATEMENT  | VIEW_TYPE  | INDEX_TYPE  | TRANSACTIONAL  | IS_NAMESPACE_MAPPED  | G |
+------------+--------------+------------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---------------+-----------------+------------+-------------+----------------+----------------------+---+
|            | SYSTEM       | CATALOG          | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | FUNCTION         | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | LOG              | SYSTEM TABLE  |          |            |                            |                 |              | true            | 32            | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | SEQUENCE         | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | STATS            | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            |              | WB               | VIEW          |          |            |                            |                 |              | false           | null          | false         |                 | MAPPED     |             | false          | true                 | n |
|            | BANK_BU      | TRANSFER_RECORD  | VIEW          |          |            |                            |                 |              | false           | null          | false         |                 | MAPPED     |             | false          | true                 | n |
|            | USER         | TEXTUSER         | VIEW          |          |            |                            |                 |              | false           | null          | false         |                 | MAPPED     |             | false          | true                 | n |
+------------+--------------+------------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---------------+-----------------+------------+-------------+----------------+----------------------+---+

After dropping the table, re-run the program to regenerate the data, then:

1. Load the HDFS data into HBase:
hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles /hbasetest/user/dbuser/data USER:DBUSER
2. Create the table again, this time adding the column_encoded_bytes=0 option (see https://phoenix.apache.org/columnencoding.html):
CREATE TABLE IF NOT EXISTS USER.DBUSER(
    rk varchar primary key,
    C1.id varchar,
    C1.username varchar,
    C1.password varchar,
    C1.phone varchar,
    C1.email varchar,
    C1.createday varchar
)
column_encoded_bytes=0;
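
The per-table COLUMN_ENCODED_BYTES option is the explicit fix; the column-encoding page linked above also documents a cluster-wide default (the phoenix.default.column.encoded.bytes.attrib property in hbase-site.xml) for cases where every mapped table should behave this way.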

0: jdbc:phoenix:server2:2118> CREATE TABLE IF NOT EXISTS USER.DBUSER(
. . . . . . . . . . . . . . > rk varchar primary key,
. . . . . . . . . . . . . . > C1.id varchar,
. . . . . . . . . . . . . . > C1.username varchar,
. . . . . . . . . . . . . . > C1.password varchar,
. . . . . . . . . . . . . . > C1.phone varchar,
. . . . . . . . . . . . . . > C1.email varchar,
. . . . . . . . . . . . . . > C1.createday varchar
. . . . . . . . . . . . . . > )
. . . . . . . . . . . . . . > column_encoded_bytes=0;
0: jdbc:phoenix:server2:2118> !table
+------------+--------------+------------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---------------+-----------------+------------+-------------+----------------+----------------------+---+
| TABLE_CAT  | TABLE_SCHEM  |    TABLE_NAME    |  TABLE_TYPE   | REMARKS  | TYPE_NAME  | SELF_REFERENCING_COL_NAME  | REF_GENERATION  | INDEX_STATE  | IMMUTABLE_ROWS  | SALT_BUCKETS  | MULTI_TENANT  | VIEW_STATEMENT  | VIEW_TYPE  | INDEX_TYPE  | TRANSACTIONAL  | IS_NAMESPACE_MAPPED  | G |
+------------+--------------+------------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---------------+-----------------+------------+-------------+----------------+----------------------+---+
|            | SYSTEM       | CATALOG          | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | FUNCTION         | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | LOG              | SYSTEM TABLE  |          |            |                            |                 |              | true            | 32            | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | SEQUENCE         | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | SYSTEM       | STATS            | SYSTEM TABLE  |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            | USER         | DBUSER           | TABLE         |          |            |                            |                 |              | false           | null          | false         |                 |            |             | false          | true                 | n |
|            |              | WB               | VIEW          |          |            |                            |                 |              | false           | null          | false         |                 | MAPPED     |             | false          | true                 | n |
|            | BANK_BU      | TRANSFER_RECORD  | VIEW          |          |            |                            |                 |              | false           | null          | false         |                 | MAPPED     |             | false          | true                 | n |
|            | USER         | TEXTUSER         | VIEW          |          |            |                            |                 |              | false           | null          | false         |                 | MAPPED     |             | false          | true                 | n |
+------------+--------------+------------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---------------+-----------------+------------+-------------+----------------+----------------------+---+
0: jdbc:phoenix:server2:2118> select * from user.dbuser where rk = '26cfac1e_728232503_alan2452_13977776789_2021-12-28 00:00:00.0';
+----------------------------------------------------------------+------------+-----------+-----------+--------------+------------------------+------------------------+
|                               RK                               |     ID     | USERNAME  | PASSWORD  |    PHONE     |         EMAIL          |       CREATEDAY        |
+----------------------------------------------------------------+------------+-----------+-----------+--------------+------------------------+------------------------+
| 26cfac1e_728232503_alan2452_13977776789_2021-12-28 00:00:00.0  | 728232503  | alan2452  | 337518    | 13977776789  | alan.chan.chn@163.com  | 2021-12-28 00:00:00.0  |
+----------------------------------------------------------------+------------+-----------+-----------+--------------+------------------------+------------------------+
1 row selected (0.025 seconds)
0: jdbc:phoenix:server2:2118> select count(*) from user.dbuser;
+-----------+
| COUNT(1)  |
+-----------+
| 12606948  |
+-----------+
1 row selected (13.648 seconds)
0: jdbc:phoenix:server2:2118> select * from user.dbuser where id = '728232503';
+----------------------------------------------------------------+------------+-----------+-----------+--------------+------------------------+------------------------+
|                               RK                               |     ID     | USERNAME  | PASSWORD  |    PHONE     |         EMAIL          |       CREATEDAY        |
+----------------------------------------------------------------+------------+-----------+-----------+--------------+------------------------+------------------------+
| 26cfac1e_728232503_alan2452_13977776789_2021-12-28 00:00:00.0  | 728232503  | alan2452  | 337518    | 13977776789  | alan.chan.chn@163.com  | 2021-12-28 00:00:00.0  |
+----------------------------------------------------------------+------------+-----------+-----------+--------------+------------------------+------------------------+
1 row selected (19.569 seconds)

This completes the import from the database into HBase, and the data can now be queried through the Phoenix client. For joins and similar operations, secondary indexes need to be created first.
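
For example, here is a minimal sketch of a Phoenix covered index on the ID column; the index name and INCLUDE list are illustrative, not part of the original setup. Without such an index, the query on id shown above falls back to a full table scan, which is why it took 19.569 seconds:

CREATE INDEX IF NOT EXISTS IDX_DBUSER_ID ON USER.DBUSER (C1.ID) INCLUDE (C1.USERNAME, C1.PHONE);

Once built, queries that filter on ID and read only the indexed and included columns can be served from the index table directly instead of scanning USER.DBUSER.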
