
Blog Archives

The "R Sword for NoSQL" Series: Hive

This series introduces how to connect to and use NoSQL databases from R. The NoSQL products covered include Redis, MongoDB, HBase, Hive, Cassandra, and Neo4j. I hope this gives R enthusiasts more development options and helps them build more exciting applications.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo: @Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/nosql-r-hive/

rhive

Part 4, "R Sword: Hive", is divided into 5 chapters:

  1. Introduction to Hive
  2. Installing Hive
  3. Installing RHive
  4. The RHive function library
  5. Basic RHive operations

1. Introduction to Hive

Hive is a data warehouse infrastructure built on top of Hadoop. It provides a set of tools for extract-transform-load (ETL), and a mechanism for storing, querying, and analyzing large-scale data kept in Hadoop. Hive defines a simple SQL-like query language called HQL, which lets users familiar with SQL query the data. The language also lets developers familiar with MapReduce plug in custom mappers and reducers for complex analysis that the built-in mappers and reducers cannot handle.

Hive does not require a dedicated data format. It works well on top of Thrift, controls the field delimiters, and also allows users to specify their own data format.

The content above is adapted from Baidu Baike (http://baike.baidu.com/view/699292.htm).

Differences between Hive and relational databases:

  • Storage: Hive keeps its data on Hadoop's HDFS, while relational databases use the local file system.
  • Computation model: Hive runs on Hadoop's MapReduce, while relational databases rely on index-based, in-memory computation.
  • Use cases: Hive is an OLAP data warehouse for querying massive data sets, with poor real-time performance; relational databases are OLTP systems serving real-time transactional queries.
  • Scalability: Hive, being based on Hadoop, easily scales storage and compute capacity by adding nodes, whereas scaling a relational database horizontally is hard and usually means beefing up a single machine.

2. Installing Hive

Hive is a data warehouse product built on Hadoop, so we first need a working Hadoop environment.

rhive

For installing Hadoop, see: Hadoop Environment Setup and Creating a Hadoop Base Virtual Machine.

For installing Hive, see: Hive Installation and Usage Guide.

Download location for Hadoop-1.0.3:
http://archive.apache.org/dist/hadoop/core/hadoop-1.0.3/

Download location for Hive-0.9.0:
http://archive.apache.org/dist/hive/hive-0.9.0/

Once Hive is installed,
start the hiveserver service:

~ nohup hive --service hiveserver  &
Starting Hive Thrift Server

Open the hive shell:

~ hive shell
Logging initialized using configuration in file:/home/conan/hadoop/hive-0.9.0/conf/hive-log4j.properties
Hive history file=/tmp/conan/hive_job_log_conan_201306261459_153868095.txt

#list the Hive tables
hive> show tables;
hive_algo_t_account
o_account
r_t_account
Time taken: 2.12 seconds

#view the data in the o_account table
hive> select * from o_account;
1       abc@163.com     2013-04-22 12:21:39
2       dedac@163.com   2013-04-22 12:21:39
3       qq8fed@163.com  2013-04-22 12:21:39
4       qw1@163.com     2013-04-22 12:21:39
5       af3d@163.com    2013-04-22 12:21:39
6       ab34@163.com    2013-04-22 12:21:39
7       q8d1@gmail.com  2013-04-23 09:21:24
8       conan@gmail.com 2013-04-23 09:21:24
9       adeg@sohu.com   2013-04-23 09:21:24
10      ade121@sohu.com 2013-04-23 09:21:24
11      addde@sohu.com  2013-04-23 09:21:24
Time taken: 0.469 seconds

3. Installing RHive

Make sure the Java environment is configured beforehand:

~ java -version
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)

Install R: on Ubuntu 12.04, update the package sources and then install R 2.15.3.

~ sudo sh -c "echo deb http://mirror.bjtu.edu.cn/cran/bin/linux/ubuntu precise/ >>/etc/apt/sources.list"
~ sudo apt-get update
~ sudo apt-get install r-base-core=2.15.3-1precise0precise1

Install the R dependency rJava:

#configure rJava
~ sudo R CMD javareconf

#start R
~ sudo R
install.packages("rJava")

Install RHive


install.packages("RHive")
library(RHive)
Loading required package: rJava
Loading required package: Rserve
This is RHive 0.0-7. For overview type ‘?RHive’.
HIVE_HOME=/home/conan/hadoop/hive-0.9.0
call rhive.init() because HIVE_HOME is set.

RHive has since been removed from CRAN, so it has to be downloaded and installed by hand from: https://cran.r-project.org/src/contrib/Archive/RHive/. Download the RHive_0.0-7.tar.gz package and install it from the command line.


# install RHive
~ R CMD INSTALL RHive_0.0-7.tar.gz
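
Note that RHive expects HIVE_HOME (and usually HADOOP_HOME) to be set before the package is loaded, as the startup message above shows. Below is a minimal sketch of a fresh R session; the HIVE_HOME path is the one used elsewhere in this post, while the HADOOP_HOME value is an assumption, so adjust both to your own environment.


#set the environment before loading RHive (paths are assumptions, adjust them)
Sys.setenv(HIVE_HOME="/home/conan/hadoop/hive-0.9.0")
Sys.setenv(HADOOP_HOME="/home/conan/hadoop/hadoop-1.0.3")

library(RHive)                   #loads rJava and Rserve as dependencies
rhive.init()                     #picks up HIVE_HOME / HADOOP_HOME
rhive.connect("192.168.1.210")   #the hiveserver started in chapter 2
rhive.list.tables()
rhive.close()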

4. The RHive function library

rhive.aggregate        rhive.connect          rhive.hdfs.exists      rhive.mapapply
rhive.assign           rhive.desc.table       rhive.hdfs.get         rhive.mrapply
rhive.basic.by         rhive.drop.table       rhive.hdfs.info        rhive.napply
rhive.basic.cut        rhive.env              rhive.hdfs.ls          rhive.query
rhive.basic.cut2       rhive.exist.table      rhive.hdfs.mkdirs      rhive.reduceapply
rhive.basic.merge      rhive.export           rhive.hdfs.put         rhive.rm
rhive.basic.mode       rhive.exportAll        rhive.hdfs.rename      rhive.sapply
rhive.basic.range      rhive.hdfs.cat         rhive.hdfs.rm          rhive.save
rhive.basic.scale      rhive.hdfs.chgrp       rhive.hdfs.tail        rhive.script.export
rhive.basic.t.test     rhive.hdfs.chmod       rhive.init             rhive.script.unexport
rhive.basic.xtabs      rhive.hdfs.chown       rhive.list.tables      rhive.size.table
rhive.big.query        rhive.hdfs.close       rhive.load             rhive.write.table
rhive.block.sample     rhive.hdfs.connect     rhive.load.table
rhive.close            rhive.hdfs.du          rhive.load.table2

A comparison of basic operations in Hive and RHive:


#connect to Hive
Hive:  hive shell
RHive: rhive.connect("192.168.1.210")

#list all Hive tables
Hive:  show tables;
RHive: rhive.list.tables()

#describe a table
Hive:  desc o_account;
RHive: rhive.desc.table('o_account'), rhive.desc.table('o_account',TRUE)

#run an HQL query
Hive:  select * from o_account;
RHive: rhive.query('select * from o_account')

#list an HDFS directory
Hive:  dfs -ls /;
RHive: rhive.hdfs.ls()

#view the contents of an HDFS file
Hive:  dfs -cat /user/hive/warehouse/o_account/part-m-00000;
RHive: rhive.hdfs.cat('/user/hive/warehouse/o_account/part-m-00000')

#disconnect
Hive:  quit;
RHive: rhive.close()
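
The calls above can be wrapped in a small helper so that the connection is always closed, even if a query fails. This is only a sketch built from the RHive functions listed above, using the same host IP as the rest of this post:


#run one HQL statement and make sure the connection is closed afterwards
hive_query <- function(sql, host="192.168.1.210") {
  rhive.connect(host)
  on.exit(rhive.close(), add=TRUE)   #always clean up, even on error
  rhive.query(sql)                   #the result comes back as a data.frame
}

accounts <- hive_query("select * from o_account")
head(accounts)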

5. Basic RHive operations

#initialize
rhive.init()

#connect to Hive
rhive.connect("192.168.1.210")

#list all tables
rhive.list.tables()
             tab_name
1 hive_algo_t_account
2           o_account
3         r_t_account

#describe the table
rhive.desc.table('o_account');
     col_name data_type comment
1          id       int
2       email    string
3 create_date    string

#run an HQL query
rhive.query("select * from o_account");
   id           email         create_date
1   1     abc@163.com 2013-04-22 12:21:39
2   2   dedac@163.com 2013-04-22 12:21:39
3   3  qq8fed@163.com 2013-04-22 12:21:39
4   4     qw1@163.com 2013-04-22 12:21:39
5   5    af3d@163.com 2013-04-22 12:21:39
6   6    ab34@163.com 2013-04-22 12:21:39
7   7  q8d1@gmail.com 2013-04-23 09:21:24
8   8 conan@gmail.com 2013-04-23 09:21:24
9   9   adeg@sohu.com 2013-04-23 09:21:24
10 10 ade121@sohu.com 2013-04-23 09:21:24
11 11  addde@sohu.com 2013-04-23 09:21:24

#close the connection
rhive.close()
[1] TRUE
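
Because rhive.query() returns an ordinary data.frame, the result can be explored with plain R functions once it is in memory (before rhive.close(), of course). A small sketch reusing the o_account query above; the column names are the ones shown in that output:


acc <- rhive.query("select * from o_account")

nrow(acc)                          #number of rows returned
table(sub(".*@", "", acc$email))   #accounts per mail domain
range(as.Date(acc$create_date))    #earliest and latest creation dates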

Create a temporary table


rhive.block.sample('o_account', subset="id<5")
[1] "rhive_sblk_1372238856"

rhive.query("select * from rhive_sblk_1372238856");
  id          email         create_date
1  1    abc@163.com 2013-04-22 12:21:39
2  2  dedac@163.com 2013-04-22 12:21:39
3  3 qq8fed@163.com 2013-04-22 12:21:39
4  4    qw1@163.com 2013-04-22 12:21:39

#list the file in HDFS
rhive.hdfs.ls('/user/hive/warehouse/rhive_sblk_1372238856/')
  permission owner      group length      modify-time
1  rw-r--r-- conan supergroup    141 2013-06-26 17:28
                                                 file
1 /user/hive/warehouse/rhive_sblk_1372238856/000000_0

rhive.hdfs.cat('/user/hive/warehouse/rhive_sblk_1372238856/000000_0')
1abc@163.com2013-04-22 12:21:39
2dedac@163.com2013-04-22 12:21:39
3qq8fed@163.com2013-04-22 12:21:39
4qw1@163.com2013-04-22 12:21:39

Bin a column's values by range


rhive.basic.cut('o_account','id',breaks='0:100:3')
[1] "rhive_result_20130626173626"
attr(,"result:size")
[1] 443

rhive.query("select * from rhive_result_20130626173626");
             email         create_date     id
1      abc@163.com 2013-04-22 12:21:39  (0,3]
2    dedac@163.com 2013-04-22 12:21:39  (0,3]
3   qq8fed@163.com 2013-04-22 12:21:39  (0,3]
4      qw1@163.com 2013-04-22 12:21:39  (3,6]
5     af3d@163.com 2013-04-22 12:21:39  (3,6]
6     ab34@163.com 2013-04-22 12:21:39  (3,6]
7   q8d1@gmail.com 2013-04-23 09:21:24  (6,9]
8  conan@gmail.com 2013-04-23 09:21:24  (6,9]
9    adeg@sohu.com 2013-04-23 09:21:24  (6,9]
10 ade121@sohu.com 2013-04-23 09:21:24 (9,12]
11  addde@sohu.com 2013-04-23 09:21:24 (9,12]

Working with HDFS through Hive


#list HDFS directories
rhive.hdfs.ls()
  permission owner      group length      modify-time   file
1  rwxr-xr-x conan supergroup      0 2013-04-24 01:52 /hbase
2  rwxr-xr-x conan supergroup      0 2013-06-23 10:59  /home
3  rwxr-xr-x conan supergroup      0 2013-06-26 11:18 /rhive
4  rwxr-xr-x conan supergroup      0 2013-06-23 13:27   /tmp
5  rwxr-xr-x conan supergroup      0 2013-04-24 19:28  /user

#view the contents of an HDFS file
rhive.hdfs.cat('/user/hive/warehouse/o_account/part-m-00000')
1abc@163.com2013-04-22 12:21:39
2dedac@163.com2013-04-22 12:21:39
3qq8fed@163.com2013-04-22 12:21:39

Please credit the source when reposting:
http://blog.fens.me/nosql-r-hive/


Testing a 10G Data Import into Hive

The "Let Hadoop Run in the Cloud" series described how to combine virtualization with Hadoop, run a Hadoop cluster on VPS virtual machines, and provide storage and compute services to users through the cloud.

Hardware keeps getting cheaper: a no-name server with two 24-core CPUs, 48G of RAM, and a 2T disk now costs less than 20,000 RMB. Using such a box to host a few simple web applications would be an extravagant waste, and even running a single-node Hadoop on it wastes a great deal of compute. For a machine this powerful, using its resources effectively becomes an important part of cost control.

With virtualization, we can split one server into 12 VPS instances, each with 2 CPU cores, 4G of RAM, and 40G of disk, with support for reallocating resources. What a great technology! We now have a 12-node Hadoop cluster: let Hadoop run in the cloud, and let the world speed up.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo: @Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
 http://blog.fens.me/hadoop-hive-10g/

hadoop-hive-10g

Preface

With the Hadoop and Hive environments in place, it is time to load data and test. In my data, 1G corresponds to roughly 5 million rows; a query over 5 million rows takes about 3.29 seconds in MySQL and about 30 seconds for the same query in Hive. Let's grow the data to 10G, then 100G, and see how Hive behaves.
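
A quick back-of-the-envelope check of those figures (plain arithmetic in R; the second line uses the 6,127,080-row, roughly 1.02G CSV export that appears later in this post):


1e9 / 5e6               #about 200 bytes per row implied by "1G is roughly 5 million rows"
1068707117 / 6127080    #about 174 bytes per row for the exported CSV below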

Contents

  1. Export the data from MySQL
  2. Import it into Hive
  3. Optimize the import with Hive buckets
  4. Run queries

1. Export the data from MySQL

Below is my table setup: a new table is produced every day, named by date. Today is July 19, 2013, and the corresponding table is cb_hft, with about 6.46 million records.


mysql> show tables;
+-----------------+
| Tables_in_CB    |
+-----------------+
| NSpremium       |
| cb_hft          |
| cb_hft_20130710 |
| cb_hft_20130712 |
| cb_hft_20130715 |
| cb_hft_20130716 |
+-----------------+
6 rows in set (0.00 sec)

mysql> select count(1) from cb_hft;
+----------+
| count(1) |
+----------+
|  6461338 |
+----------+
1 row in set (3.29 sec)

Quickly copying the table:
Since this table belongs to an offline system with no online application, I rename cb_hft to cb_hft_20130719 and then copy the table structure.


mysql> RENAME TABLE cb_hft TO cb_hft_20130719;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE cb_hft like cb_hft_20130719;
Query OK, 0 rows affected (0.02 sec)

mysql> show tables;
+-----------------+
| Tables_in_CB    |
+-----------------+
| NSpremium       |
| cb_hft          |
| cb_hft_20130710 |
| cb_hft_20130712 |
| cb_hft_20130715 |
| cb_hft_20130716 |
| cb_hft_20130719 |
+-----------------+
7 rows in set (0.00 sec)

Export the table to CSV
Using the cb_hft_20130712 table as an example:


mysql> SELECT
  SecurityID,TradeTime,PreClosePx,OpenPx,HighPx,LowPx,LastPx, 
  BidSize1,BidPx1,BidSize2,BidPx2,BidSize3,BidPx3,BidSize4,BidPx4,BidSize5,BidPx5,
  OfferSize1,OfferPx1,OfferSize2,OfferPx2,OfferSize3,OfferPx3,OfferSize4,OfferPx4,OfferSize5,OfferPx5,
  NumTrades,TotalVolumeTrade,TotalValueTrade,PE,PE1,PriceChange1,PriceChange2,Positions
FROM cb_hft_20130712
INTO OUTFILE '/tmp/export_cb_hft_20130712.csv'  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'; 

Query OK, 6127080 rows affected (2 min 55.04 sec)

Check the exported data file


~ ls -l /tmp
-rw-rw-rw- 1 mysql mysql 1068707117 Jul 19 15:59 export_cb_hft_20130712.csv

2. Import it into Hive

Log in to the c1.wtmart.com machine and download the data file:


~ ssh cos@c1.wtmart.com
~ cd /home/cos/hadoop/sqldb
~ scp -P 10003 cos@d2.wtmart.com:/tmp/export_cb_hft_20130712.csv .
export_cb_hft_20130712.csv                                                                 100% 1019MB  39.2MB/s   00:26

Create the table in Hive


~ bin/hive shell

#drop the table if it already exists
hive> DROP TABLE IF EXISTS t_hft_tmp;
Time taken: 4.898 seconds

#create the t_hft_tmp table
hive> CREATE TABLE t_hft_tmp(
  SecurityID STRING,TradeTime STRING,
  PreClosePx DOUBLE,OpenPx DOUBLE,HighPx DOUBLE,LowPx DOUBLE,LastPx DOUBLE,
  BidSize1 DOUBLE,BidPx1 DOUBLE,BidSize2 DOUBLE,BidPx2 DOUBLE,BidSize3 DOUBLE,BidPx3 DOUBLE,BidSize4 DOUBLE,BidPx4 DOUBLE,BidSize5 DOUBLE,BidPx5 DOUBLE,
  OfferSize1 DOUBLE,OfferPx1 DOUBLE,OfferSize2 DOUBLE,OfferPx2 DOUBLE,OfferSize3 DOUBLE,OfferPx3 DOUBLE,OfferSize4 DOUBLE,OfferPx4 DOUBLE,OfferSize5 DOUBLE,OfferPx5 DOUBLE,
  NumTrades INT,TotalVolumeTrade DOUBLE,TotalValueTrade DOUBLE,PE DOUBLE,PE1 DOUBLE,PriceChange1 DOUBLE,PriceChange2 DOUBLE,Positions DOUBLE
) PARTITIONED BY (tradeDate INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

Time taken: 0.189 seconds

#load the data
hive> LOAD DATA LOCAL INPATH '/home/cos/hadoop/sqldb/export_cb_hft_20130712.csv' OVERWRITE INTO TABLE t_hft_tmp PARTITION (tradedate=20130712);
Copying data from file:/home/cos/hadoop/sqldb/export_cb_hft_20130712.csv
Copying file: file:/home/cos/hadoop/sqldb/export_cb_hft_20130712.csv
Loading data to table default.t_hft_tmp partition (tradedate=20130712)
Time taken: 16.535 seconds

No transformation is applied to the data as it is loaded. The LOAD operation simply copies the file to the location belonging to the Hive table, so the table consists of a single file that has not been split into pieces.


hive> dfs -ls /user/hive/warehouse/t_hft_tmp/tradedate=20130712;
Found 1 items
-rw-r--r--   1 cos supergroup 1068707117 2013-07-19 16:07 /user/hive/warehouse/t_hft_tmp/tradedate=20130712/export_cb_hft_20130712.csv

3. Optimize the import with Hive buckets

In this second import step, we want to split the single large file from the previous step into a number of smaller files, roughly one 64M block each, so we set up 16 buckets.
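
A quick check of that bucket count in R, using the size of the exported CSV reported above (1,068,707,117 bytes) and a 64M block:


#number of roughly 64M pieces needed for the 1,068,707,117-byte file
ceiling(1068707117 / (64 * 1024^2))
[1] 16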

Create a new table, t_hft_day, defined with CLUSTERED BY, SORTED BY, and 16 BUCKETS:


hive> CREATE TABLE t_hft_day(
  SecurityID STRING,TradeTime STRING,
  PreClosePx DOUBLE,OpenPx DOUBLE,HighPx DOUBLE,LowPx DOUBLE,LastPx DOUBLE,
  BidSize1 DOUBLE,BidPx1 DOUBLE,BidSize2 DOUBLE,BidPx2 DOUBLE,BidSize3 DOUBLE,BidPx3 DOUBLE,BidSize4 DOUBLE,BidPx4 DOUBLE,BidSize5 DOUBLE,BidPx5 DOUBLE,
  OfferSize1 DOUBLE,OfferPx1 DOUBLE,OfferSize2 DOUBLE,OfferPx2 DOUBLE,OfferSize3 DOUBLE,OfferPx3 DOUBLE,OfferSize4 DOUBLE,OfferPx4 DOUBLE,OfferSize5 DOUBLE,OfferPx5 DOUBLE,
  NumTrades INT,TotalVolumeTrade DOUBLE,TotalValueTrade DOUBLE,PE DOUBLE,PE1 DOUBLE,PriceChange1 DOUBLE,PriceChange2 DOUBLE,Positions DOUBLE
) PARTITIONED BY (tradeDate INT)
CLUSTERED BY(SecurityID) SORTED BY(TradeTime) INTO 16 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

Load the data from the temporary table t_hft_tmp into t_hft_day:


#force bucketing to be applied on insert
hive> set hive.enforce.bucketing = true;

#load the data
hive> FROM t_hft_tmp
INSERT OVERWRITE TABLE t_hft_day
PARTITION (tradedate=20130712)
SELECT SecurityID , TradeTime ,
  PreClosePx ,OpenPx ,HighPx ,LowPx ,LastPx ,
  BidSize1 ,BidPx1 ,BidSize2 ,BidPx2 ,BidSize3 ,BidPx3 ,BidSize4 ,BidPx4 ,BidSize5 ,BidPx5 ,
  OfferSize1 ,OfferPx1 ,OfferSize2 ,OfferPx2 ,OfferSize3 ,OfferPx3 ,OfferSize4 ,OfferPx4 ,OfferSize5 ,OfferPx5 ,
  NumTrades,TotalVolumeTrade ,TotalValueTrade ,PE ,PE1 ,PriceChange1 ,PriceChange2 ,Positions 
WHERE tradedate=20130712;

MapReduce Total cumulative CPU time: 8 minutes 5 seconds 810 msec
Ended Job = job_201307191356_0016
Loading data to table default.t_hft_day partition (tradedate=20130712)
Partition default.t_hft_day{tradedate=20130712} stats: [num_files: 16, num_rows: 0, total_size: 1291728298, raw_data_size: 0]
Table default.t_hft_day stats: [num_partitions: 11, num_files: 176, num_rows: 0, total_size: 10425980914, raw_data_size: 0]
6127080 Rows loaded to t_hft_day
MapReduce Jobs Launched:
Job 0: Map: 4  Reduce: 16   Cumulative CPU: 485.81 sec   HDFS Read: 1068771008 HDFS Write: 1291728298 SUCCESS
Total MapReduce CPU Time Spent: 8 minutes 5 seconds 810 msec
OK
Time taken: 172.617 seconds

The import consumed a cumulative CPU time of 8 minutes 5 seconds, i.e. 8*60+5 = 485 seconds. Because 4 map tasks and 16 reduce tasks ran in parallel, the actual elapsed time was 172 seconds.

Let's check whether the new table's data has been split into multiple files:


hive> dfs -ls /user/hive/warehouse/t_hft_day/tradedate=20130712;
Found 16 items
-rw-r--r--   1 cos supergroup   95292536 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000000_0
-rw-r--r--   1 cos supergroup   97136495 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000001_0
-rw-r--r--   1 cos supergroup   90695623 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000002_0
-rw-r--r--   1 cos supergroup   84132171 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000003_0
-rw-r--r--   1 cos supergroup   81552397 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000004_0
-rw-r--r--   1 cos supergroup   80580028 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000005_0
-rw-r--r--   1 cos supergroup   73195335 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000006_0
-rw-r--r--   1 cos supergroup   68648786 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000007_0
-rw-r--r--   1 cos supergroup   72210159 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000008_0
-rw-r--r--   1 cos supergroup   66851502 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000009_0
-rw-r--r--   1 cos supergroup   69292538 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000010_0
-rw-r--r--   1 cos supergroup   75282272 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000011_0
-rw-r--r--   1 cos supergroup   79572724 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000012_0
-rw-r--r--   1 cos supergroup   78151866 2013-07-19 16:19 /user/hive/warehouse/t_hft_day/tradedate=20130712/000013_0
-rw-r--r--   1 cos supergroup   86850954 2013-07-19 16:18 /user/hive/warehouse/t_hft_day/tradedate=20130712/000014_0
-rw-r--r--   1 cos supergroup   92282912 2013-07-19 16:19 /user/hive/warehouse/t_hft_day/tradedate=20130712/000015_0

There are 16 files in total.

4. Run queries

Against the current 1G of data, a simple Hive query takes 34.974 seconds:


hive> select count(1) from t_hft_day where tradedate=20130712;

MapReduce Total cumulative CPU time: 34 seconds 670 msec
Ended Job = job_201307191356_0017
MapReduce Jobs Launched:
Job 0: Map: 7  Reduce: 1   Cumulative CPU: 34.67 sec   HDFS Read: 1291793812 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 34 seconds 670 msec
6127080
Time taken: 34.974 seconds

MySQL runs the same query in 3.29 seconds, as I measured at the beginning.
That is a 10x gap, but with only 1G of data Hadoop cannot show its strengths.

Next, following the same procedure, we import a dozen or so days of data into Hive and then compare again.

List the datasets already imported into Hive:


hive> SHOW PARTITIONS t_hft_day;
tradedate=20130627
tradedate=20130628
tradedate=20130701
tradedate=20130702
tradedate=20130703
tradedate=20130704
tradedate=20130705
tradedate=20130708
tradedate=20130709
tradedate=20130710
tradedate=20130712
tradedate=20130715
tradedate=20130716
tradedate=20130719
Time taken: 0.099 seconds

In MySQL, query across 5 of the tables (about 5G of data).


#single table: PreClosePx is not an indexed column; first run of the query
mysql> select SecurityID,20130719 as tradedate,count(1) as count from cb_hft_20130716 where PreClosePx>8.17 group by SecurityID limit 10;
+------------+-----------+-------+
| SecurityID | tradedate | count |
+------------+-----------+-------+
| 000001     |  20130719 |  5200 |
| 000002     |  20130719 |  5193 |
| 000003     |  20130719 |  1978 |
| 000004     |  20130719 |  3201 |
| 000005     |  20130719 |  1975 |
| 000006     |  20130719 |  1910 |
| 000007     |  20130719 |  3519 |
| 000008     |  20130719 |  4229 |
| 000009     |  20130719 |  5147 |
| 000010     |  20130719 |  2176 |
+------------+-----------+-------+
10 rows in set (24.60 sec)

#query across multiple tables
select t.SecurityID,t.tradedate,t.count 
from (
select SecurityID,20130710 as tradedate,count(1) as count from cb_hft_20130710 where PreClosePx>8.17 group by SecurityID
union
select SecurityID,20130712 as tradedate,count(1) as count from cb_hft_20130712 group by SecurityID
union
select SecurityID,20130715 as tradedate,count(1) as count from cb_hft_20130715 where PreClosePx>8.17 group by SecurityID
union
select SecurityID,20130716 as tradedate,count(1) as count from cb_hft_20130716 where PreClosePx>8.17 group by SecurityID
union
select SecurityID,20130719 as tradedate,count(1) as count from cb_hft_20130719 where PreClosePx>8.17 group by SecurityID ) as t 
limit 10

#no result after more than 3 minutes
....

In Hive, run the same query over the same 5 days of data (about 5G).


select SecurityID,tradedate,count(1) from t_hft_day where tradedate in (20130710,20130712,20130715,20130716,20130719) and PreClosePx>8.17 group by SecurityID,tradedate limit 10;

MapReduce Total cumulative CPU time: 3 minutes 56 seconds 540 msec
Ended Job = job_201307191356_0023
MapReduce Jobs Launched:
Job 0: Map: 25  Reduce: 7   Cumulative CPU: 236.54 sec   HDFS Read: 6577084486 HDFS Write: 1470 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 56 seconds 540 msec
OK
000001  20130710        5813
000004  20130715        3546
000005  20130712        1820
000005  20130719        2364
000006  20130716        1910
000008  20130710        2426
000011  20130715        2113
000012  20130712        3554
000012  20130719        3756
000013  20130716        1646
Time taken: 66.32 seconds

#the same query over all 14 partitions listed above
MapReduce Total cumulative CPU time: 8 minutes 40 seconds 380 msec
Ended Job = job_201307191356_0022
MapReduce Jobs Launched:
Job 0: Map: 53  Reduce: 15   Cumulative CPU: 520.38 sec   HDFS Read: 14413501282 HDFS Write: 3146 SUCCESS
Total MapReduce CPU Time Spent: 8 minutes 40 seconds 380 msec
OK
000001  20130716        5200
000002  20130715        5535
000003  20130705        1634
000004  20130704        2173
000005  20130703        996
000005  20130712        1820
000006  20130702        1176
000007  20130701        2973
000007  20130710        4084
000010  20130716        2176
Time taken: 119.161 seconds

We can see that Hadoop is not very sensitive to data growth at the gigabyte scale: with three times as much data (15G), the query took only about twice as long as with 5G. For MySQL, on the other hand, query times become almost unbearable once the data grows to 5G.

Data under 1G can be handled on a single machine, and MySQL completes such queries very well. Hadoop only shows its advantage when the data is large: by around 10G, MySQL single-table queries already struggle, and at 100G MySQL can no longer cope without all sorts of extra optimization work.

The test procedure has now been described in full; the next task is to automate the process.
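
As a starting point for that automation, here is a rough sketch in R (not part of the original workflow): for each trade date it builds the bucketed-insert statement used above and submits it through the hive command line with "hive -e". The table names and column list are the ones from this post; treat it as an outline rather than a finished tool.


#trade dates already loaded into the staging table t_hft_tmp
dates <- c(20130710L, 20130712L, 20130715L, 20130716L, 20130719L)

cols <- paste(c("SecurityID","TradeTime","PreClosePx","OpenPx","HighPx","LowPx","LastPx",
                "BidSize1","BidPx1","BidSize2","BidPx2","BidSize3","BidPx3","BidSize4","BidPx4","BidSize5","BidPx5",
                "OfferSize1","OfferPx1","OfferSize2","OfferPx2","OfferSize3","OfferPx3","OfferSize4","OfferPx4","OfferSize5","OfferPx5",
                "NumTrades","TotalVolumeTrade","TotalValueTrade","PE","PE1","PriceChange1","PriceChange2","Positions"),
              collapse=",")

for (d in dates) {
  sql <- sprintf(paste(
    "set hive.enforce.bucketing = true;",
    "FROM t_hft_tmp INSERT OVERWRITE TABLE t_hft_day PARTITION (tradedate=%d)",
    "SELECT %s WHERE tradedate=%d;"), d, cols, d)
  system(paste("hive -e", shQuote(sql)))   #submit the statement through the hive CLI
}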

Please credit the source when reposting:
 http://blog.fens.me/hadoop-hive-10g/


The "R Sword for NoSQL" Series: Cassandra

This series introduces how to connect to and use NoSQL databases from R. The NoSQL products covered include Redis, MongoDB, HBase, Hive, Cassandra, and Neo4j. I hope this gives R enthusiasts more development options and helps them build more exciting applications.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo: @Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/nosql-r-cassandra/

rcassandra

Part 3, "R Sword: Cassandra", is divided into 7 chapters:

  1. Introduction to Cassandra
  2. Installing Cassandra
  3. Installing RCassandra
  4. The RCassandra function library
  5. Basic RCassandra operations
  6. An RCassandra use case
  7. The decline of Cassandra

Each chapter is split into a narrative part and a code part, so that the explanation and the code stay connected.

1. Introduction to Cassandra

Apache Cassandra is an open-source distributed NoSQL database system. It was originally developed by Facebook to store simply formatted data such as inboxes, combining Google BigTable's data model with Amazon Dynamo's fully distributed architecture. Facebook open-sourced Cassandra in 2008, and thanks to its good scalability it was adopted by well-known Web 2.0 sites such as Digg and Twitter, becoming a popular distributed structured-storage solution.

The name Cassandra comes from Greek mythology: she was a tragic prophetess of Troy, which is why the project's logo is a glowing eye.

Cassandra writes each piece of data to multiple nodes to keep it safe. On the trade-off between consistency, availability, and partition tolerance (CAP), Cassandra is quite flexible: at read time the client can require all replicas to agree (high consistency), accept the first replica that answers (high availability), or take a majority of replicas as the answer (a middle ground). This lets Cassandra cope with node and network failures as well as multi-datacenter deployments.

The Cassandra introduction is adapted from Wikipedia (http://zh.wikipedia.org/wiki/Cassandra).
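
That tunable consistency shows up directly in RCassandra (introduced later in this post) through RC.consistency(). A small sketch of how the three read levels described above map onto the API: only level="one" is used later in this post, while "quorum" and "all" are assumed to be accepted as documented by the package, and the connection parameters are the ones from chapter 5.


conn <- RC.connect(host="192.168.1.200", port=9160)

RC.consistency(conn, level="one")      #high availability: the first replica to answer wins
RC.consistency(conn, level="quorum")   #middle ground: a majority of replicas must agree
RC.consistency(conn, level="all")      #high consistency: every replica must agree

RC.close(conn)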

2. Installing Cassandra

Narrative:
First, prepare the environment. I chose the 64-bit server edition of Linux Ubuntu 12.04; pick whichever Linux you are comfortable with.

For the JDK, use Sun's official JDK 1.6.0_29; do not use the OpenJDK that ships with Linux.

Download and install Cassandra manually.

For the Cassandra configuration, a few directories need to be set up in advance:

  • data_file_directories: the data file directory
  • commitlog_directory: the commit log directory
  • saved_caches_directory: the saved caches directory

The following covers a single-node installation; for a cluster installation, see: A Two-Node Cassandra Cluster Experiment.

Code:
Single-node installation. System environment: Linux Ubuntu 12.04 LTS 64-bit server.


~ uname -a
Linux u1 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

~ cat /etc/issue
Ubuntu 12.04.2 LTS \n \l

JDK environment: Sun's official JDK 1.6.0_29


~ java -version

java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)

Download and unpack Cassandra


~ wget http://mirrors.tuna.tsinghua.edu.cn/apache/cassandra/1.2.5/apache-cassandra-1.2.5-bin.tar.gz

~ tar xvf apache-cassandra-1.2.5-bin.tar.gz
~ mv apache-cassandra-1.2.5-bin cassandra125
~ mv cassandra125 /home/conan/toolkit/

~ pwd
/home/conan/toolkit

~ ls -l
drwxrwxr-x  9 conan conan 4096 Jun  1 06:10 cassandra125/
drwxr-xr-x 10 conan conan  4096 Apr 23 14:36 jdk16

Initialize Cassandra


~ cd /home/conan/toolkit/cassandra125

#configure the Cassandra data directories
~ vi conf/cassandra.yaml

data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
saved_caches_directory: /var/lib/cassandra/saved_caches

About these directories:
data_file_directories: the data file directory
commitlog_directory: the commit log directory
saved_caches_directory: the saved caches directory

Make sure these directories have been created in the operating system,
and that the /var/log/cassandra/ directory is writable by Cassandra.


~ sudo mkdir -p /var/lib/cassandra/data
~ sudo mkdir -p /var/lib/cassandra/saved_caches
~ sudo mkdir -p /var/lib/cassandra/commitlog
~ sudo mkdir -p /var/log/cassandra/

~ sudo chown -R conan:conan /var/lib/cassandra
~ sudo chown -R conan:conan /var/log/cassandra/

~ ll /var/lib/cassandra
drwxr-xr-x  2 conan conan 4096 Jun  1 06:21 commitlog/
drwxr-xr-x  2 conan conan 4096 Jun  1 06:21 data/
drwxr-xr-x  2 conan conan 4096 Jun  1 06:21 saved_caches/

Set the environment variables


~ sudo vi /etc/environment
CASSANDRA_HOME=/home/conan/toolkit/cassandra125

#make the variable take effect
~ . /etc/environment

#check the environment variables
~ export |grep /home/conan/toolkit/cassandra125
declare -x CASSANDRA_HOME="/home/conan/toolkit/cassandra125"
declare -x OLDPWD="/home/conan/toolkit/cassandra125"
declare -x PWD="/home/conan/toolkit/cassandra125/bin"

Start Cassandra


~ bin/cassandra -f
#note: the -f flag keeps Cassandra attached to the console; without -f it starts in the background

~ jps
19971 CassandraDaemon
20440 Jps

Open the client


~ bin/cassandra-cli

Connected to: "Test Cluster" on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.2.5

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@unknown]

The single-node Cassandra installation is now complete.

For a Cassandra cluster installation, see: A Two-Node Cassandra Cluster Experiment.

3. Installing RCassandra

Narrative:
Please use R version 2.15.3; here is how to install it.

First, add a package source: deb http://mirror.bjtu.edu.cn/cran/bin/linux/ubuntu precise/.
Then update the sources and install the 2.15.3-1precise0precise1 version.

Start R and install the RCassandra package.

Code:
The R version in the test environment is 2.15.3.

Install R


~  sudo vi /etc/apt/sources.list
deb http://mirrors.163.com/ubuntu/ precise main universe restricted multiverse
deb-src http://mirrors.163.com/ubuntu/ precise main universe restricted multiverse
deb http://mirrors.163.com/ubuntu/ precise-security universe main multiverse restricted
deb-src http://mirrors.163.com/ubuntu/ precise-security universe main multiverse restricted
deb http://mirrors.163.com/ubuntu/ precise-updates universe main multiverse restricted
deb http://mirrors.163.com/ubuntu/ precise-proposed universe main multiverse restricted
deb-src http://mirrors.163.com/ubuntu/ precise-proposed universe main multiverse restricted
deb http://mirrors.163.com/ubuntu/ precise-backports universe main multiverse restricted
deb-src http://mirrors.163.com/ubuntu/ precise-backports universe main multiverse restricted
deb-src http://mirrors.163.com/ubuntu/ precise-updates universe main multiverse restricted
deb http://mirror.bjtu.edu.cn/cran/bin/linux/ubuntu precise/

Update the apt-get sources


~ sudo apt-get update

#pin the install to version 2.15.3
~ sudo apt-get install r-base-core=2.15.3-1precise0precise1

#start R
~ R
R version 2.15.3 (2013-03-01) -- "Security Blanket"
Copyright (C) 2013 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Install RCassandra


> install.packages('RCassandra')
> library(RCassandra)

4. The RCassandra function library

Narrative
RCassandra exposes only 17 functions. By comparison, rredis has about 100 functions and rmongodb about 153, so RCassandra is very lightweight.

These 17 functions do not cover all Cassandra operations; even some basic operations have no function support and must be done from the command line. I do not know why. I hope RCassandra keeps developing and fills in the missing functionality.

Common operations that are not supported:
creating and dropping keyspaces
creating and dropping column families
deleting a row
deleting a single column of a row

The 17 functions are listed below, together with the corresponding Cassandra commands.

Code
There are 17 functions in total.


RC.close               RC.insert
RC.cluster.name        RC.login
RC.connect             RC.mget.range
RC.consistency         RC.mutate
RC.describe.keyspace   RC.read.table
RC.describe.keyspaces  RC.use
RC.get                 RC.version
RC.get.range           RC.write.table
RC.get.range.slices

A comparison of basic operations in Cassandra and RCassandra:


#connect to the cluster
Cassandra: connect 192.168.1.200/9160;
RCassandra: conn<-RC.connect(host="192.168.1.200",port=9160)

#show the current cluster name
Cassandra: show cluster name;
RCassandra: RC.cluster.name(conn)

#list all keyspaces in the current cluster
Cassandra: show keyspaces;
RCassandra: RC.describe.keyspaces(conn)

#describe the DEMO keyspace
Cassandra: show schema DEMO;
RCassandra: RC.describe.keyspace(conn,'DEMO')

#switch to the DEMO keyspace
Cassandra: use DEMO;
RCassandra: RC.use(conn,'DEMO')

#set the consistency level
Cassandra: consistencylevel as ONE;
RCassandra: RC.consistency(conn,level="one")

#insert data
Cassandra:set Users[1][name] = scott;
RCassandra:RC.insert(conn,'Users','1', 'name', 'scott')

#insert a data frame
Cassandra:NA
RCassandra:RC.write.table(conn, "Users", df)

#read all data in a column family
Cassandra: list Users;
RCassandra: RC.read.table(conn,"Users")

#read data
Cassandra: get Users[1]['name'];
RCassandra:RC.get(conn,'Users','1', c('name'))

#close the connection
Cassandra: exit; quit;
RCassandra: RC.close(conn)

 

5. Basic RCassandra operations

Narrative
This section walks through RCassandra's basic functions, using the iris dataset to show how to work with a Cassandra database through RCassandra.

Code


#install RCassandra
install.packages('RCassandra')

#load the RCassandra library
library(RCassandra)

#connect to the server
conn<-RC.connect(host="192.168.1.200")

#name of the current cluster (the 2-node cluster)
RC.cluster.name(conn)
[1] "case1"

#version of the current protocol
RC.version(conn)
[1] "19.36.0"

#list the configuration of all keyspaces
RC.describe.keyspaces(conn)

#list the configuration of the keyspace named DEMO
RC.describe.keyspace(conn, "DEMO")

#RCassandra cannot create column families, so create one beforehand with the Cassandra CLI
#[default@DEMO] create column family iris;

#insert the iris data
head(iris)
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
4          4.6         3.1          1.5         0.2  setosa
5          5.0         3.6          1.4         0.2  setosa
6          5.4         3.9          1.7         0.4  setosa

#iris is a data.frame
RC.write.table(conn, "iris", iris)

attr(,"class")
[1] "CassandraConnection"

#read row 1: the Sepal.Length and Species columns
RC.get(conn, "iris", "1", c("Sepal.Length", "Species"))
           key  value           ts
1 Sepal.Length    5.1 1.372881e+15
2      Species setosa 1.372881e+15
#note: ts is the timestamp

#read row 1 (all columns)
RC.get.range(conn, "iris", "1")
           key  value           ts
1 Petal.Length    1.4 1.372881e+15
2  Petal.Width    0.2 1.372881e+15
3 Sepal.Length    5.1 1.372881e+15
4  Sepal.Width    3.5 1.372881e+15
5      Species setosa 1.372881e+15

#read a range of rows (range slices)
r <- RC.get.range.slices(conn, "iris")
class(r)
[1] "list"

r[[1]]
           key  value           ts
1 Petal.Length    1.7 1.372881e+15
2  Petal.Width    0.4 1.372881e+15
3 Sepal.Length    5.4 1.372881e+15
4  Sepal.Width    3.9 1.372881e+15
5      Species setosa 1.372881e+15

rk <- RC.get.range.slices(conn, "iris", limit=0)
y <- RC.read.table(conn, "iris")
y <- y[order(as.integer(row.names(y))),]

head(y)
  Petal.Length Petal.Width Sepal.Length Sepal.Width Species
1          1.4         0.2          5.1         3.5  setosa
2          1.4         0.2          4.9         3.0  setosa
3          1.3         0.2          4.7         3.2  setosa
4          1.5         0.2          4.6         3.1  setosa
5          1.4         0.2          5.0         3.6  setosa
6         

Operations that are not supported

  • creating and dropping keyspaces
  • creating and dropping column families
  • deleting a row
  • deleting a single column of a row

6. An RCassandra use case

Narrative
A small business scenario to deepen our understanding of RCassandra. The requirements below are deliberately simple.

Business requirements:
1. Create a Users column family with two columns, name and password.
2. With data already in the table, dynamically add a new column, age.

Code
In the Cassandra command line, create the Users column family:


[default@DEMO] create column family Users
...     with key_validation_class = 'UTF8Type'
...     and comparator = 'UTF8Type'
...     and default_validation_class = 'UTF8Type';

89a2fb75-f7d0-399e-b017-30a974b19f4a

Insert data through RCassandra, with the name and password columns:


> df <- data.frame(name=c('a1','a2'), password=c('a1','a2'))
> print(df)
  name password
1   a1       a1
2   a2       a2

#insert the data
> RC.write.table(conn, "Users", df)
attr(,"class")
[1] "CassandraConnection"

#view the data
> RC.read.table(conn,"Users")
     name password
2    a2       a2
1    a1       a1

#insert a new row with KEY=1234, adding an age column
> RC.insert(conn,'Users','1234', 'name', 'scott')
> RC.insert(conn,'Users','1234', 'password', 'tiger')
> RC.insert(conn,'Users','1234', 'age', '20')

#view the data
> RC.read.table(conn,"Users")
     age  name password
1234  20 scott    tiger
2     NA    a2       a2
1     NA    a1       a1

#update: in the row with KEY=1, set name=a11 and age=12
> RC.insert(conn,'Users','1', 'name', 'a11')
> RC.insert(conn,'Users','1', 'age', '12')

#view the data
> RC.read.table(conn,"Users")
     age  name password
1234  20 scott    tiger
2     NA    a2       a2
1     12   a11       a1

7. The decline of Cassandra

More and more applications built on Cassandra are starting to migrate to HBase.

Some technical reasons that may lie behind Cassandra's decline:

1. Read performance is too slow

The masterless design means reads involve anti-entropy computation, which is expensive and can even seriously affect server operation.

2. Data synchronization is too slow (eventual-consistency lag can be very large)

Because there is no central coordinator, nodes have to pass information among themselves, notifying one another of their state. With several replicas, and nodes occasionally down, reaching consistency can take a long time and is not very efficient.

3. Inserts and updates take the place of queries, which is inflexible: every query has to be defined in advance.

Unlike most databases, which are optimized for reads, Cassandra's write performance is in theory higher than its read performance, which makes it a good fit for streaming data storage, especially workloads where writes outweigh reads. Compared with HBase its random-access performance is much better, but it is not good at range scans, so it can serve as a real-time query cache in front of HBase: HBase does the batch processing of big data, and Cassandra provides the random-query interface.

4. No direct Hadoop integration, so MapReduce is not available.

Today "big data" is practically synonymous with Hadoop, and a framework for massive data that does not support Hadoop and MapReduce risks being replaced, unless Cassandra can find another positioning or its own solution for massive data. DataStax is rebuilding the HDFS file system on top of Cassandra; whether that will succeed remains to be seen.

Let's look forward to how Cassandra develops!

Please credit the source when reposting:
http://blog.fens.me/nosql-r-cassandra/


Hive Installation and Usage Guide

The "Let Hadoop Run in the Cloud" series described how to combine virtualization with Hadoop, run a Hadoop cluster on VPS virtual machines, and provide storage and compute services to users through the cloud.

Hardware keeps getting cheaper: a no-name server with two 24-core CPUs, 48G of RAM, and a 2T disk now costs less than 20,000 RMB. Using such a box to host a few simple web applications would be an extravagant waste, and even running a single-node Hadoop on it wastes a great deal of compute. For a machine this powerful, using its resources effectively becomes an important part of cost control.

With virtualization, we can split one server into 12 VPS instances, each with 2 CPU cores, 4G of RAM, and 40G of disk, with support for reallocating resources. What a great technology! We now have a 12-node Hadoop cluster: let Hadoop run in the cloud, and let the world speed up.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo: @Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
 http://blog.fens.me/hadoop-hive-intro/

hadoop-hive

Preface

Hive is a programming interface to Hadoop. It lets data analysts get productive quickly, uses a SQL-like syntax, makes the Java world feel simple and light, and opens Hadoop up to people who are not programmers.

Starting from Hive, analysts too can play with big data.

Contents

  1. Installing Hive
  2. Basic Hive usage: CRUD
  3. Hive interactive mode
  4. Importing data
  5. Exporting data
  6. Querying with HiveQL
  7. Hive views
  8. Hive partitioned tables

1. Installing Hive

System environment
Once the Hadoop environment is in place, we can install Hive on the namenode machine (c1).
For the Hadoop environment, see the "Let Hadoop Run in the Cloud" series and "RHadoop Practice Part 1: Hadoop Environment Setup".

Download: hive-0.9.0.tar.gz
Unpack to: /home/cos/toolkit/hive-0.9.0

Configure Hive


~ cd /home/cos/toolkit/hive-0.9.0
~ cp hive-default.xml.template hive-site.xml
~ cp hive-log4j.properties.template hive-log4j.properties

Edit the hive-site.xml configuration file
to store Hive's metadata in MySQL:


~ vi conf/hive-site.xml

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://c1:3306/hive_metadata?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>

<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>

Edit hive-log4j.properties

#log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

Set the environment variables


~ sudo vi /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/cos/toolkit/ant184/bin:/home/cos/toolkit/jdk16/bin:/home/cos/toolkit/maven3/bin:/home/cos/toolkit/hadoop-1.0.3/bin:/home/cos/toolkit/hive-0.9.0/bin"

JAVA_HOME=/home/cos/toolkit/jdk16
ANT_HOME=/home/cos/toolkit/ant184
MAVEN_HOME=/home/cos/toolkit/maven3

HADOOP_HOME=/home/cos/toolkit/hadoop-1.0.3
HIVE_HOME=/home/cos/toolkit/hive-0.9.0

CLASSPATH=/home/cos/toolkit/jdk16/lib/dt.jar:/home/cos/toolkit/jdk16/lib/tools.jar

Create the directories on HDFS


$HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse

Create the metadata database in MySQL


create database hive_metadata;
grant all on hive_metadata.* to hive@'%' identified by 'hive';
grant all on hive_metadata.* to hive@localhost identified by 'hive';
ALTER DATABASE hive_metadata CHARACTER SET latin1;

Manually copy the MySQL JDBC driver into hive/lib


~ ls /home/cos/toolkit/hive-0.9.0/lib
mysql-connector-java-5.1.22-bin.jar

Start Hive


#start the metastore service
~ bin/hive --service metastore &
Starting Hive Metastore Server

#start the hiveserver service
~ bin/hive --service hiveserver &
Starting Hive Thrift Server

#start the hive client
~ bin/hive shell
Logging initialized using configuration in file:/root/hive-0.9.0/conf/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201211141845_1864939641.txt

hive> show tables;
OK

Query the metadata stored in MySQL


~ mysql -uroot -p
mysql> use hive_metadata;
Database changed

mysql> show tables;
+-------------------------+
| Tables_in_hive_metadata |
+-------------------------+
| BUCKETING_COLS          |
| CDS                     |
| COLUMNS_V2              |
| DATABASE_PARAMS         |
| DBS                     |
| IDXS                    |
| INDEX_PARAMS            |
| PARTITIONS              |
| PARTITION_KEYS          |
| PARTITION_KEY_VALS      |
| PARTITION_PARAMS        |
| PART_COL_PRIVS          |
| PART_PRIVS              |
| SDS                     |
| SD_PARAMS               |
| SEQUENCE_TABLE          |
| SERDES                  |
| SERDE_PARAMS            |
| SORT_COLS               |
| TABLE_PARAMS            |
| TBLS                    |
| TBL_COL_PRIVS           |
| TBL_PRIVS               |
+-------------------------+
23 rows in set (0.00 sec)

Hive is now installed; what follows is the usage guide.

2. Basic Hive usage

1. Enter the Hive console


~ cd /home/cos/toolkit/hive-0.9.0

~ bin/hive shell
Logging initialized using configuration in file:/home/cos/toolkit/hive-0.9.0/conf/hive-log4j.properties
Hive history file=/tmp/cos/hive_job_log_cos_201307160003_95040367.txt
hive>

Create a table


#create the data file (tab-separated text)
~ vi /home/cos/demo/t_hive.txt

16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34

#create a new table
hive> CREATE TABLE t_hive (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
OK
Time taken: 0.489 seconds

#load t_hive.txt into the t_hive table
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hive.txt' OVERWRITE INTO TABLE t_hive ;
Copying data from file:/home/cos/demo/t_hive.txt
Copying file: file:/home/cos/demo/t_hive.txt
Loading data to table default.t_hive
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive
OK
Time taken: 0.397 seconds

View tables and data


#list tables
hive> show tables;
OK
t_hive
Time taken: 0.099 seconds

#match table names against a pattern
hive>show tables '*t*';
OK
t_hive
Time taken: 0.065 seconds

#view the table data
hive> select * from t_hive;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.264 seconds

#describe the table
hive> desc t_hive;
OK
a       int
b       int
c       int
Time taken: 0.1 seconds

Alter a table


#add a column
hive> ALTER TABLE t_hive ADD COLUMNS (new_col String);
OK
Time taken: 0.186 seconds
hive> desc t_hive;
OK
a       int
b       int
c       int
new_col string
Time taken: 0.086 seconds

#rename the table
~ ALTER TABLE t_hive RENAME TO t_hadoop;
OK
Time taken: 0.45 seconds
hive> show tables;
OK
t_hadoop
Time taken: 0.07 seconds

Drop a table


hive> DROP TABLE t_hadoop;
OK
Time taken: 0.767 seconds

hive> show tables;
OK
Time taken: 0.064 seconds

3. Hive interactive mode

  • quit, exit: leave the interactive shell
  • reset: reset the configuration to its default values
  • set <key>=<value>: set the value of a particular variable (note: if the variable name is misspelled, no error is reported)
  • set: print the configuration variables overridden by the user
  • set -v: print all Hadoop and Hive configuration variables
  • add FILE[S] *, add JAR[S] *, add ARCHIVE[S] *: add one or more files, jars, or archives to the distributed cache
  • list FILE[S], list JAR[S], list ARCHIVE[S]: list the resources already added to the distributed cache
  • list FILE[S] *, list JAR[S] *, list ARCHIVE[S] *: check whether the given resources are in the distributed cache
  • delete FILE[S] *, delete JAR[S] *, delete ARCHIVE[S] *: remove the given resources from the distributed cache
  • ! <command>: run a shell command from the Hive shell
  • dfs <dfs command>: run a dfs command from the Hive shell
  • <query string>: run a Hive query and print the result to standard output
  • source FILE <filepath>: run a Hive script file inside the CLI

4. Importing data

We continue with the t_hive table from above.

#create the table structure
hive> CREATE TABLE t_hive (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

Load data from the local file system (LOCAL)


hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hive.txt' OVERWRITE INTO TABLE t_hive ;
Copying data from file:/home/cos/demo/t_hive.txt
Copying file: file:/home/cos/demo/t_hive.txt
Loading data to table default.t_hive
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive
OK
Time taken: 0.612 seconds

#find the just-loaded data in HDFS
~ hadoop fs -cat /user/hive/warehouse/t_hive/t_hive.txt

16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34

Load data from HDFS


Create the table t_hive2
hive> CREATE TABLE t_hive2 (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

#load the data from HDFS
hive> LOAD DATA INPATH '/user/hive/warehouse/t_hive/t_hive.txt' OVERWRITE INTO TABLE t_hive2;
Loading data to table default.t_hive2
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive2
OK
Time taken: 0.325 seconds

#view the data
hive> select * from t_hive2;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.287 seconds

Import data from another table


hive> INSERT OVERWRITE TABLE t_hive2 SELECT * FROM t_hive ;

Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201307131407_0002, Tracking URL = http://c1.wtmart.com:50030/jobdetails.jsp?jobid=job_201307131407_0002
Kill Command = /home/cos/toolkit/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://c1.wtmart.com:9001 -kill job_201307131407_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-07-16 10:32:41,979 Stage-1 map = 0%,  reduce = 0%
2013-07-16 10:32:48,034 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:49,050 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:50,068 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:51,082 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:52,093 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:53,102 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:54,112 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.03 sec
MapReduce Total cumulative CPU time: 1 seconds 30 msec
Ended Job = job_201307131407_0002
Ended Job = -314818888, job is filtered out (removed at runtime).
Moving data to: hdfs://c1.wtmart.com:9000/tmp/hive-cos/hive_2013-07-16_10-32-31_323_5732404975764014154/-ext-10000
Loading data to table default.t_hive2
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive2
Table default.t_hive2 stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 56, raw_data_size: 0]
7 Rows loaded to t_hive2
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.03 sec   HDFS Read: 273 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 30 msec
OK
Time taken: 23.227 seconds

hive> select * from t_hive2;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.134 seconds

Create a table and populate it from another table


#drop the table
hive> DROP TABLE t_hive;

#create a table and populate it from another table
hive> CREATE TABLE t_hive AS SELECT * FROM t_hive2 ;

Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201307131407_0003, Tracking URL = http://c1.wtmart.com:50030/jobdetails.jsp?jobid=job_201307131407_0003
Kill Command = /home/cos/toolkit/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://c1.wtmart.com:9001 -kill job_201307131407_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-07-16 10:36:48,612 Stage-1 map = 0%,  reduce = 0%
2013-07-16 10:36:54,648 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:55,657 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:56,666 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:57,673 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:58,683 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:59,691 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.13 sec
MapReduce Total cumulative CPU time: 1 seconds 130 msec
Ended Job = job_201307131407_0003
Ended Job = -670956236, job is filtered out (removed at runtime).
Moving data to: hdfs://c1.wtmart.com:9000/tmp/hive-cos/hive_2013-07-16_10-36-39_986_1343249562812540343/-ext-10001
Moving data to: hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive
Table default.t_hive stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 56, raw_data_size: 0]
7 Rows loaded to hdfs://c1.wtmart.com:9000/tmp/hive-cos/hive_2013-07-16_10-36-39_986_1343249562812540343/-ext-10000
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.13 sec   HDFS Read: 272 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 130 msec
OK
Time taken: 20.13 seconds

hive> select * from t_hive;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.109 seconds

Copy only the table structure, without the data


hive> CREATE TABLE t_hive3 LIKE t_hive;
hive> select * from t_hive3;
OK
Time taken: 0.077 seconds

Importing data from a MySQL database
will be covered when we introduce Sqoop.

5. Exporting data

Copy from HDFS to another HDFS location


~ hadoop fs -cp /user/hive/warehouse/t_hive /

~ hadoop fs -ls /t_hive
Found 1 items
-rw-r--r--   1 cos supergroup         56 2013-07-16 10:41 /t_hive/000000_0

~ hadoop fs -cat /t_hive/000000_0
1623
611213
41231
17213
71231
11234
11234

Export to the local file system through Hive


hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t_hive' SELECT * FROM t_hive;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201307131407_0005, Tracking URL = http://c1.wtmart.com:50030/jobdetails.jsp?jobid=job_201307131407_0005
Kill Command = /home/cos/toolkit/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://c1.wtmart.com:9001 -kill job_201307131407_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-07-16 10:46:24,774 Stage-1 map = 0%,  reduce = 0%
2013-07-16 10:46:30,823 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:31,833 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:32,844 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:33,856 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:34,865 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:35,873 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:36,884 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 0.87 sec
MapReduce Total cumulative CPU time: 870 msec
Ended Job = job_201307131407_0005
Copying data to local directory /tmp/t_hive
Copying data to local directory /tmp/t_hive
7 Rows loaded to /tmp/t_hive
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 0.87 sec   HDFS Read: 271 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 870 msec
OK
Time taken: 23.369 seconds

#check the file on the local file system
hive> ! cat /tmp/t_hive/000000_0;
hive> 1623
611213
41231
17213
71231
11234
11234

6. Querying with HiveQL

Note: the map/reduce log output has been trimmed from the code below.

Ordinary queries: column aliases, nested subqueries, WHERE and LIMIT


hive> FROM (
    >   SELECT b,c as c2 FROM t_hive
    > ) t
    > SELECT t.b, t.c2
    > WHERE b>2
    > LIMIT 2;
12      13
21      3

Join query: JOIN


hive> SELECT t1.a,t1.b,t2.a,t2.b
    > FROM t_hive t1 JOIN t_hive2 t2 on t1.a=t2.a
    > WHERE t1.c>10;

1       12      1       12
11      2       11      2
41      2       41      2
61      12      61      12
71      2       71      2

Aggregate query 1: count, avg


hive> SELECT count(*), avg(a) FROM t_hive;
7       31.142857142857142

Aggregate query 2: count distinct


hive> SELECT count(DISTINCT b) FROM t_hive;
3

Aggregate query 3: GROUP BY, HAVING


#GROUP BY
hive> SELECT avg(a),b,sum(c) FROM t_hive GROUP BY b,c
16.0    2       3
56.0    2       62
11.0    2       34
61.0    12      13
1.0     12      34
17.0    21      3

#HAVING
hive> SELECT avg(a),b,sum(c) FROM t_hive GROUP BY b,c HAVING sum(c)>30
56.0    2       62
11.0    2       34
1.0     12      34

7. Hive views

Hive views are the same concept as views in a relational database; we continue with t_hive as the example.


hive> CREATE VIEW v_hive AS SELECT a,b FROM t_hive where c>30;
hive> select * from v_hive;
41      2
71      2
1       12
11      2

Drop a view


hive> DROP VIEW IF EXISTS v_hive;
OK
Time taken: 0.495 seconds

8. Hive partitioned tables

Partitioned tables are a basic database concept, but often the data volume is small and we never need them. Hive is OLAP data-warehouse software that deals with very large volumes of data, so in that setting partitioned tables become very important!

Below we define a new table structure: t_hft.

Create the data


~ vi /home/cos/demo/t_hft_20130627.csv
000001,092023,9.76
000002,091947,8.99
000004,092002,9.79
000005,091514,2.2
000001,092008,9.70
000001,092059,9.45

~ vi /home/cos/demo/t_hft_20130628.csv
000001,092023,9.76
000002,091947,8.99
000004,092002,9.79
000005,091514,2.2
000001,092008,9.70
000001,092059,9.45

Create the table


DROP TABLE IF EXISTS t_hft;
CREATE TABLE t_hft(
SecurityID STRING,
tradeTime STRING,
PreClosePx DOUBLE
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

Create the partitioned table
Based on the business requirements, the data is partitioned by trading day:


DROP TABLE IF EXISTS t_hft;
CREATE TABLE t_hft(
SecurityID STRING,
tradeTime STRING,
PreClosePx DOUBLE
) PARTITIONED BY (tradeDate INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

Load the data


#20130627
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hft_20130627.csv' OVERWRITE INTO TABLE t_hft PARTITION (tradeDate=20130627);
Copying data from file:/home/cos/demo/t_hft_20130627.csv
Copying file: file:/home/cos/demo/t_hft_20130627.csv
Loading data to table default.t_hft partition (tradedate=20130627)

#20130628
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hft_20130628.csv' OVERWRITE INTO TABLE t_hft PARTITION (tradeDate=20130628);
Copying data from file:/home/cos/demo/t_hft_20130628.csv
Copying file: file:/home/cos/demo/t_hft_20130628.csv
Loading data to table default.t_hft partition (tradedate=20130628)

View the partitions


hive> SHOW PARTITIONS t_hft;
tradedate=20130627
tradedate=20130628
Time taken: 0.082 seconds

Query the data


hive> select * from t_hft where securityid='000001';
000001  092023  9.76    20130627
000001  092008  9.7     20130627
000001  092059  9.45    20130627
000001  092023  9.76    20130628
000001  092008  9.7     20130628
000001  092059  9.45    20130628

hive> select * from t_hft where tradedate=20130627 and PreClosePx<9;
000002  091947  8.99    20130627
000005  091514  2.2     20130627

That covers basic Hive usage; these are all day-to-day operations. Later I will continue with HiveQL optimization and Hive operations and maintenance.

Please credit the source when reposting:
 http://blog.fens.me/hadoop-hive-intro/


A Two-Node Cassandra Cluster Experiment

cassandra-title

Preface

Apache Cassandra is an open-source distributed key-value store, originally developed by Facebook to hold extremely large amounts of data. Its main characteristics are: distributed, column-based structure, and high scalability. As one of the flag-bearers of NoSQL it has by now been overtaken by HBase, but many of Cassandra's design ideas are still well worth studying and borrowing from. Thanks to teacher tigerfish for the detailed lectures; I learned a lot!

A few especially useful concepts in Cassandra: consistent hashing, the Gossip protocol, snitches, replication strategies, DHT, and Bloom filters.
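
To give a feel for the first of those concepts, here is a toy sketch of consistent hashing in R. It is purely illustrative and is not how Cassandra implements it (Cassandra 1.2 uses vnodes and the Murmur3 partitioner); the hash function below is a deliberately crude stand-in:


#toy hash: map any string onto a ring of positions 0..999
toy_hash <- function(s) sum(utf8ToInt(s) * seq_along(utf8ToInt(s))) %% 1000

#place the two nodes used in this experiment on the ring
nodes <- c("192.168.1.200", "192.168.1.201")
node_pos <- sort(sapply(nodes, toy_hash))

#a key is stored on the first node at or after its hash position, wrapping around
locate <- function(key) {
  h <- toy_hash(key)
  owner <- names(node_pos)[node_pos >= h][1]
  if (is.na(owner)) owner <- names(node_pos)[1]
  owner
}

sapply(c("Users:1", "Users:2", "Users:1234"), locate)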

About the author:
Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
weibo: @Conan_Z
blog: http://blog.fens.me
email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/cassandra-clustor/

Contents

  1. A two-node Cassandra cluster experiment
  2. Errors during the experiment and how to fix them

1. A two-node Cassandra cluster experiment

1. Download Cassandra and configure the Java environment (skipped here).
2. Install the first Cassandra node; unpack it into the /home/conan/toolkit/cassandra125 directory.

~ pwd
/home/conan/toolkit/cassandra125

IP address: 192.168.1.200


~ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:90:e8:19
      inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
      inet6 addr: fe80::a00:27ff:fe90:e819/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:16943 errors:0 dropped:0 overruns:0 frame:0
      TX packets:19527 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:1433046 (1.4 MB)  TX bytes:2902059 (2.9 MB)

3. Set the environment variables


~ sudo vi /etc/environment
CASSANDRA_HOME=/home/conan/toolkit/cassandra125

~ . /etc/environment

~ export |grep /home/conan/toolkit/cassandra125
declare -x CASSANDRA_HOME="/home/conan/toolkit/cassandra125"
declare -x OLDPWD="/home/conan/toolkit/cassandra125"
declare -x PWD="/home/conan/toolkit/cassandra125/bin"

4. Create the storage and log directories


~ sudo rm -rf /var/lib/cassandra

~ sudo mkdir -p /var/lib/cassandra/data
~ sudo mkdir -p /var/lib/cassandra/saved_caches
~ sudo mkdir -p /var/lib/cassandra/commitlog
~ sudo mkdir -p /var/log/cassandra/

~ sudo chown -R conan:conan /var/lib/cassandra
~ sudo chown -R conan:conan /var/log/cassandra

~ ls -l /var/lib/cassandra
drwxr-xr-x  2 conan conan 4096 Jul  4 00:15 commitlog/
drwxr-xr-x  2 conan conan 4096 Jul  4 00:15 data/
drwxr-xr-x  2 conan conan 4096 Jul  4 00:15 saved_caches/

5. Edit the cassandra.yaml configuration file; the changed settings are listed in file order.


~ vi /home/conan/toolkit/cassandra125/conf/cassandra.yaml

cluster_name: 'case1'
num_tokens: 256

data_file_directories:
- /var/lib/cassandra/data

commitlog_directory: /var/lib/cassandra/commitlog

saved_caches_directory: /var/lib/cassandra/saved_caches

seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
  parameters:
      - seeds: "192.168.1.200"

#listen_address: localhost
listen_address: 192.168.1.200

#rpc_address: localhost
rpc_address: 192.168.1.200

endpoint_snitch: SimpleSnitch

6. Start the node


~ cd /home/conan/toolkit/cassandra125/

~ bin/cassandra -f

#partial log output
INFO 00:23:22,785 Enqueuing flush of Memtable-schema_columnfamilies@1792194126(1097/1097 serialized/live bytes, 20 ops)
INFO 00:23:22,786 Writing Memtable-schema_columnfamilies@1792194126(1097/1097 serialized/live bytes, 20 ops)
INFO 00:23:22,796 Completed flushing /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-ic-2-Data.db (698 bytes) for commitlog position ReplayPosition(segmentId=1372868601408, position=64705)
INFO 00:23:22,797 Enqueuing flush of Memtable-schema_columns@552364977(251/251 serialized/live bytes, 5 ops)
INFO 00:23:22,798 Writing Memtable-schema_columns@552364977(251/251 serialized/live bytes, 5 ops)
INFO 00:23:22,808 Completed flushing /var/lib/cassandra/data/system/schema_columns/system-schema_columns-ic-2-Data.db (209 bytes) for commitlog position ReplayPosition(segmentId=1372868601408, position=64705)
INFO 00:23:22,894 Starting listening for CQL clients on /192.168.1.200:9042...
INFO 00:23:22,906 Binding thrift service to /192.168.1.200:9160
INFO 00:23:22,931 Using TFramedTransport with a max frame size of 15728640 bytes.
INFO 00:23:22,952 Using synchronous/threadpool thrift server on 192.168.1.200 : 9160
INFO 00:23:22,953 Listening for thrift clients...
INFO 00:23:33,101 Created default superuser 'cassandra'

7. Check the cluster status


bin/nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.1.200  51.01 KB   256     100.0%            e7106e0a-1a9e-43a2-9bcc-fc1201076fee  rack1

8. Add a second node to the cluster (two nodes in total)
Machine IP: 192.168.1.201


~ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:0d:0b:0b
          inet addr:192.168.1.201  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe0d:b0b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:45455 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14717 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33590582 (33.5 MB)  TX bytes:2549931 (2.5 MB)

9. Copy the cassandra125 and jdk directories over from the first node, 192.168.1.200


~ pwd
/home/conan/toolkit

~ scp -r conan@192.168.1.200:/home/conan/toolkit/cassandra125 .
~ scp -r conan@192.168.1.200:/home/conan/toolkit/jdk16 .

~ ls -l
drwxrwxr-x  9 conan conan 4096 Apr 25 03:04 cassandra125
drwxr-xr-x 10 conan conan 4096 Apr 25 03:33 jdk16

10. Set the environment variables


~ sudo vi /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/conan/toolkit/jdk16/bin:/home/conan/toolkit/cassandra/bin"
JAVA_HOME=/home/conan/toolkit/jdk16
CASSANDRA_HOME=/home/conan/toolkit/cassandra125

~ . /etc/environment

~ java -version
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)

11. Create the storage and log directories


~ sudo rm -rf /var/lib/cassandra

~ sudo mkdir -p /var/lib/cassandra/data
~ sudo mkdir -p /var/lib/cassandra/saved_caches
~ sudo mkdir -p /var/lib/cassandra/commitlog
~ sudo mkdir -p /var/log/cassandra/

~ sudo chown -R conan:conan /var/lib/cassandra
~ sudo chown -R conan:conan /var/log/cassandra

~ ls -l /var/lib/cassandra
drwxr-xr-x  2 conan conan 4096 Jul  4 00:15 commitlog/
drwxr-xr-x  2 conan conan 4096 Jul  4 00:15 data/
drwxr-xr-x  2 conan conan 4096 Jul  4 00:15 saved_caches/

12. Edit the cassandra.yaml configuration file; the changed settings are listed in file order.


~ vi /home/conan/toolkit/cassandra125/conf/cassandra.yaml

cluster_name: 'case1'

seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
  parameters:
      - seeds: "192.168.1.200"

listen_address: 192.168.1.201

rpc_address: 192.168.1.201

13. Start the node at 192.168.1.201


~ bin/cassandra -f

//partial log output
INFO 03:36:47,476 Completed flushing /var/lib/cassandra/data/system/local/system-local-ic-4-Data.db (75 bytes) for commitlog position ReplayPosition(segmentId=1366832174115, position=77582)
INFO 03:36:47,504 Compacting [SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-1-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-3-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-4-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ic-2-Data.db')]
INFO 03:36:47,527 Enqueuing flush of Memtable-local@692438881(10094/10094 serialized/live bytes, 257 ops)
INFO 03:36:47,533 Writing Memtable-local@692438881(10094/10094 serialized/live bytes, 257 ops)
INFO 03:36:47,547 Completed flushing /var/lib/cassandra/data/system/local/system-local-ic-5-Data.db (5365 bytes) for commitlog position ReplayPosition(segmentId=1366832174115, position=89585)
INFO 03:36:47,660 Node /192.168.1.201 state jump to normal
INFO 03:36:47,663 Startup completed! Now serving reads.
INFO 03:36:47,686 Compacted 4 sstables to [/var/lib/cassandra/data/system/local/system-local-ic-6,].  5,956 bytes to 5,687 (~95% of original) in 158ms = 0.034326MB/s.  4 total rows, 1 unique.  Row merge counts were {1:0, 2:0, 3:0, 4:1, }
INFO 03:36:47,771 Starting listening for CQL clients on /192.168.1.201:9042...
INFO 03:36:47,785 Binding thrift service to /192.168.1.201:9160
INFO 03:36:47,810 Using TFramedTransport with a max frame size of 15728640 bytes.
INFO 03:36:47,834 Using synchronous/threadpool thrift server on 192.168.1.201 : 9160
INFO 03:36:47,834 Listening for thrift clients...

14. Check the log on node 1, 192.168.1.200


INFO 01:01:27,382 InetAddress /192.168.1.201 is now UP
INFO 01:01:58,660 Beginning transfer to /192.168.1.201
INFO 01:01:58,661 Flushing memtables for [CFS(Keyspace='system_auth', ColumnFamily='users')]...
INFO 01:01:58,663 Enqueuing flush of Memtable-users@1338035062(28/28 serialized/live bytes, 2 ops)
INFO 01:01:58,668 Writing Memtable-users@1338035062(28/28 serialized/live bytes, 2 ops)
INFO 01:01:59,010 Completed flushing /var/lib/cassandra/data/system_auth/users/system_auth-users-ic-1-Data.db (64 bytes) for commitlog position ReplayPosition(segmentId=1372868601408, position=65900)
INFO 01:01:59,047 Stream context metadata [/var/lib/cassandra/data/system_auth/users/system_auth-users-ic-1-Data.db sections=1 progress=0/64 - 0%], 1 sstables.
INFO 01:01:59,048 Streaming to /192.168.1.201
INFO 01:01:59,122 Successfully sent /var/lib/cassandra/data/system_auth/users/system_auth-users-ic-1-Data.db to /192.168.1.201
INFO 01:01:59,123 Finished streaming session to /192.168.1.201
INFO 01:01:59,424 Enqueuing flush of Memtable-peers@1855686378(10279/10279 serialized/live bytes, 271 ops)
INFO 01:01:59,425 Writing Memtable-peers@1855686378(10279/10279 serialized/live bytes, 271 ops)
INFO 01:01:59,497 Completed flushing /var/lib/cassandra/data/system/peers/system-peers-ic-1-Data.db (5538 bytes) for commitlog position ReplayPosition(segmentId=1372868601408, position=77902)

15. The node 192.168.1.201 has successfully joined the cluster.


bin/nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.1.200  65.46 KB   256     48.7%             e7106e0a-1a9e-43a2-9bcc-fc1201076fee  rack1
UN  192.168.1.201  58.04 KB   256     51.3%             8eef1965-9822-44bf-a9f6-fff5b87bc474  rack1

The experiment is complete!

2. Errors during the experiment and how to fix them

1. Do not change the cluster name on a node whose data has already been initialized.
Otherwise you will see the following error:


ERROR 23:29:58,004 Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name Test Cluster != configured name case1

Solution: http://wiki.apache.org/cassandra/FAQ


Cassandra says "ClusterName mismatch: oldClusterName != newClusterName" and refuses to start

To prevent operator errors, Cassandra stores the name of the cluster in its system table. If you need to rename a cluster for some reason, you can:

Perform these steps on each node:

Start the cassandra-cli connected locally to this node.
Run the following:
use system;
set LocationInfo[utf8('L')][utf8('ClusterName')]=utf8('');
exit;
Run nodetool flush on this node.
Update the cassandra.yaml file for the cluster_name as the same as 2b).
Restart the node.
Once all nodes have been had this operation performed and restarted, nodetool ring should show all nodes as UP.

2. When editing the configuration file, there must be a space after each colon.
Otherwise you will see an error like:


while scanning a simple key; could not found expected ':'

For example, for the following setting:


#wrong syntax
listen_address:localhost

#correct syntax
listen_address: localhost

Explanation:
The error above is a YAML parse failure caused by the missing space between the ':' and the value.

Please credit the source when reposting:
http://blog.fens.me/cassandra-clustor/
