• Posts tagged "Mongodb"

Blog Archives

Installing MongoDB on Ubuntu

This post belongs to the MongoDB deployment experiment series. As a NoSQL database, MongoDB has kept gaining momentum over the past few years, and more and more companies are trying MongoDB as a replacement for their existing databases. MongoDB also performs quite well at clustering, sharding, and replication. I will introduce these topics through a variety of MongoDB deployment experiments.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo:@Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/linux-mongodb-install/


Introduction

As a document-oriented NoSQL database, MongoDB is very flexible to use and avoids the complex up-front schema design that relational databases require. MongoDB stores data in a JSON-based format and uses Javascript as its database operation language, which gives users enormous room to maneuver: even very complex conditional queries can be solved by programming directly against the MongoDB server.

Contents

  1. Installing MongoDB on Windows
  2. Installing MongoDB on Linux Ubuntu
  3. Accessing MongoDB from the command-line client

1. Installing MongoDB on Windows

Installing the MongoDB database on a Windows system is very simple: download the executable installer (exe) and double-click it to install. Download: http://www.mongodb.org/downloads

  • Run the MongoDB server: <MongoDB install dir>/bin/mongod.exe
  • Run the MongoDB client: <MongoDB install dir>/bin/mongo.exe

2. Installing MongoDB on Linux Ubuntu

This article uses Ubuntu 12.04.2 LTS 64-bit. The MongoDB package can be installed with apt-get, but first we need to add the official MongoDB package source.

Edit apt's sources.list, adding the 10gen repository configuration.


# Fetch and import the 10gen signing key
~  sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.kVFab9XYw0 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com
gpg: key 7F0CEB10: public key "Richard Kreuter " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

# Add the MongoDB repository to the sources list
~ echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

# Update the package index
~ sudo apt-get update

Install the MongoDB database on Linux Ubuntu


# Install the MongoDB server
~ sudo apt-get install mongodb-10gen

After installation, the MongoDB server starts automatically. Let's check the server process.


# Check the MongoDB server system process
~  ps -aux|grep mongo
mongodb   6870  3.7  0.4 349208 39740 ?        Ssl  10:27   2:23 /usr/bin/mongod --config /etc/mongodb.conf

# Check that the server is listening on port 27017
~  netstat -nlt|grep 27017
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN

# Check the MongoDB server status via the init script
~ sudo /etc/init.d/mongodb status
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongodb status

Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the status(8) utility, e.g. status mongodb
mongodb start/running, process 6870

# Check the MongoDB server status via the system service
~ sudo service mongodb status
mongodb start/running, process 6870

You can also check the MongoDB server's status through the web console: open http://ip:28017 in a browser.

[screenshot: MongoDB web console]
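
A quick way to probe the web console from the shell (a minimal sketch, assuming curl is installed):

# fetch the start of the HTML status page served by mongod's HTTP interface
~ curl -s http://127.0.0.1:28017 | head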

3. Accessing MongoDB from the command-line client

Installing the MongoDB server also installs the MongoDB command-line client automatically.

On the local machine, just type the mongo command to start the client and connect to the MongoDB server.


~ mongo
MongoDB shell version: 2.4.9
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user

# Show command-line help
> help
        db.help()                    help on db methods
        db.mycoll.help()             help on collection methods
        sh.help()                    sharding helpers
        rs.help()                    replica set helpers
        help admin                   administrative help
        help connect                 connecting to a db help
        help keys                    key shortcuts
        help misc                    misc things to know
        help mr                      mapreduce

        show dbs                     show database names
        show collections             show collections in current database
        show users                   show users in current database
        show profile                 show most recent system.profile entries with time >= 1ms
        show logs                    show the accessible logger names
        show log [name]              prints out the last segment of log in memory, 'global' is default
        use <db_name>                set current database
        db.foo.find()                list objects in collection foo
        db.foo.find( { a : 1 } )     list objects in foo where a == 1
        it                           result of the last line evaluated; use to further iterate
        DBQuery.shellBatchSize = x   set default number of items to display on shell
        exit                         quit the mongo shell
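
As a quick sanity check of the client (a minimal sketch; testdb and books are arbitrary names):

> use testdb
switched to db testdb
> db.books.insert({title: "fens.me", tags: ["mongodb", "ubuntu"]})
> db.books.find()    // prints the document just inserted
> db.books.count()
1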

By default, the MongoDB server allows external access. With that, we have a single-node MongoDB successfully installed on Linux Ubuntu.
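
If the server should not be reachable from outside, the relevant options live in /etc/mongodb.conf, the config file installed by the Ubuntu package (a sketch; adjust to your needs):

# /etc/mongodb.conf -- listen on localhost only
bind_ip = 127.0.0.1
# optionally disable the web console on port 28017
nohttpinterface = true

Restart the service with sudo service mongodb restart for the change to take effect.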

Please credit the source when reposting:
http://blog.fens.me/linux-mongodb-install/


Fuzzy Queries on MongoDB from Nodejs

The nodejs-from-scratch series

The nodejs-from-scratch series introduces how to use Javascript as a server-side scripting language and do web development on the Nodejs framework. Nodejs is built on the V8 engine, currently the fastest Javascript engine; the chrome browser is also based on V8 and stays smooth with 20-30 pages open at once. Express, the standard Nodejs web framework, helps us stand up a web site quickly, with higher productivity than PHP and a gentler learning curve. It is a great fit for small sites, personal sites, and our own geek sites!!

About the author
Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
weibo:@Conan_Z
blog: http://blog.fens.me
email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/nodejs-mongodb-regexp


Introduction

Fuzzy matching is one of the basic database operations: testing whether a given string matches a specified pattern. A complete character match can be expressed with the = sign; a partial match can be regarded as a fuzzy query. In relational databases, SQL offers the like '%fens%' syntax. So how do we get the same fuzzy-query effect in mongodb?

Contents

  1. Fuzzy queries in mongodb
  2. Fuzzy queries from nodejs via mongoose

1. Fuzzy queries in mongodb

Open the mongodb shell; we will test against the text field name.

Exact query
With {'name':'未来警察'}, exactly one record is matched.


db.movies.find({'name':'未来警察'})

[screenshot: exact query result]

Fuzzy query
With {'name':/未来/}, multiple records are matched.


db.movies.find({'name':/未来/})

[screenshot: fuzzy query result]

MongoDB's fuzzy query is really a form of regular-expression query.
Note: relational databases have a dedicated like keyword for fuzzy queries, but even there you can use regular-expression queries instead of like.

Official MongoDB documentation: http://docs.mongodb.org/manual/reference/operator/regex/

Official examples:
db.collection.find( { field: /acme.*corp/i } );
db.collection.find( { field: { $regex: 'acme.*corp', $options: 'i' } } );

2. Fuzzy queries from nodejs via mongoose

The effect we want to achieve:
[screenshot: movie list filtered by a keyword]

Now let's look at how to do a fuzzy query with mongoose.

Accessing mongodb through mongoose was already covered in the earlier post "Mongoose使用案例–让JSON数据直接入库MongoDB" (a Mongoose use case: storing JSON data directly into MongoDB).

We model Movie and build a DAO layer.
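
For reference, here is a minimal sketch of what the Movie model might look like (the connection string and schema fields are assumptions; the real model is defined in the article linked above):

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/movie');

// minimal Movie schema: only the name field is used in this post
var MovieSchema = new mongoose.Schema({
  name: String
});
var Movie = mongoose.model('Movie', MovieSchema);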

Look up a movie by a query object


// generic finder: query is a mongoose condition object
MovieDAO.prototype.findByName = function(query, callback) {
  Movie.findOne(query, function(err, obj){
    callback(err, obj);
  });
};

Passing in a query object performs the lookup.

Next, build the query object:


// code fragment
exports.movie = function(req, res) {
  var query={};
  if(req.query.m2) {
    query['name']=new RegExp(req.query.m2); // fuzzy-query parameter
  }

  Movie.findByName(query, function(err, list){
    return res.render('admin/movie', {movieList:list});
  });
}

Note that, as analyzed above, MongoDB's fuzzy query is implemented with regular expressions: in the mongodb shell you can write the pattern directly between '/../' slashes.
In nodejs, however, you must use RegExp to construct the regular-expression object.
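
One caveat: req.query.m2 comes straight from the user, so it is safer to escape regex metacharacters before building the RegExp (a sketch):

// escape regex metacharacters so the keyword is matched literally
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

if (req.query.m2) {
  // 'i' makes the match case-insensitive
  query['name'] = new RegExp(escapeRegExp(req.query.m2), 'i');
}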

It is really simple, just a thin layer of paper to poke through. Once you know how it is implemented, everything becomes easy.

Please credit the source when reposting:
http://blog.fens.me/nodejs-mongodb-regexp


Wrapping mongodb as a system service with upstart

The ubuntu utilities series

The operating-system utilities series introduces the configuration and use of various tools on Linux ubuntu. Some of them everyone knows by heart; others we use all the time yet still hardly know. I will record how I install and configure these tools while working with the operating system, and keep the notes for my own future reference.

About the author

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo:@Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/linux-upstart-mongodb/


Introduction

This article shows how to wrap a mongodb application as a system service: the mongod process is then managed by the operating system like any daemon. With upstart managing mongodb as a system service, day-to-day operations become easy!!

This experiment targets a single mongodb process; with a mongodb cluster the benefit is even more pronounced. In particular, automatically restarting a process that was killed abnormally makes the system more robust.

Contents:

  1. The mongodb environment
  2. The upstart job script
  3. Managing the mongodb application

 

1. The mongodb environment

moive.me is a nodejs application that uses mongodb for data storage. For the nodejs side, see the nodejs-from-scratch series.

Under normal circumstances mongodb is started with:
~ /usr/bin/mongod --config /etc/mongodb-moive.conf

Configure mongodb's startup parameters in the config file mongodb-moive.conf:

~ vi /etc/mongodb-moive.conf

dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb-moive.log
logappend=true
bind_ip = 127.0.0.1
port = 27017
journal=true
#fork=true     # if fork is enabled, upstart's stop and status commands stop working

Started this way, the application runs in the current console, and the moment the console session ends, the application stops too. Let's change the command so the program runs in the background.

~ /usr/bin/mongod --config /etc/mongodb-moive.conf &

Now the program starts in the background. As long as the process runs normally, there is not much for me to do.

But what if I want to stop the program? I have to find the mongod system process and kill it. If several mongod processes are running on the box, finding the right one is real work, and killing the wrong one could have dire consequences. For multi-process mongod, see the MongoDB deployment experiment series.

How convenient it would be if this single moive application could be managed like a system service, with start, stop, and status!

2. The upstart job script

Using upstart was covered earlier in the post "upstart把应用封装成系统服务" (wrapping an application as a system service with upstart).

~ vi /etc/init/mongodb-moive.conf

description "mongodb moive.me"
author "bsspirit <http://blog.fens.me>"

# raise the open-file limit for mongod
limit nofile 20000 20000

# give mongod up to 300 seconds to shut down before it is killed
kill timeout 300

# restart the process automatically if it dies unexpectedly,
# but give up after 2 respawns within 5 seconds
respawn
respawn limit 2 5

# make sure the data and log directories exist before starting
pre-start script
    mkdir -p /var/lib/mongodb/
    mkdir -p /var/log/mongodb/
end script

start on runlevel [2345]
stop on runlevel [06]

script
    exec  /usr/bin/mongod --config /etc/mongodb-moive.conf
end script
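
upstart picks up the new job file automatically, no reboot needed. One way to confirm the job is registered (a sketch; the stop/waiting output assumes the job has not been started yet):

~ initctl list | grep mongodb-moive
mongodb-moive stop/waiting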

3. Managing the mongodb application

Start the mongodb-moive application; the process ID is 2037.


~ start mongodb-moive
mongodb-moive start/running, process 2037

~ ps -aux|grep mongo
root      2037  0.7  1.5 705112 15960 ?        Ssl  07:53   0:00 /usr/bin/mongod --config /etc/mongodb-moive.conf

Check the status: process 2037 is running normally.


~ status mongodb-moive
mongodb-moive start/running, process 2037

~ mongo
MongoDB shell version: 2.0.4
connecting to: test

> show dbs
local   (empty)
nodejs  0.203125GB
session 0.203125GB

Abnormal-shutdown test: kill the mongod process 2037; under upstart management, the mongodb-moive application restarts automatically.


~ kill -9 2037

# check the system processes: the process ID has changed
~ ps -aux|grep mongo
root      2054  2.0  1.5 638548 15872 ?        Ssl  07:53   0:00 /usr/bin/mongod --config /etc/mongodb-moive.conf

# check the job status: the process ID did indeed change, and the restart was automatic
~ status mongodb-moive
mongodb-moive start/running, process 2054

# connect with the mongo client
~ mongo
MongoDB shell version: 2.0.4
connecting to: test

> show dbs
local   (empty)
nodejs  0.203125GB
session 0.203125GB

The log from when mongod was killed: 2037 was killed, and 2054 was restarted automatically.


***** SERVER RESTARTED *****

Sat Jun 22 07:53:35 [initandlisten] MongoDB starting : pid=2037 port=27017 dbpath=/var/lib/mongodb 64-bit host=li478-194
Sat Jun 22 07:53:35 [initandlisten] db version v2.0.4, pdfile version 4.5
Sat Jun 22 07:53:35 [initandlisten] git version: nogitversion
Sat Jun 22 07:53:35 [initandlisten] build info: Linux lamiak 2.6.42-37-generic #58-Ubuntu SMP Thu Jan 24 15:28:10 UTC 2013 x86_64 BOOST_LIB_VERSION=1_46_1
Sat Jun 22 07:53:35 [initandlisten] options: { config: "/etc/mongodb-moive.conf", dbpath: "/var/lib/mongodb", journal: "true", logappend: "true", logpath: "/var/log/mongodb/mongodb-moive.log" }
Sat Jun 22 07:53:35 [initandlisten] journal dir=/var/lib/mongodb/journal
Sat Jun 22 07:53:35 [initandlisten] recover : no journal files present, no recovery needed
Sat Jun 22 07:53:35 [initandlisten] waiting for connections on port 27017
Sat Jun 22 07:53:35 [websvr] admin web console waiting for connections on port 28017
Sat Jun 22 07:53:38 [initandlisten] connection accepted from 127.0.0.1:37554 #1
Sat Jun 22 07:53:39 [conn1] end connection 127.0.0.1:37554

***** SERVER RESTARTED *****

Sat Jun 22 07:53:56 [initandlisten] MongoDB starting : pid=2054 port=27017 dbpath=/var/lib/mongodb 64-bit host=li478-194
Sat Jun 22 07:53:56 [initandlisten] db version v2.0.4, pdfile version 4.5
Sat Jun 22 07:53:56 [initandlisten] git version: nogitversion
Sat Jun 22 07:53:56 [initandlisten] build info: Linux lamiak 2.6.42-37-generic #58-Ubuntu SMP Thu Jan 24 15:28:10 UTC 2013 x86_64 BOOST_LIB_VERSION=1_46_1
Sat Jun 22 07:53:56 [initandlisten] options: { config: "/etc/mongodb-moive.conf", dbpath: "/var/lib/mongodb", journal: "true", logappend: "true", logpath: "/var/log/mongodb/mongodb-moive.log" }
Sat Jun 22 07:53:56 [initandlisten] journal dir=/var/lib/mongodb/journal
Sat Jun 22 07:53:56 [initandlisten] recover begin
Sat Jun 22 07:53:56 [initandlisten] info no lsn file in journal/ directory
Sat Jun 22 07:53:56 [initandlisten] recover lsn: 0
Sat Jun 22 07:53:56 [initandlisten] recover /var/lib/mongodb/journal/j._0
Sat Jun 22 07:53:56 [initandlisten] recover cleaning up
Sat Jun 22 07:53:56 [initandlisten] removeJournalFiles
Sat Jun 22 07:53:56 [initandlisten] recover done
Sat Jun 22 07:53:56 [websvr] admin web console waiting for connections on port 28017
Sat Jun 22 07:53:56 [initandlisten] waiting for connections on port 27017
Sat Jun 22 07:54:04 [initandlisten] connection accepted from 127.0.0.1:37559 #1
Sat Jun 22 07:54:56 [clientcursormon] mem (MB) res:47 virt:1008 mapped:160
Sat Jun 22 07:59:56 [clientcursormon] mem (MB) res:47 virt:1008 mapped:160
Sat Jun 22 08:00:25 [conn1] end connection 127.0.0.1:37559

Normal-shutdown test: via the stop command


~ stop mongodb-moive
mongodb-moive stop/waiting

~ status mongodb-moive
mongodb-moive stop/waiting

~  ps -aux|grep mongo

Normal-shutdown test: via the mongo client


~ mongo
MongoDB shell version: 2.0.4
connecting to: test

> use admin
switched to db admin

> db.shutdownServer()
Sat Jun 22 08:10:11 DBClientCursor::init call() failed
Sat Jun 22 08:10:11 query failed : admin.$cmd { shutdown: 1.0 } to: 127.0.0.1
server should be down...
Sat Jun 22 08:10:11 trying reconnect to 127.0.0.1
Sat Jun 22 08:10:11 reconnect 127.0.0.1 failed couldn't connect to server 127.0.0.1
Sat Jun 22 08:10:11 Error: error doing query: unknown shell/collection.js:151
>
bye

~ ps -aux|grep mongo
root      2332  0.6  1.5 705112 15960 ?        Ssl  08:10   0:00 /usr/bin/mongod --config /etc/mongodb-moive.conf

We can see that even after mongo's shutdownServer() command, mongod comes back up; this is the respawn setting at work, since upstart treats the exit as a crash. It is up to us whether to keep respawn's auto-restart behavior.
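
If you would rather have shutdownServer() keep the process down, one option (a hypothetical variant of the job file above) is to comment out the respawn stanzas:

# /etc/init/mongodb-moive.conf -- variant without auto-restart
# respawn
# respawn limit 2 5

The trade-off is that a crashed mongod will then stay down until started by hand.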

We have now configured the mongodb-moive startup job to fit the moive application's needs.

System operations just became that simple.

Please credit the source when reposting:
http://blog.fens.me/linux-upstart-mongodb/


MongoDB auto sharding

This post belongs to the MongoDB deployment experiment series. As a NoSQL database, MongoDB has kept gaining momentum over the past few years, and more and more companies are trying MongoDB as a replacement for their existing databases. MongoDB also performs quite well at clustering, sharding, and replication. I will introduce these topics through a variety of MongoDB deployment experiments.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo:@Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/mongodb-shard/

 


Part 3, MongoDB auto sharding, in 6 sections:

  1. Initialize the data directories
  2. Start the shard nodes
  3. Configure the shard nodes
  4. Insert data: a sharding experiment
  5. Remove the primary shard
  6. Reset the primary shard

System environment:

Ubuntu 12.04 LTS 64bit Server

1. Initialize the data directories

Create the directories

  • config1, config2, config3 are the config nodes
  • shard1, shard2, shard3 are the shard nodes


~ pwd
/home/conan/dbs
~ mkdir config1 config2 config3 shard1 shard2 shard3
conan@u1:~/dbs$ ls -l
drwxrwxr-x 3 conan conan 4096 May 31 11:27 config1
drwxrwxr-x 3 conan conan 4096 May 31 11:27 config2
drwxrwxr-x 3 conan conan 4096 May 31 11:27 config3
drwxrwxr-x 3 conan conan 4096 May 31 11:28 shard1
drwxrwxr-x 3 conan conan 4096 May 31 11:29 shard2
drwxrwxr-x 3 conan conan 4096 May 31 11:29 shard3

2. Start the shard nodes


Start the config nodes

~ mongod --dbpath /home/conan/dbs/config1 --port 20001 --nojournal --fork --logpath /home/conan/dbs/config1.log
~ mongod --dbpath /home/conan/dbs/config2 --port 20002 --nojournal --fork --logpath /home/conan/dbs/config2.log
~ mongod --dbpath /home/conan/dbs/config3 --port 20003 --nojournal --fork --logpath /home/conan/dbs/config3.log

Start the mongos nodes

~ mongos --configdb localhost:20001,localhost:20002,localhost:20003 --port 30001 --fork --logpath /home/conan/dbs/mongos1.log
~ mongos --configdb localhost:20001,localhost:20002,localhost:20003 --port 30002 --fork --logpath /home/conan/dbs/mongos2.log

Start the shard nodes

~ mongod --dbpath /home/conan/dbs/shard1 --port 10001 --nojournal --fork --logpath /home/conan/dbs/shard1.log
~ mongod --dbpath /home/conan/dbs/shard2 --port 10002 --nojournal --fork --logpath /home/conan/dbs/shard2.log
~ mongod --dbpath /home/conan/dbs/shard3 --port 10003 --nojournal --fork --logpath /home/conan/dbs/shard3.log

Check the listening ports

~ netstat -nlt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:21003 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:30001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10002 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:30002 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10003 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:31001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11002 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:31002 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11003 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:20001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:20002 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:20003 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:21001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:21002 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN

3. Configure the shard nodes

Connect to mongos1 and add the shards in mongos:

~ mongo localhost:30001/admin
MongoDB shell version: 2.4.3
connecting to: localhost:30001/admin

mongos> db.runCommand({addshard : "localhost:10001", allowLocal : true})
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> db.runCommand({addshard : "localhost:10002", allowLocal : true})
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> db.runCommand({addshard : "localhost:10003", allowLocal : true})
{ "shardAdded" : "shard0002", "ok" : 1 }

Incorrect syntax (MongoDB 2.4.3 no longer supports this form here):

mongos> db.runCommand({addshard : "shard1/localhost:10001,localhost:10002",name:"s1", allowLocal : true})
{
"ok" : 0,
"errmsg" : "couldn't connect to new shard socket exception [CONNECT_ERROR] for shard1/localhost:10001,localhost:10002"
}

List the shards

mongos> db.runCommand({listshards:1})
{
"shards" : [
{
"_id" : "shard0000",
"host" : "localhost:10001"
},
{
"_id" : "shard0001",
"host" : "localhost:10002"
},
{
"_id" : "shard0002",
"host" : "localhost:10003"
}
],
"ok" : 1
}

Enable sharding on the database fensme

mongos> db.runCommand({"enablesharding" : "fensme"})
{ "ok" : 1 }

Note: once a database is enabled, mongos will place its different collections on different shards. But a collection's data is only spread out if the collection itself is also sharded; otherwise all of a collection's data stays on one shard.

Enable sharding on the collection fensme.users

mongos> db.runCommand({"shardcollection" : "fensme.users", "key" : {"_id" : 1,"uid":1}})
{ "collectionsharded" : "fensme.users", "ok" : 1 }

Check the sharding status

mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("51a8d3287034310ad2f6a94e")
}
shards:
{ "_id" : "shard0000", "host" : "localhost:10001" }
{ "_id" : "shard0001", "host" : "localhost:10002" }
{ "_id" : "shard0002", "host" : "localhost:10003" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "fensme", "partitioned" : true, "primary" : "shard0000" }
fensme.users
shard key: { "_id" : 1, "uid" : 1 }
chunks:
shard0000 1
{ "_id" : { "$minKey" : 1 }, "uid" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 }, "uid" : { "$maxKey" : 1 } } on : shard0000 { "t" : 1, "i" : 0 }

The fensme database is shard-enabled; the primary shard is shard0000, whose host is localhost:10001.

Now look at the config database

mongos> use config
switched to db config
mongos> show collections
changelog
chunks
collections
databases
lockpings
locks
mongos
settings
shards
system.indexes
tags
version
mongos> db.shards.find()
{ "_id" : "shard0000", "host" : "localhost:10001" }
{ "_id" : "shard0001", "host" : "localhost:10002" }
{ "_id" : "shard0002", "host" : "localhost:10003" }
mongos> db.chunks.find()
{ "_id" : "fensme.users-_id_MinKey", "lastmod" : { "t" : 1, "i" : 0 }, "lastmodEpoch" : ObjectId("51a81aaa81f2196944ef40fa"), "ns" : "fensme.users", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard0000" }

The shards are configured successfully, and the shard information is all correct.

4. Insert data: a sharding experiment

Insert data into fensme.users: a batch of 100,000 records

mongos> use fensme
switched to db fensme
mongos> for(var i=0; i<100000; i++){
... db.users.insert({_id:i*1597,uid:i});
... }
mongos> db.users.find()
{ "_id" : 10929279, "uid" : 18307 }
{ "_id" : 0, "uid" : 0 }
{ "_id" : 10929876, "uid" : 18308 }
{ "_id" : 597, "uid" : 1 }
{ "_id" : 10930473, "uid" : 18309 }
{ "_id" : 1194, "uid" : 2 }
{ "_id" : 10931070, "uid" : 18310 }
{ "_id" : 1791, "uid" : 3 }
{ "_id" : 10931667, "uid" : 18311 }
{ "_id" : 2388, "uid" : 4 }
{ "_id" : 10932264, "uid" : 18312 }
{ "_id" : 2985, "uid" : 5 }
{ "_id" : 10932861, "uid" : 18313 }
{ "_id" : 3582, "uid" : 6 }
{ "_id" : 10933458, "uid" : 18314 }
{ "_id" : 4179, "uid" : 7 }
{ "_id" : 10934055, "uid" : 18315 }
{ "_id" : 4776, "uid" : 8 }
{ "_id" : 10934652, "uid" : 18316 }
{ "_id" : 5373, "uid" : 9 }

Check how the data is distributed across the shards

mongos> db.users.stats()
{
"sharded" : true,
"ns" : "fensme.users",
"count" : 100000,
"numExtents" : 12,
"size" : 3200000,
"storageSize" : 13983744,
"totalIndexSize" : 6647088,
"indexSizes" : {
"_id_" : 2812544,
"_id_1_uid_1" : 3834544
},
"avgObjSize" : 32,
"nindexes" : 2,
"nchunks" : 3,
"shards" : {
"shard0000" : {
"ns" : "fensme.users",
"count" : 18307,
"size" : 585824,
"avgObjSize" : 32,
"storageSize" : 2793472,
"numExtents" : 5,
"nindexes" : 2,
"lastExtentSize" : 2097152,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 1226400,
"indexSizes" : {
"_id_" : 523264,
"_id_1_uid_1" : 703136
},
"ok" : 1
},
"shard0001" : {
"ns" : "fensme.users",
"count" : 81693,
"size" : 2614176,
"avgObjSize" : 32,
"storageSize" : 11182080,
"numExtents" : 6,
"nindexes" : 2,
"lastExtentSize" : 8388608,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 5404336,
"indexSizes" : {
"_id_" : 2281104,
"_id_1_uid_1" : 3123232
},
"ok" : 1
},
"shard0002" : {
"ns" : "fensme.users",
"count" : 0,
"size" : 0,
"storageSize" : 8192,
"numExtents" : 1,
"nindexes" : 2,
"lastExtentSize" : 8192,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 16352,
"indexSizes" : {
"_id_" : 8176,
"_id_1_uid_1" : 8176
},
"ok" : 1
}
},
"ok" : 1
}

Looking at the users collection: 18,307 documents on shard0000, 81,693 on shard0001, and nothing on shard0002. The distribution is quite uneven.

Connect to shard1, shard2, and shard3 in turn to see what each shard actually stores.
Connect to shard1:

mongo localhost:10001
MongoDB shell version: 2.4.3
connecting to: localhost:10001/test
> show dbs
fensme 0.203125GB
local 0.078125GB
> use fensme
switched to db fensme
> show collections
system.indexes
users
> db.users.count()
18307

Connect to shard2: 81,693 records

mongo localhost:10002
MongoDB shell version: 2.4.3
connecting to: localhost:10002/test
> use fensme
switched to db fensme
> db.users.count()
81693

Connect to shard3: no data

mongo localhost:10003
MongoDB shell version: 2.4.3
connecting to: localhost:10003/test
> use fensme
switched to db fensme
> db.users.count()
0

Note: the data is distributed unevenly across the shards; the shard key should be redesigned.
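
One way to get a more even spread is a hashed shard key, available since MongoDB 2.4 (a sketch; fensme.users2 is a hypothetical collection):

mongos> sh.shardCollection("fensme.users2", { "uid" : "hashed" })

A hashed key trades range-query locality for uniform chunk distribution, which suits monotonically increasing keys like the ones inserted above.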

5. Remove the primary shard

Remove the primary shard shard1, localhost:10001

mongos> use admin
mongos> db.runCommand({"removeshard":"localhost:10001"})
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0000",
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [
"fensme"
],
"ok" : 1
}

The response indicates that shard0000 is the primary shard, so the databases on it must be moved by hand.

Check the shard status

mongos> db.runCommand({listshards:1})
{
"shards" : [
{
"_id" : "shard0001",
"host" : "localhost:10002"
},
{
"_id" : "shard0002",
"host" : "localhost:10003"
},
{
"_id" : "shard0000",
"draining" : true,
"host" : "localhost:10001"
}
],
"ok" : 1
}

draining means the migration is still in progress....

mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("51a8d3287034310ad2f6a94e")
}
shards:
{ "_id" : "shard0000", "draining" : true, "host" : "localhost:10001" }
{ "_id" : "shard0001", "host" : "localhost:10002" }
{ "_id" : "shard0002", "host" : "localhost:10003" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "fensme", "partitioned" : true, "primary" : "shard0000" }
fensme.users
shard key: { "_id" : 1, "uid" : 1 }
chunks:
shard0002 1
shard0001 2
{ "_id" : { "$minKey" : 1 }, "uid" : { "$minKey" : 1 } } -->> { "_id" : 0, "uid" : 0 } on shard0002 { "t" : 3, "i" : 0 }
{ "_id" : 0, "uid" : 0 } -->> { "_id" : 10929279, "uid" : 18307 } on : shard0001 { "t" : 4 "i" : 0 }
{ "_id" : 10929279, "uid" : 18307 } -->> { "_id" : { "$maxKey" : 1 }, "uid" : { "$maxKey" 1 } } on : shard0001 { "t" : 2, "i" : 0 }


Check how db.users is distributed across the shards

mongos> use fensme
mongos> db.users.stats()
{
"sharded" : true,
"ns" : "fensme.users",
"count" : 118307,
"numExtents" : 12,
"size" : 3785824,
"storageSize" : 13983744,
"totalIndexSize" : 8854608,
"indexSizes" : {
"_id_" : 3752784,
"_id_1_uid_1" : 5101824
},
"avgObjSize" : 32,
"nindexes" : 2,
"nchunks" : 3,
"shards" : {
"shard0000" : {
"ns" : "fensme.users",
"count" : 18307,
"size" : 585824,
"avgObjSize" : 32,
"storageSize" : 2793472,
"numExtents" : 5,
"nindexes" : 2,
"lastExtentSize" : 2097152,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 1226400,
"indexSizes" : {
"_id_" : 523264,
"_id_1_uid_1" : 703136
},
"ok" : 1
},
"shard0001" : {
"ns" : "fensme.users",
"count" : 100000,
"size" : 3200000,
"avgObjSize" : 32,
"storageSize" : 11182080,
"numExtents" : 6,
"nindexes" : 2,
"lastExtentSize" : 8388608,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 7611856,
"indexSizes" : {
"_id_" : 3221344,
"_id_1_uid_1" : 4390512
},
"ok" : 1
},
"shard0002" : {
"ns" : "fensme.users",
"count" : 0,
"size" : 0,
"storageSize" : 8192,
"numExtents" : 1,
"nindexes" : 2,
"lastExtentSize" : 8192,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 16352,
"indexSizes" : {
"_id_" : 8176,
"_id_1_uid_1" : 8176
},
"ok" : 1
}
},
"ok" : 1
}

We can see shard0000's data being migrated automatically to shard0001.

Connect to node2: all 100,000 records are here.

mongo localhost:10002
MongoDB shell version: 2.4.3
connecting to: localhost:10002/test
> use fensme
switched to db fensme
> db.users.count()
100000

6. Reset the primary shard

Set the primary shard to shard0002

mongos> use admin
switched to db admin
mongos> db.runCommand({"moveprimary":"fensme","to":"localhost:10003"})
{ "primary " : "shard0002:localhost:10003", "ok" : 1 }

Remove node1 (shard0000) again

mongos> use admin
mongos> db.runCommand({"removeshard":"localhost:10001"})
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0000",
"ok" : 1
}

The shard was removed successfully.

Check the shard information

mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("51a8e8b8bf1411b5099da477")
}
shards:
{ "_id" : "shard0001", "host" : "localhost:10002" }
{ "_id" : "shard0002", "host" : "localhost:10003" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "fensme", "partitioned" : true, "primary" : "shard0002" }
fensme.users
shard key: { "_id" : 1, "uid" : 1 }
chunks:
shard0002 1
shard0001 2
{ "_id" : { "$minKey" : 1 }, "uid" : { "$minKey" : 1 } } -->> { "_id" : 0, "uid" : 0 } on : shard0002 { "t" : 3, "i" : 0 }
{ "_id" : 0, "uid" : 0 } -->> { "_id" : 29236279, "uid" : 18307 } on : shard0001 { "t" : 4, "i" : 0 }
{ "_id" : 29236279, "uid" : 18307 } -->> { "_id" : { "$maxKey" : 1 }, "uid" : { "$maxKey" : 1 } } on : shard0001 { "t" : 2, "i" : 0 }

The fensme database's primary shard has been successfully changed to shard0002.

Look at the data distribution again:

mongos> use fensme
switched to db fensme
mongos> db.users.stats()
{
"sharded" : true,
"ns" : "fensme.users",
"count" : 100000,
"numExtents" : 7,
"size" : 3200000,
"storageSize" : 11190272,
"totalIndexSize" : 7628208,
"indexSizes" : {
"_id_" : 3229520,
"_id_1_uid_1" : 4398688
},
"avgObjSize" : 32,
"nindexes" : 2,
"nchunks" : 3,
"shards" : {
"shard0001" : {
"ns" : "fensme.users",
"count" : 100000,
"size" : 3200000,
"avgObjSize" : 32,
"storageSize" : 11182080,
"numExtents" : 6,
"nindexes" : 2,
"lastExtentSize" : 8388608,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 7611856,
"indexSizes" : {
"_id_" : 3221344,
"_id_1_uid_1" : 4390512
},
"ok" : 1
},
"shard0002" : {
"ns" : "fensme.users",
"count" : 0,
"size" : 0,
"storageSize" : 8192,
"numExtents" : 1,
"nindexes" : 2,
"lastExtentSize" : 8192,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 16352,
"indexSizes" : {
"_id_" : 8176,
"_id_1_uid_1" : 8176
},
"ok" : 1
}
},
"ok" : 1
}

shard0001 holds all 100,000 records, shard0002 has no data yet, and shard0000 no longer stores anything.

Experiment complete!!

 

Please credit the source when reposting:
http://blog.fens.me/mongodb-shard/


MongoDB Replica Set automatic replication

This post belongs to the MongoDB deployment experiment series. As a NoSQL database, MongoDB has kept gaining momentum over the past few years, and more and more companies are trying MongoDB as a replacement for their existing databases. MongoDB also performs quite well at clustering, sharding, and replication. I will introduce these topics through a variety of MongoDB deployment experiments.

About the author:

  • Zhang Dan (Conan), programmer: Java, R, PHP, Javascript
  • weibo:@Conan_Z
  • blog: http://blog.fens.me
  • email: bsspirit@gmail.com

Please credit the source when reposting:
http://blog.fens.me/mongodb-replica-set/


Part 2, MongoDB Replica Set automatic replication, in 7 sections:

  1. Initialize the data directories
  2. Start the Replica Set
  3. Simulate a PRIMARY failure; a SECONDARY takes over automatically
  4. Repair the failed node
  5. Restore the failed node and rejoin it as a SECONDARY
  6. Remove a Replica Set node
  7. Add a new Replica Set node

System environment:

Ubuntu 12.04 LTS 64bit Server

 

1. Initialize the data directories

~ pwd
/home/conan/dbs

~ mkdir node1 node2 node3
~ ls -l
drwxrwxr-x 2 conan conan 4096 May 31 14:21 node1
drwxrwxr-x 2 conan conan 4096 May 31 14:21 node2
drwxrwxr-x 2 conan conan 4096 May 31 14:21 node3

 

2. Start the Replica Set

Start node1, node2, and node3

mongod --dbpath /home/conan/dbs/node1 --port 10001 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node1.log
mongod --dbpath /home/conan/dbs/node2 --port 10002 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node2.log
mongod --dbpath /home/conan/dbs/node3 --port 10003 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node3.log

Initialize the replica set

~ mongo localhost:10001
MongoDB shell version: 2.4.3
connecting to: localhost:10001/test
> rs.initiate({_id:"blort",members:[
{_id:1,host:"localhost:10001"},
{_id:2,host:"localhost:10002"},
{_id:3,host:"localhost:10003"},
]})
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}

Check the log: node1 is elected PRIMARY, and node2 and node3 become the two SECONDARY nodes.

Fri May 31 14:26:44.728 [conn2] ******
Fri May 31 14:26:44.728 [conn2] replSet info saving a newer config version to local.system.replset
Fri May 31 14:26:44.741 [conn2] replSet saveConfigLocally done
Fri May 31 14:26:44.741 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
Fri May 31 14:26:44.741 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "blort", members: [ { _id: 1.0, host: "localhost:10001" }, { _id: 2.0, host: "localhost:10002" }, { _id: 3.0, host: "localhost:10003" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:646741 reslen:112 652ms
Fri May 31 14:26:53.682 [rsStart] replSet I am localhost:10001
Fri May 31 14:26:53.682 [rsStart] replSet STARTUP2
Fri May 31 14:26:53.683 [rsHealthPoll] replSet member localhost:10002 is up
Fri May 31 14:26:53.684 [rsHealthPoll] replSet member localhost:10003 is up
Fri May 31 14:26:54.285 [initandlisten] connection accepted from 127.0.0.1:46469 #3 (3 connections now open)
Fri May 31 14:26:54.683 [rsSync] replSet SECONDARY

Check the configuration with the mongo client.
Connect to node1:

~ mongo localhost:10001
MongoDB shell version: 2.4.3
connecting to: localhost:10001/test
blort:PRIMARY> rs.status()
{
"set" : "blort",
"date" : ISODate("2013-05-31T06:34:12Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "localhost:10001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 659,
"optime" : {
"t" : 1369981604,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:26:44Z"),
"self" : true
},
{
"_id" : 2,
"name" : "localhost:10002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 439,
"optime" : {
"t" : 1369981604,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:26:44Z"),
"lastHeartbeat" : ISODate("2013-05-31T06:34:11Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0,
"syncingTo" : "localhost:10001"
},
{
"_id" : 3,
"name" : "localhost:10003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 439,
"optime" : {
"t" : 1369981604,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:26:44Z"),
"lastHeartbeat" : ISODate("2013-05-31T06:34:11Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0,
"syncingTo" : "localhost:10001"
}
],
"ok" : 1
}

Connect to node2:

~ mongo localhost:10002
MongoDB shell version: 2.4.3
connecting to: localhost:10002/test
blort:SECONDARY>

Insert data on the primary:

~ mongo localhost:10001
MongoDB shell version: 2.4.3
connecting to: localhost:10001/test

blort:PRIMARY> show dbs
local 1.078125GB

blort:PRIMARY> use fensme
switched to db fensme

blort:PRIMARY> db.user.insert({uid:10001})
blort:PRIMARY> db.user.find()
{ "_id" : ObjectId("51a8454813321f05df62a8c8"), "uid" : 10001 }

Connect to node2 and query the data on the SECONDARY

~ mongo localhost:10002
MongoDB shell version: 2.4.3
connecting to: localhost:10002/test

blort:SECONDARY> show dbs
fensme 0.203125GB
local 1.078125GB

blort:SECONDARY> use fensme
switched to db fensme

blort:SECONDARY> show collections
Fri May 31 14:39:22.276 JavaScript execution failed: error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:L128

blort:SECONDARY> db.user.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

The error message says this is not a master/slave setup: by default, queries are not allowed on a SECONDARY.
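
As an aside, to allow reads on a SECONDARY, tell the shell that reading from a slave is acceptable:

blort:SECONDARY> rs.slaveOk()
blort:SECONDARY> db.user.find()
{ "_id" : ObjectId("51a8454813321f05df62a8c8"), "uid" : 10001 }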

3. Simulate a PRIMARY failure; a SECONDARY takes over automatically


ps -aux|grep mongod
conan 5110 0.4 2.1 1723616 44608 ? Sl 14:23 0:05 mongod --dbpath /home/conan/dbs/node1 --port 10001 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node1.log
conan 5173 0.4 2.2 1693812 45960 ? Sl 14:24 0:04 mongod --dbpath /home/conan/dbs/node2 --port 10002 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node2.log
conan 5218 0.4 2.2 1692800 45548 ? Sl 14:24 0:04 mongod --dbpath /home/conan/dbs/node3 --port 10003 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node3.log


kill -9 5110
ps -aux|grep mongod
conan 5173 0.4 2.2 1703048 46096 ? Sl 14:24 0:04 mongod --dbpath /home/conan/dbs/node2 --port 10002 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node2.log
conan 5218 0.4 2.2 1702036 45696 ? Sl 14:24 0:04 mongod --dbpath /home/conan/dbs/node3 --port 10003 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node3.log

Connect to node2 and, from the SECONDARY, check the re-election

~ mongo localhost:10002
MongoDB shell version: 2.4.3
connecting to: localhost:10002/test

blort:SECONDARY> rs.status()
{
"set" : "blort",
"date" : ISODate("2013-05-31T06:42:51Z"),
"myState" : 2,
"syncingTo" : "localhost:10003",
"members" : [
{
"_id" : 1,
"name" : "localhost:10001",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"t" : 1369982280,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:38:00Z"),
"lastHeartbeat" : ISODate("2013-05-31T06:42:51Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "localhost:10002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1088,
"optime" : {
"t" : 1369982280,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:38:00Z"),
"errmsg" : "syncing to: localhost:10003",
"self" : true
},
{
"_id" : 3,
"name" : "localhost:10003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 946,
"optime" : {
"t" : 1369982280,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:38:00Z"),
"lastHeartbeat" : ISODate("2013-05-31T06:42:51Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0,
"syncingTo" : "localhost:10001"
}
],
"ok" : 1
}

The result:

  • localhost:10001 : not reachable/healthy
  • localhost:10002: SECONDARY
  • localhost:10003: PRIMARY

Connect to node3, localhost:10003, and check the data

mongo localhost:10003
MongoDB shell version: 2.4.3
connecting to: localhost:10003/test

blort:PRIMARY> show dbs
fensme 0.203125GB
local 1.078125GB

blort:PRIMARY> use fensme
switched to db fensme

blort:PRIMARY> show collections
system.indexes
user

blort:PRIMARY> db.user.find()
{ “_id” : ObjectId(“51a8454813321f05df62a8c8”), “uid” : 10001 }

After node1 failed, node3 was elected PRIMARY, and the data can be queried on node3.

4. Repair the failed node

Restarting node1 directly fails:

~ mongod --dbpath /home/conan/dbs/node1 --port 10001 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node1_restart.log
MongoDB starting : pid=8544 port=10001 dbpath=/home/conan/dbs/node1 64-bit host=u1
Fri May 31 14:49:37.280 [initandlisten] db version v2.4.3
Fri May 31 14:49:37.280 [initandlisten] git version: fe1743177a5ea03e91e0052fb5e2cb2945f6d95f
Fri May 31 14:49:37.280 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
Fri May 31 14:49:37.280 [initandlisten] allocator: tcmalloc
Fri May 31 14:49:37.280 [initandlisten] options: { dbpath: "/home/conan/dbs/node1", fork: true, logpath: "/home/conan/dbs/node1_restart.log", nojournal: true, port: 10001, replSet: "blort" }
**************
Unclean shutdown detected.
Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
*************
Fri May 31 14:49:37.281 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
Fri May 31 14:49:37.281 dbexit:
Fri May 31 14:49:37.281 [initandlisten] shutdown: going to close listening sockets...
Fri May 31 14:49:37.281 [initandlisten] shutdown: going to flush diaglog...
Fri May 31 14:49:37.281 [initandlisten] shutdown: going to close sockets...
Fri May 31 14:49:37.281 [initandlisten] shutdown: waiting for fs preallocator...
Fri May 31 14:49:37.281 [initandlisten] shutdown: closing all files...
Fri May 31 14:49:37.281 [initandlisten] closeAllFiles() finished
Fri May 31 14:49:37.281 dbexit: really exiting now

5. Restore the failed node and rejoin it as a SECONDARY


mkdir /home/conan/dbs/repair/
mongod --dbpath /home/conan/dbs/node1 --port 10001 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node1_restart_repair.log --repairpath /home/conan/dbs/repair/
about to fork child process, waiting until server is ready for connections.
forked process: 13736
all output going to: /home/conan/dbs/node1_restart_repair.log
child process started successfully, parent exiting

The repair succeeded.
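
For the record, the more common recipe for the "old lock file" error above is to remove the stale lock file and run an explicit repair (an alternative sketch, not the path taken here):

~ rm /home/conan/dbs/node1/mongod.lock
~ mongod --dbpath /home/conan/dbs/node1 --repair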

Reconnect to node1 and check the node status

~ mongo localhost:10001
MongoDB shell version: 2.4.3
connecting to: localhost:10001/test

blort:SECONDARY> rs.status()
{
"set" : "blort",
"date" : ISODate("2013-05-31T07:04:34Z"),
"myState" : 2,
"syncingTo" : "localhost:10003",
"members" : [
{
"_id" : 1,
"name" : "localhost:10001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 74,
"optime" : {
"t" : 1369982280,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:38:00Z"),
"errmsg" : "syncing to: localhost:10003",
"self" : true
},
{
"_id" : 2,
"name" : "localhost:10002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 74,
"optime" : {
"t" : 1369982280,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:38:00Z"),
"lastHeartbeat" : ISODate("2013-05-31T07:04:33Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0,
"syncingTo" : "localhost:10003"
},
{
"_id" : 3,
"name" : "localhost:10003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 74,
"optime" : {
"t" : 1369982280,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T06:38:00Z"),
"lastHeartbeat" : ISODate("2013-05-31T07:04:33Z"),
"lastHeartbeatRecv" : ISODate("2013-05-31T07:04:34Z"),
"pingMs" : 0
}
],
"ok" : 1
}

The result:

  • localhost:10001: SECONDARY
  • localhost:10002: SECONDARY
  • localhost:10003: PRIMARY

Node node1 has been restored.

6. Remove a Replica Set node

Remove node1

blort:PRIMARY> rs.remove("localhost:10001")
Fri May 31 15:40:21.977 DBClientCursor::init call() failed
Fri May 31 15:40:21.978 JavaScript execution failed: Error: error doing query: failed at src/mongo/shell/query.js:L78
Fri May 31 15:40:21.979 trying reconnect to localhost:10003
Fri May 31 15:40:21.980 reconnect localhost:10003 ok
rs.status()
{
"set" : "blort",
"date" : ISODate("2013-05-31T07:40:23Z"),
"myState" : 1,
"members" : [
{
"_id" : 2,
"name" : "localhost:10002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2,
"optime" : {
"t" : 1369986021,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T07:40:21Z"),
"lastHeartbeat" : ISODate("2013-05-31T07:40:21Z"),
"lastHeartbeatRecv" : ISODate("2013-05-31T07:40:23Z"),
"pingMs" : 2,
"lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: localhost:10003",
"syncingTo" : "localhost:10003"
},
{
"_id" : 3,
"name" : "localhost:10003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4539,
"optime" : {
"t" : 1369986021,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T07:40:21Z"),
"self" : true
}
],
"ok" : 1
}

Clear out node1's data files.

rm -rf node1*
ls -l
drwxrwxr-x 3 conan conan 4096 May 31 14:38 node2/
-rw-rw-r-- 1 conan conan 509277 May 31 15:42 node2.log
drwxrwxr-x 3 conan conan 4096 May 31 14:38 node3/
-rw-rw-r-- 1 conan conan 515918 May 31 15:42 node3.log
drwxrwxr-x 2 conan conan 4096 May 31 15:02 repair/

7. Add a new Replica Set node


mkdir node1
mongod --dbpath /home/conan/dbs/node1 --port 10001 --replSet blort --nojournal --fork --logpath /home/conan/dbs/node1.log
about to fork child process, waiting until server is ready for connections.
forked process: 15145
all output going to: /home/conan/dbs/node1.log
child process started successfully, parent exiting

Add the new node1 node:

rs.add("localhost:10001")
blort:PRIMARY> rs.status()
{
"set" : "blort",
"date" : ISODate("2013-05-31T07:48:28Z"),
"myState" : 1,
"members" : [
{
"_id" : 2,
"name" : "localhost:10002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 487,
"optime" : {
"t" : 1369986467,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T07:47:47Z"),
"lastHeartbeat" : ISODate("2013-05-31T07:48:26Z"),
"lastHeartbeatRecv" : ISODate("2013-05-31T07:48:27Z"),
"pingMs" : 0,
"syncingTo" : "localhost:10003"
},
{
"_id" : 3,
"name" : "localhost:10003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 5024,
"optime" : {
"t" : 1369986467,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T07:47:47Z"),
"self" : true
},
{
"_id" : 4,
"name" : "localhost:10001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 41,
"optime" : {
"t" : 1369986467,
"i" : 1
},
"optimeDate" : ISODate("2013-05-31T07:47:47Z"),
"lastHeartbeat" : ISODate("2013-05-31T07:48:27Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0,
"syncingTo" : "localhost:10003"
}
],
"ok" : 1
}

The newly added localhost:10001 joins as a SECONDARY.

Experiment complete!

As a failover mechanism, Replica Set works quite well.

 

Please credit the source when reposting:
http://blog.fens.me/mongodb-replica-set/
