Planning

OS version: Debian 12.8

MongoDB version: 8.0.3

| Node IP | Port | Role | Data directory | Log file | Config file |
| --- | --- | --- | --- | --- | --- |
| 192.168.0.223 | 27023 | config | /mongo/data/config | /mongo/logs/config.log | /mongo/conf/config.conf |
| 192.168.0.224 | 27023 | config | /mongo/data/config | /mongo/logs/config.log | /mongo/conf/config.conf |
| 192.168.0.225 | 27023 | config | /mongo/data/config | /mongo/logs/config.log | /mongo/conf/config.conf |
| 192.168.0.223 | 27022 | router | - | /mongo/logs/router.log | /mongo/conf/router.conf |
| 192.168.0.224 | 27022 | router | - | /mongo/logs/router.log | /mongo/conf/router.conf |
| 192.168.0.225 | 27022 | router | - | /mongo/logs/router.log | /mongo/conf/router.conf |
| 192.168.0.223 | 27021 | shard | /mongo/data/shard | /mongo/logs/shard.log | /mongo/conf/shard.conf |
| 192.168.0.224 | 27021 | shard | /mongo/data/shard | /mongo/logs/shard.log | /mongo/conf/shard.conf |
| 192.168.0.225 | 27021 | shard | /mongo/data/shard | /mongo/logs/shard.log | /mongo/conf/shard.conf |

Three config servers, three routers, and three shards (each shard here consists of a single node, so it cannot provide high availability; HA can be achieved by configuring each shard as a replica set).

Deployment

Global setup

Run on all nodes:

mkdir /mongo/{data,logs,conf} -p
mkdir /mongo/data/{config,router,shard} -p
apt install curl wget -y
cd /usr/local/src
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-debian12-8.0.3.tgz
tar xvf mongodb-linux-x86_64-debian12-8.0.3.tgz
cd mongodb-linux-x86_64-debian12-8.0.3
cp bin/{mongod,mongos} /usr/local/bin/
# Enable boot-time startup via rc.local
cat <<EOF >/etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
EOF
chmod +x /etc/rc.local
systemctl enable --now rc-local
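
As an optional sanity check before continuing, confirm the copied binaries run:

# Verify the binaries are on PATH and executable
mongod --version
mongos --version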

Shard node configuration

On 192.168.0.223
cat > /mongo/conf/shard.conf<<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/shard.log"
    # On service restart, append new entries to the existing log
    logAppend: true
storage:
    # Directory where MongoDB stores its data
    dbPath: "/mongo/data/shard"
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongod process ID
    pidFilePath: "/mongo/data/shard.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.223
    # Port to bind; the default is 27017
    port: 27021
replication:
    # Name of the replica set
    replSetName: shard-1
sharding:
    # Role of this node in the sharded cluster (shardsvr = shard node)
    clusterRole: shardsvr
EOF

mongod -f /mongo/conf/shard.conf

echo '/usr/local/bin/mongod -f /mongo/conf/shard.conf' >> /etc/rc.local
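
Optionally verify the process came up (a minimal check; the same commands work on the other nodes with their own ports):

# Confirm mongod is listening on 27021 and the log looks healthy
ss -tlnp | grep 27021
tail -n 5 /mongo/logs/shard.log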
On 192.168.0.224
cat > /mongo/conf/shard.conf<<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/shard.log"
    # On service restart, append new entries to the existing log
    logAppend: true
storage:
    # Directory where MongoDB stores its data
    dbPath: "/mongo/data/shard"
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongod process ID
    pidFilePath: "/mongo/data/shard.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.224
    # Port to bind; the default is 27017
    port: 27021
replication:
    # Name of the replica set
    replSetName: shard-2
sharding:
    # Role of this node in the sharded cluster (shardsvr = shard node)
    clusterRole: shardsvr
EOF

mongod -f /mongo/conf/shard.conf

echo '/usr/local/bin/mongod -f /mongo/conf/shard.conf' >> /etc/rc.local
On 192.168.0.225
cat > /mongo/conf/shard.conf<<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/shard.log"
    # On service restart, append new entries to the existing log
    logAppend: true
storage:
    # Directory where MongoDB stores its data
    dbPath: "/mongo/data/shard"
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongod process ID
    pidFilePath: "/mongo/data/shard.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.225
    # Port to bind; the default is 27017
    port: 27021
replication:
    # Name of the replica set
    replSetName: shard-3
sharding:
    # Role of this node in the sharded cluster (shardsvr = shard node)
    clusterRole: shardsvr
EOF

mongod -f /mongo/conf/shard.conf

echo '/usr/local/bin/mongod -f /mongo/conf/shard.conf' >> /etc/rc.local

Shard replica set initialization

From any node, connect to each shard with mongosh and initialize it:

cd /usr/local/src
wget https://downloads.mongodb.com/compass/mongosh-2.3.3-linux-x64.tgz
tar xvf mongosh-2.3.3-linux-x64.tgz
cd mongosh-2.3.3-linux-x64/bin
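
Optionally, before the interactive sessions, a quick reachability sketch (run from this bin directory) to confirm all three shard instances answer:

# Ping each shard instance once
for ip in 192.168.0.223 192.168.0.224 192.168.0.225; do
    ./mongosh --quiet "$ip:27021" --eval 'db.runCommand({ ping: 1 })'
done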
Initialize 192.168.0.223
root@debian128-223:/usr/local/src/mongosh-2.3.3-linux-x64/bin# ./mongosh 192.168.0.223:27021
Current Mongosh Log ID:	6746bfd4c48a1f3f71c1c18b
Connecting to:		mongodb://192.168.0.223:27021/?directConnection=true&appName=mongosh+2.3.3
Using MongoDB:		8.0.3
Using Mongosh:		2.3.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

------
   The server generated these startup warnings when booting
   2024-11-27T01:19:48.312-05:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2024-11-27T01:19:49.271-05:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2024-11-27T01:19:49.271-05:00: You are running this process as the root user, which is not recommended
   2024-11-27T01:19:49.271-05:00: Soft rlimits for open file descriptors too low
   2024-11-27T01:19:49.271-05:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
   2024-11-27T01:19:49.271-05:00: We suggest setting the contents of sysfsFile to 0.
   2024-11-27T01:19:49.271-05:00: Your system has glibc support for rseq built in, which is not yet supported by tcmalloc-google and has critical performance implications. Please set the environment variable GLIBC_TUNABLES=glibc.pthread.rseq=0
   2024-11-27T01:19:49.271-05:00: We suggest setting swappiness to 0 or 1, as swapping can cause performance problems.
------

test> rs.initiate();
{
  info2: 'no configuration specified. Using a default configuration for the set',
  me: '192.168.0.223:27021',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732689890, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732689890, i: 1 })
}
shard-1 [direct: secondary] test> rs.status();
{
  set: 'shard-1',
  date: ISODate('2024-11-27T06:45:01.051Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-11-27T06:44:50.666Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-11-27T06:44:50.666Z'),
    lastDurableWallTime: ISODate('2024-11-27T06:44:50.666Z'),
    lastWrittenWallTime: ISODate('2024-11-27T06:44:50.666Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1732689890, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-11-27T06:44:50.514Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1732689890, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1732689890, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1732689890, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-11-27T06:44:50.551Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-11-27T06:44:50.595Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.0.223:27021',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1513,
      optime: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
      optimeDate: ISODate('2024-11-27T06:44:50.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1732689890, i: 16 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-11-27T06:44:50.000Z'),
      lastAppliedWallTime: ISODate('2024-11-27T06:44:50.666Z'),
      lastDurableWallTime: ISODate('2024-11-27T06:44:50.666Z'),
      lastWrittenWallTime: ISODate('2024-11-27T06:44:50.666Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1732689890, i: 2 }),
      electionDate: ISODate('2024-11-27T06:44:50.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732689890, i: 16 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732689890, i: 16 })
}
shard-1 [direct: primary] test> quit;

Initialize 192.168.0.224
root@debian128-223:/usr/local/src/mongosh-2.3.3-linux-x64/bin# ./mongosh 192.168.0.224:27021
Current Mongosh Log ID:	6746c030a08d6c8d44c1c18b
Connecting to:		mongodb://192.168.0.224:27021/?directConnection=true&appName=mongosh+2.3.3
Using MongoDB:		8.0.3
Using Mongosh:		2.3.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

------
   The server generated these startup warnings when booting
   2024-11-27T14:20:18.944+08:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2024-11-27T14:20:20.586+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2024-11-27T14:20:20.586+08:00: You are running this process as the root user, which is not recommended
   2024-11-27T14:20:20.586+08:00: Soft rlimits for open file descriptors too low
   2024-11-27T14:20:20.586+08:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
   2024-11-27T14:20:20.586+08:00: We suggest setting the contents of sysfsFile to 0.
   2024-11-27T14:20:20.586+08:00: Your system has glibc support for rseq built in, which is not yet supported by tcmalloc-google and has critical performance implications. Please set the environment variable GLIBC_TUNABLES=glibc.pthread.rseq=0
   2024-11-27T14:20:20.587+08:00: We suggest setting swappiness to 0 or 1, as swapping can cause performance problems.
------

test> rs.initiate();
{
  info2: 'no configuration specified. Using a default configuration for the set',
  me: '192.168.0.224:27021',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732689993, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732689993, i: 1 })
}
shard-2 [direct: secondary] test> rs.status();
{
  set: 'shard-2',
  date: ISODate('2024-11-27T06:47:22.232Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-11-27T06:47:15.309Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-11-27T06:47:15.309Z'),
    lastDurableWallTime: ISODate('2024-11-27T06:47:15.309Z'),
    lastWrittenWallTime: ISODate('2024-11-27T06:47:15.309Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1732689993, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-11-27T06:46:34.096Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1732689993, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1732689993, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1732689993, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-11-27T06:46:34.689Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-11-27T06:46:35.327Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.0.224:27021',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1624,
      optime: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-11-27T06:47:15.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1732690035, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-11-27T06:47:15.000Z'),
      lastAppliedWallTime: ISODate('2024-11-27T06:47:15.309Z'),
      lastDurableWallTime: ISODate('2024-11-27T06:47:15.309Z'),
      lastWrittenWallTime: ISODate('2024-11-27T06:47:15.309Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1732689994, i: 1 }),
      electionDate: ISODate('2024-11-27T06:46:34.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732690035, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732690035, i: 1 })
}
shard-2 [direct: primary] test> quit;

Initialize 192.168.0.225
root@debian128-223:/usr/local/src/mongosh-2.3.3-linux-x64/bin# ./mongosh 192.168.0.225:27021
Current Mongosh Log ID:	6746c0a9aa4732c48cc1c18b
Connecting to:		mongodb://192.168.0.225:27021/?directConnection=true&appName=mongosh+2.3.3
Using MongoDB:		8.0.3
Using Mongosh:		2.3.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

------
   The server generated these startup warnings when booting
   2024-11-27T01:32:19.862-05:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2024-11-27T01:32:20.613-05:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2024-11-27T01:32:20.613-05:00: You are running this process as the root user, which is not recommended
   2024-11-27T01:32:20.613-05:00: Soft rlimits for open file descriptors too low
   2024-11-27T01:32:20.613-05:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
   2024-11-27T01:32:20.613-05:00: We suggest setting the contents of sysfsFile to 0.
   2024-11-27T01:32:20.613-05:00: Your system has glibc support for rseq built in, which is not yet supported by tcmalloc-google and has critical performance implications. Please set the environment variable GLIBC_TUNABLES=glibc.pthread.rseq=0
   2024-11-27T01:32:20.613-05:00: We suggest setting swappiness to 0 or 1, as swapping can cause performance problems.
------

test> rs.initiate();
{
  info2: 'no configuration specified. Using a default configuration for the set',
  me: '192.168.0.225:27021',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732690097, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732690097, i: 1 })
}
shard-3 [direct: secondary] test> rs.status();
{
  set: 'shard-3',
  date: ISODate('2024-11-27T06:48:21.656Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-11-27T06:48:17.461Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-11-27T06:48:17.461Z'),
    lastDurableWallTime: ISODate('2024-11-27T06:48:17.461Z'),
    lastWrittenWallTime: ISODate('2024-11-27T06:48:17.461Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1732690097, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-11-27T06:48:17.303Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1732690097, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1732690097, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1732690097, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-11-27T06:48:17.343Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-11-27T06:48:17.388Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.0.225:27021',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 962,
      optime: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
      optimeDate: ISODate('2024-11-27T06:48:17.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1732690097, i: 16 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-11-27T06:48:17.000Z'),
      lastAppliedWallTime: ISODate('2024-11-27T06:48:17.461Z'),
      lastDurableWallTime: ISODate('2024-11-27T06:48:17.461Z'),
      lastWrittenWallTime: ISODate('2024-11-27T06:48:17.461Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1732690097, i: 2 }),
      electionDate: ISODate('2024-11-27T06:48:17.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732690097, i: 16 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732690097, i: 16 })
}
shard-3 [direct: primary] test> quit;

In this deployment each shard has only a single node and no secondaries, so there is no high availability. In production, configure each shard as a full replica set; for the replica set setup, see MongoDB 集群搭建-副本集[^1].
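
If you later grow a shard into a multi-member replica set, the outline is: start additional mongod instances with the same replSetName, then register them from that shard's primary. A minimal sketch for shard-1, where NEW_MEMBER_IP is a placeholder for the real host:

# Register an additional member from the shard-1 primary (NEW_MEMBER_IP is hypothetical)
./mongosh 192.168.0.223:27021 --eval 'rs.add("NEW_MEMBER_IP:27021")'
# Confirm the member list afterwards
./mongosh 192.168.0.223:27021 --eval 'rs.status().members.map(m => m.name + " " + m.stateStr)'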

Config node configuration

On 192.168.0.223
cat > /mongo/conf/config.conf <<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/config.log"
    # On service restart, append new entries to the existing log
    logAppend: true
storage:
    # Directory where MongoDB stores its data
    dbPath: "/mongo/data/config"
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongod process ID
    pidFilePath: "/mongo/data/config.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.223
    # Port to bind; the default is 27017
    port: 27023
replication:
    # Name of the replica set
    replSetName: configs
sharding:
    # Role of this node in the sharded cluster (configsvr = config server node)
    clusterRole: configsvr
EOF

mongod -f /mongo/conf/config.conf
echo '/usr/local/bin/mongod -f /mongo/conf/config.conf' >> /etc/rc.local
On 192.168.0.224
cat > /mongo/conf/config.conf <<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/config.log"
    # On service restart, append new entries to the existing log
    logAppend: true
storage:
    # Directory where MongoDB stores its data
    dbPath: "/mongo/data/config"
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongod process ID
    pidFilePath: "/mongo/data/config.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.224
    # Port to bind; the default is 27017
    port: 27023
replication:
    # Name of the replica set
    replSetName: configs
sharding:
    # Role of this node in the sharded cluster (configsvr = config server node)
    clusterRole: configsvr
EOF

mongod -f /mongo/conf/config.conf
echo '/usr/local/bin/mongod -f /mongo/conf/config.conf' >> /etc/rc.local
On 192.168.0.225
cat > /mongo/conf/config.conf <<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/config.log"
    # On service restart, append new entries to the existing log
    logAppend: true
storage:
    # Directory where MongoDB stores its data
    dbPath: "/mongo/data/config"
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongod process ID
    pidFilePath: "/mongo/data/config.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.225
    # Port to bind; the default is 27017
    port: 27023
replication:
    # Name of the replica set
    replSetName: configs
sharding:
    # Role of this node in the sharded cluster (configsvr = config server node)
    clusterRole: configsvr
EOF

mongod -f /mongo/conf/config.conf
echo '/usr/local/bin/mongod -f /mongo/conf/config.conf' >> /etc/rc.local

Config replica set initialization

cd /usr/local/src/mongosh-2.3.3-linux-x64/bin/
./mongosh 192.168.0.223:27023

# The config servers are likewise coordinated through the replica set mechanism
test> config = {
...     "_id": "configs",
...     "members": [
...         {
...             "_id": 0,
...             "host": "192.168.0.223:27023"
...         },
...         {
...             "_id": 1,
...             "host": "192.168.0.224:27023"
...         },
...         {
...             "_id": 2,
...             "host": "192.168.0.225:27023"
...         }
...     ]
... }
{
  _id: 'configs',
  members: [
    { _id: 0, host: '192.168.0.223:27023' },
    { _id: 1, host: '192.168.0.224:27023' },
    { _id: 2, host: '192.168.0.225:27023' }
  ]
}
test> rs.initiate(config)
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732694416, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732694416, i: 1 })
}
configs [direct: secondary] test> rs.status()
{
  set: 'configs',
  date: ISODate('2024-11-27T08:00:27.553Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  configsvr: true,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-11-27T08:00:27.367Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-11-27T08:00:27.367Z'),
    lastDurableWallTime: ISODate('2024-11-27T08:00:27.367Z'),
    lastWrittenWallTime: ISODate('2024-11-27T08:00:27.367Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1732694416, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-11-27T08:00:26.824Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-11-27T08:00:26.869Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-11-27T08:00:27.352Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.0.223:27023',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 827,
      optime: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
      optimeDate: ISODate('2024-11-27T08:00:27.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1732694427, i: 18 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-11-27T08:00:27.000Z'),
      lastAppliedWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      lastDurableWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      lastWrittenWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1732694426, i: 1 }),
      electionDate: ISODate('2024-11-27T08:00:26.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.0.224:27023',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11,
      optime: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-27T08:00:16.000Z'),
      optimeDurableDate: ISODate('2024-11-27T08:00:16.000Z'),
      optimeWrittenDate: ISODate('2024-11-27T08:00:16.000Z'),
      lastAppliedWallTime: ISODate('2024-11-27T08:00:16.319Z'),
      lastDurableWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      lastWrittenWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      lastHeartbeat: ISODate('2024-11-27T08:00:26.851Z'),
      lastHeartbeatRecv: ISODate('2024-11-27T08:00:27.350Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.0.225:27023',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11,
      optime: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 1732694416, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-27T08:00:16.000Z'),
      optimeDurableDate: ISODate('2024-11-27T08:00:16.000Z'),
      optimeWrittenDate: ISODate('2024-11-27T08:00:16.000Z'),
      lastAppliedWallTime: ISODate('2024-11-27T08:00:26.964Z'),
      lastDurableWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      lastWrittenWallTime: ISODate('2024-11-27T08:00:27.367Z'),
      lastHeartbeat: ISODate('2024-11-27T08:00:26.846Z'),
      lastHeartbeatRecv: ISODate('2024-11-27T08:00:27.345Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732694427, i: 18 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732694427, i: 18 })
}

Router node configuration

  • Router nodes have no data directory
  • Router nodes have no replication section
  • Router nodes are started via the mongos service
On 192.168.0.223
cat > /mongo/conf/router.conf <<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/router.log"
    # On service restart, append new entries to the existing log
    logAppend: true
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongos process ID
    pidFilePath: "/mongo/data/router.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.223
    # Port to bind; the default is 27017
    port: 27022
sharding:
    # Config server addresses (multiple nodes are comma-separated; only the first needs the replica set name prefix)
    configDB: configs/192.168.0.223:27023,192.168.0.224:27023,192.168.0.225:27023
EOF

mongos -f /mongo/conf/router.conf
echo '/usr/local/bin/mongos -f /mongo/conf/router.conf' >> /etc/rc.local
On 192.168.0.224
cat > /mongo/conf/router.conf <<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/router.log"
    # On service restart, append new entries to the existing log
    logAppend: true
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongos process ID
    pidFilePath: "/mongo/data/router.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.224
    # Port to bind; the default is 27017
    port: 27022
sharding:
    # Config server addresses (multiple nodes are comma-separated; only the first needs the replica set name prefix)
    configDB: configs/192.168.0.223:27023,192.168.0.224:27023,192.168.0.225:27023
EOF

mongos -f /mongo/conf/router.conf
echo '/usr/local/bin/mongos -f /mongo/conf/router.conf' >> /etc/rc.local
On 192.168.0.225
cat > /mongo/conf/router.conf <<EOF
# MongoDB logging settings
systemLog:
    # Write all log output to a file
    destination: file
    # Path of the file that records all log messages
    path: "/mongo/logs/router.log"
    # On service restart, append new entries to the existing log
    logAppend: true
processManagement:
    # Run the MongoDB service as a daemon (background process)
    fork: true
    # File that holds the mongos process ID
    pidFilePath: "/mongo/data/router.pid"
net:
    # IP the instance binds to; the default is localhost, replaced here with this host's IP
    bindIp: 192.168.0.225
    # Port to bind; the default is 27017
    port: 27022
sharding:
    # Config server addresses (multiple nodes are comma-separated; only the first needs the replica set name prefix)
    configDB: configs/192.168.0.223:27023,192.168.0.224:27023,192.168.0.225:27023
EOF

mongos -f /mongo/conf/router.conf
echo '/usr/local/bin/mongos -f /mongo/conf/router.conf' >> /etc/rc.local

Adding shard nodes

Connect to any one of the router nodes with mongosh and add the shards:

root@debian128-223:/usr/local/src/mongosh-2.3.3-linux-x64/bin# ./mongosh 192.168.0.223:27022
Current Mongosh Log ID:	6746d47395a99161c7c1c18b
Connecting to:		mongodb://192.168.0.223:27022/?directConnection=true&appName=mongosh+2.3.3
Using MongoDB:		8.0.3
Using Mongosh:		2.3.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

------
   The server generated these startup warnings when booting
   2024-11-27T03:09:59.547-05:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2024-11-27T03:09:59.547-05:00: You are running this process as the root user, which is not recommended
------

[direct: mongos] test> use admin;
switched to db admin
# If a shard has multiple members (a replica set), the format is sh.addShard("shard-1/IP:PORT,IP:PORT,IP:PORT")
[direct: mongos] admin> sh.addShard("shard-1/192.168.0.223:27021")
{
  shardAdded: 'shard-1',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732695455, i: 20 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732695455, i: 20 })
}
[direct: mongos] admin> sh.addShard("shard-2/192.168.0.224:27021")
{
  shardAdded: 'shard-2',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732695467, i: 24 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732695467, i: 18 })
}
[direct: mongos] admin> sh.addShard("shard-3/192.168.0.225:27021")
{
  shardAdded: 'shard-3',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732695473, i: 18 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1732695473, i: 18 })
}

[direct: mongos] admin> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('6746d19a8f352d650b2c466b') }
---
shards
[
  {
    _id: 'shard-1',
    host: 'shard-1/192.168.0.223:27021',
    state: 1,
    topologyTime: Timestamp({ t: 1732695455, i: 10 }),
    replSetConfigVersion: Long('-1')
  },
  {
    _id: 'shard-2',
    host: 'shard-2/192.168.0.224:27021',
    state: 1,
    topologyTime: Timestamp({ t: 1732695467, i: 9 }),
    replSetConfigVersion: Long('-1')
  },
  {
    _id: 'shard-3',
    host: 'shard-3/192.168.0.225:27021',
    state: 1,
    topologyTime: Timestamp({ t: 1732695473, i: 9 }),
    replSetConfigVersion: Long('-1')
  }
]
---
active mongoses
[ { '8.0.3': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
shardedDataDistribution
[
  {
    ns: 'config.system.sessions',
    shards: [
      {
        shardName: 'shard-1',
        numOrphanedDocs: 0,
        numOwnedDocuments: 1,
        ownedSizeBytes: 99,
        orphanedSizeBytes: 0
      }
    ]
  }
]
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard-1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard-1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  }
]

Shards only need to be added through any single router node:

Because all the router nodes are bound to the same config server replica set, a shard added through one router is persisted on the config servers; the other routers watch the config servers and automatically pull the updated configuration when it changes.
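
To see this in action (a quick check), query a different router and confirm all three shards are listed:

./mongosh --quiet 192.168.0.224:27022 --eval 'sh.status()'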

Shard rule configuration

Refer to "分片连接", section 3.3 "分片规则配置" (shard rule configuration); a minimal example follows below.
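
As an end-to-end sketch (the database `testdb` and collection `orders` are hypothetical names), enable sharding and shard a collection on a hashed key through any router:

# Enable sharding for the database, then shard a collection on a hashed _id
./mongosh 192.168.0.223:27022 --eval 'sh.enableSharding("testdb")'
./mongosh 192.168.0.223:27022 --eval 'sh.shardCollection("testdb.orders", { _id: "hashed" })'
# Insert some test documents, then inspect how they spread across the shards
./mongosh 192.168.0.223:27022 --eval 'for (let i = 0; i < 1000; i++) db.getSiblingDB("testdb").orders.insertOne({ n: i })'
./mongosh 192.168.0.223:27022 --eval 'db.getSiblingDB("testdb").orders.getShardDistribution()'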

Startup order notes

Adjust the order of the lines in /etc/rc.local so that services start in the order config nodes, shard nodes, then router nodes; otherwise the cluster can end up blocked at startup.
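
That is, the service lines appended in the sections above should end up at the tail of /etc/rc.local on each node in this order:

/usr/local/bin/mongod -f /mongo/conf/config.conf
/usr/local/bin/mongod -f /mongo/conf/shard.conf
/usr/local/bin/mongos -f /mongo/conf/router.conf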

Common commands

  • sh.addShard(): add a shard node to the sharded cluster.
  • sh.removeShard(): remove a shard node from the sharded cluster.
  • sh.enableSharding(): enable sharding for a database.
  • sh.shardCollection(): configure sharding for a collection (algorithm + shard key).
  • sh.status(): show the status and shard information of the sharded cluster.
  • sh.addTagRange(): add a tag range to chunks.
  • sh.removeTagRange(): remove a tag range from chunks.
  • sh.addShardTag(): add a tag to a shard.
  • sh.removeShardTag(): remove a tag from a shard.
  • sh.addTagForZone(): add a tag for a zone.
  • sh.removeTagFromZone(): remove a tag from a zone.
  • sh.setBalancerState(): enable or disable the cluster balancer (see the sketch after this list).
  • sh.waitForBalancer(): wait for the cluster balancer to finish its work.
  • sh.enableBalancing(): enable balancing for a collection.
  • sh.disableBalancing(): disable balancing for a collection.
  • sh.isBalancerRunning(): check whether the balancer is currently running.
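
A minimal balancer check/toggle sketch through any router:

# Check whether the balancer is running, then toggle it
./mongosh --quiet 192.168.0.223:27022 --eval 'sh.isBalancerRunning()'
./mongosh --quiet 192.168.0.223:27022 --eval 'sh.setBalancerState(false)'
./mongosh --quiet 192.168.0.223:27022 --eval 'sh.setBalancerState(true)'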
