Building a 3-node Postgres-XL cluster (including a GTM standby) on RHEL 7 KVM virtual machines

The Postgres-XL experiments below were run on my own laptop under RHEL 7.2 x86_64, using KVM. Six virtual machines are involved:

one openfiler 2.99 VM publishing shared storage, one GTM master, one GTM slave, and three VMs each running gtm_proxy/coordinator/datanode. Apart from the openfiler VM, the other five were initialized from a minimal RHEL 7.2 x86_64 install and each has two NICs: one on 192.168.122.* for service traffic, and one on 10.10.10.* for accessing the shared storage published by openfiler. The Postgres-XL service plan is as follows.

Service name  Role         IP               Port   Data directory           Pooler port
gtm_mast      gtm master   192.168.122.179  20001  /pgdata/gtm/data         -
gtm_slav      gtm slave    192.168.122.189  20001  /pgdata/gtm/data         -
gtm_pxy01     gtm proxy    192.168.122.171  20001  /pgdata/gtm_pxy01/data   -
gtm_pxy02     gtm proxy    192.168.122.172  20001  /pgdata/gtm_pxy02/data   -
gtm_pxy03     gtm proxy    192.168.122.173  20001  /pgdata/gtm_pxy03/data   -
coord01       coordinator  192.168.122.171  15432  /pgdata/coord01/data     40101
coord02       coordinator  192.168.122.172  15432  /pgdata/coord02/data     40102
coord03       coordinator  192.168.122.173  15432  /pgdata/coord03/data     40103
datan01       datanode     192.168.122.181  25431  /pgdata/datan01/data     40401
datan02       datanode     192.168.122.182  25432  /pgdata/datan02/data     40402
datan03       datanode     192.168.122.183  25433  /pgdata/datan03/data     40403

I. Virtual machine OS configuration

1. Hostname configuration

Set the hostname on each VM:

hostnamectl set-hostname rhel7pg171

Lay out /etc/hosts in a consistent format (three columns if a domain name is included, two columns otherwise).

As /etc/hosts shows, each VM's hostname follows the pattern rhel7pgxxx:

# cat /etc/hosts
127.0.0.1              localhost
192.168.122.1          station90
192.168.122.170        rhel7pg170
192.168.122.171        rhel7pg171
192.168.122.172        rhel7pg172
192.168.122.173        rhel7pg173
192.168.122.179        rhel7pg179
192.168.122.189        rhel7pg189
192.168.122.100        openfiler100
192.168.122.181        datan01
192.168.122.182        datan02
192.168.122.183        datan03

2. Security settings

On each VM, disable SELinux and turn off the firewall:

setenforce 0
sed -i.bak "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
systemctl disable firewalld.service
systemctl stop firewalld.service
iptables --flush

3. Local yum repository configuration

Mount the OS ISO from the local cdrom onto a local directory:

mkdir -p /mnt/iso
mount /dev/cdrom /mnt/iso
# add to fstab so the ISO is mounted automatically after the next reboot
echo "/dev/cdrom    /mnt/iso    iso9660    defaults    0 0" >> /etc/fstab

Create the local yum repositories:

vi /etc/yum.repos.d/base.repo
cat /etc/yum.repos.d/base.repo
[rhel7]
name=rhel7
baseurl=file:///mnt/iso
gpgcheck=0

[rhel7-HA]
name=rhel7-HA
baseurl=file:///mnt/iso/addons/HighAvailability
gpgcheck=0

[rhel7-RS]
name=rhel7-RS
baseurl=file:///mnt/iso/addons/ResilientStorage
gpgcheck=0

Refresh the yum metadata:

yum clean all
yum list
yum group list

4. Time synchronization configuration

Install the chrony package:

yum install chrony.x86_64 -y

Edit the configuration file: comment out the default servers and add a known-good or self-hosted time server. In this lab, the openfiler VM at 192.168.122.100 publishes an NTP time source.

vi /etc/chrony.conf
# server 0.rhel.pool.ntp.org iburst
# server 1.rhel.pool.ntp.org iburst
# server 2.rhel.pool.ntp.org iburst
# server 3.rhel.pool.ntp.org iburst
server 192.168.122.100 iburst

Restart the time synchronization service:

systemctl restart chronyd.service

Check the service status:

systemctl status chronyd.service

Enable the service at boot:

systemctl enable chronyd.service

List the time sources:

chronyc sources -v

Show the source statistics:

chronyc sourcestats -v

5. Reboot

Reboot all VMs so the hostname and SELinux changes take effect:

init 6

II. Postgres-XL software installation

1. Installing dependencies

General build dependencies:

yum install -y make mpfr libmpc cpp kernel-headers glibc-headers glibc-devel libgomp libstdc++-devel libquadmath libgfortran libgnat libgnat-devel libobjc gcc gcc-c++ libquadmath-devel gcc-gfortran gcc-gnat gcc-objc gcc-objc++ ncurses-devel readline readline-devel zlib-devel m4 flex bison mailcap

Perl support:

yum install -y perl \
perl-Carp \
perl-constant \
perl-Encode \
perl-Exporter \
perl-File-Path \
perl-File-Temp \
perl-Filter \
perl-Getopt-Long \
perl-HTTP-Tiny \
perl-libs \
perl-macros \
perl-parent \
perl-PathTools \
perl-Pod-Escapes \
perl-podlators \
perl-Pod-Perldoc \
perl-Pod-Simple \
perl-Pod-Usage \
perl-Scalar-List-Utils \
perl-Socket \
perl-Storable \
perl-Text-ParseWords \
perl-threads \
perl-threads-shared \
perl-Time-HiRes \
perl-Time-Local

2. Installing the Postgres-XL core

gunzip postgres-xl-9.5r1.4.tar.gz
tar -xvf postgres-xl-9.5r1.4.tar
cd postgres-xl-9.5r1.4
./configure
gmake
gmake install

3. Installing the pgxc_ctl utility

If you want pgxc_ctl's configuration-file backup option to work, the source needs a small patch. In postgres-xl-9.5r1.4/contrib/pgxc_ctl, inside do_command.c's static void init_all(void), a call to doConfigBackup(); must be inserted immediately before the init_gtm_master(true); call on line 524 (the second line of the function body); then rebuild with make && make install. Without this patch, the configBackup=y feature does not work during pgxc_ctl init all.
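The one-line patch can be scripted instead of edited by hand. The sketch below demonstrates the sed edit on a stub of init_all() (the stub file and path are illustrative); apply the same sed command to the real do_command.c before rebuilding:

```shell
# Demonstrate the patch on a stub of init_all(); run the same sed against
# postgres-xl-9.5r1.4/contrib/pgxc_ctl/do_command.c before "make && make install".
cat > /tmp/do_command_stub.c <<'EOF'
static void init_all(void)
{
    doAll = true;
    init_gtm_master(true);
}
EOF
# Insert doConfigBackup(); right before the init_gtm_master(true); call,
# preserving the indentation of the original line (GNU sed syntax).
sed -i 's/^\(\s*\)init_gtm_master(true);/\1doConfigBackup();\n\1init_gtm_master(true);/' /tmp/do_command_stub.c
grep -n 'doConfigBackup();' /tmp/do_command_stub.c
```

Verify with grep (as above) that the call landed directly before init_gtm_master(true); before running make.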

cd postgres-xl-9.5r1.4/contrib/pgxc_ctl
vi do_command.c
make
make install

4. User environment configuration

The environment variables here must go into .bashrc, because pgxc_ctl runs commands on the remote servers over passwordless ssh, and in that mode the shell reads .bashrc but not .bash_profile. If PATH and the other variables are not in .bashrc, the later cluster initialization (init all) fails with "command not found" errors.

/usr/sbin/groupadd -g 2001 postgres 
/usr/sbin/useradd -u 2001 -g postgres postgres 
echo "postgres_passwd" | passwd --stdin postgres
echo "export PGHOME=/usr/local/pgsql" >> /home/postgres/.bashrc
echo 'export LD_LIBRARY_PATH=$PGHOME/lib' >> /home/postgres/.bashrc
echo 'export PG_CONFIG=$PGHOME/bin/pg_config' >> /home/postgres/.bashrc
echo 'export pg_config=$PGHOME/bin/pg_config' >> /home/postgres/.bashrc
echo 'export PATH=$PATH:$PGHOME/bin' >> /home/postgres/.bashrc

5. Registering the shared libraries

source /home/postgres/.bashrc
echo "$PGHOME/lib" >> /etc/ld.so.conf
/sbin/ldconfig
cat /etc/ld.so.conf

III. Postgres-XL initialization

1. Passwordless ssh configuration

Set up mutual ssh trust for the postgres user on every node.
There are many ways to configure ssh trust; see for example 《配置SSH互信》
http://blog.163.com/cao_jfeng...
I used Oracle's sshUserSetup.sh script:

./sshUserSetup.sh -hosts "rhel7pg171 rhel7pg172 rhel7pg173 rhel7pg179 rhel7pg189" -user postgres -advanced -noPromptPassphrase -exverify

Test that ssh works between all nodes.

2. Creating the PGDATA directories

On the three combined gtm_proxy/coordinator/datanode hosts:

mkdir -p /pgdata/datan01
mkdir -p /pgdata/datan02
mkdir -p /pgdata/datan03
mkdir -p /pgdata/coord01
mkdir -p /pgdata/coord02
mkdir -p /pgdata/coord03
mkdir -p /pgdata/gtm_pxy01
mkdir -p /pgdata/gtm_pxy02
mkdir -p /pgdata/gtm_pxy03

On the GTM master and GTM slave:

mkdir -p /pgdata/gtm

On all nodes:

chown -R postgres:postgres /pgdata

3. Extra preparation on the datanodes

Use openfiler to publish three 3 GB shared disks to datan01, datan02 and datan03,
then run on each of those hosts:

systemctl enable iscsi
iscsiadm -m discovery -t sendtargets -p 10.10.10.100
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.bf1f466b7eef -p 10.10.10.100 -l

On one of the datanode hosts, partition each shared disk (a single partition is enough) and format it with XFS. Note that tune2fs applies only to ext filesystems, so it is neither needed nor usable on these XFS partitions.

fdisk /dev/sda
fdisk /dev/sdb
fdisk /dev/sdc
partprobe /dev/sda
partprobe /dev/sdb
partprobe /dev/sdc
mkfs.xfs /dev/sda1
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1

Reboot all datanode hosts so they re-scan the disks.

Mount test:

mount /dev/sda1 /pgdata/datan01/
mount /dev/sdb1 /pgdata/datan02/
mount /dev/sdc1 /pgdata/datan03/
umount  /dev/sda1
umount  /dev/sdb1
umount  /dev/sdc1

Add each datanode's temporary IP as an alias interface and mount its filesystem (shown here for the third host, datan03):

cd /etc/sysconfig/network-scripts/
cp -rp ifcfg-eth0 ifcfg-eth0:1
vi ifcfg-eth0:1
systemctl restart network
mount /dev/sdc1 /pgdata/datan03/
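The contents of the alias-interface file are not shown above; here is a minimal sketch for datan01, with IPADDR taken from the service plan table (an assumption; adjust the address to .182/.183 on the other two hosts):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0:1 -- example for datan01
DEVICE=eth0:1
BOOTPROTO=static
IPADDR=192.168.122.181
NETMASK=255.255.255.0
ONBOOT=yes
```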

On all nodes:

chown -R postgres:postgres /pgdata

4. Writing the pgxc_ctl configuration file

On gtm_mast:

su - postgres
pgxc_ctl
PGXC prepare
PGXC q
cd pgxc_ctl
vi pgxc_ctl.conf

The finished configuration file (template commentary trimmed to the effective settings):

#!/usr/bin/env bash
#
# Postgres-XC configuration file for the pgxc_ctl utility.
# Specify it with pgxc_ctl's -c option; default is $PGXC_CTL_HOME/pgxc_ctl.org.
# This is a bash script, so you can add anything convenient for configuring
# your Postgres-XC cluster.

pgxcInstallDir=/usr/local/pgsql     # needed only for the "deploy" command

#---- OVERALL ------------------------------------------------------------------
pgxcOwner=postgres          # owner of the Postgres-XC database cluster; used both
                            # as the OS user and as the database superuser
pgxcUser=$pgxcOwner         # OS user of the Postgres-XC owner
tmpDir=/tmp                 # temporary dir used on the XC servers
localTmpDir=$tmpDir         # temporary dir used here locally
configBackup=y              # back up this config file
configBackupHost=192.168.122.189
configBackupDir=/home/postgres/pgxc_ctl
configBackupFile=pgxc_ctl.conf      # must be re-synced whenever the original changes

#---- GTM Master ---------------------------------------------------------------
gtmName=gtm_mast
gtmMasterServer=192.168.122.179
gtmMasterPort=20001
gtmMasterDir=/pgdata/gtm/data
gtmExtraConfig=none                 # added to gtm.conf for both master and slave
gtmMasterSpecificExtraConfig=none   # added to the master's gtm.conf only

#---- GTM Slave ----------------------------------------------------------------
gtmSlave=y                  # y configures a GTM slave
gtmSlaveName=gtm_slav
gtmSlaveServer=192.168.122.189
gtmSlavePort=20001
gtmSlaveDir=/pgdata/gtm/data
gtmSlaveSpecificExtraConfig=none

#---- GTM Proxy ----------------------------------------------------------------
gtmProxyDir=/pgdata/gtm_pxy
gtmProxy=y                  # mandatory when a GTM slave is configured
gtmProxyNames=(gtm_pxy01 gtm_pxy02 gtm_pxy03)
gtmProxyServers=(192.168.122.171 192.168.122.172 192.168.122.173)
gtmProxyPorts=(20001 20001 20001)
gtmProxyDirs=($gtmProxyDir'01/data' $gtmProxyDir'02/data' $gtmProxyDir'03/data')
gtmPxyExtraConfig=none
gtmPxySpecificExtraConfig=(none none none)

#---- Coordinators -------------------------------------------------------------
coordMasterDir=/pgdata/coord
coordNames=(coord01 coord02 coord03)
coordPorts=(15432 15432 15432)
poolerPorts=(40101 40102 40103)
coordPgHbaEntries=(0.0.0.0/0)       # expands to "host all all 0.0.0.0/0 trust"
coordMasterServers=(192.168.122.171 192.168.122.172 192.168.122.173)
coordMasterDirs=($coordMasterDir'01/data' $coordMasterDir'02/data' $coordMasterDir'03/data')
coordMaxWALsernder=0                # max_wal_senders; zero because no coordinator slaves
coordMaxWALSenders=(0 0 0)
coordSlave=n
coordExtraConfig=coordExtraConfig   # extra lines added to every coordinator's postgresql.conf
cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF
coordSpecificExtraConfig=(none none none)
coordExtraPgHba=none
coordSpecificExtraPgHba=(none none none)

#---- Datanodes ----------------------------------------------------------------
datanodeMasterDir=/pgdata/datan
primaryDatanode=none                # ALTER NODE against the primary node is currently
                                    # problematic in XC, so no primary is declared
datanodeNames=(datan01 datan02 datan03)
datanodePorts=(25431 25432 25433)
datanodePoolerPorts=(40401 40402 40403)
datanodePgHbaEntries=(0.0.0.0/0)
datanodeMasterServers=(192.168.122.181 192.168.122.182 192.168.122.183)
datanodeMasterDirs=($datanodeMasterDir'01/data' $datanodeMasterDir'02/data' $datanodeMasterDir'03/data')
datanodeMaxWalSender=0              # max_wal_senders; zero because no datanode slaves
datanodeMaxWALSenders=(0 0 0)
datanodeSlave=n
datanodeExtraConfig=none
datanodeSpecificExtraConfig=(none none none)
datanodeExtraPgHba=none
datanodeSpecificExtraPgHba=(none none none)
datanodeAdditionalSlaves=n

#---- WAL archives -------------------------------------------------------------
walArchive=n

5. Initializing through pgxc_ctl

On gtm_mast, run pgxc_ctl init all as the postgres user. The output (repeated per-node initdb output shown once):

# su - postgres
$ pgxc_ctl init all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************
Current directory: /home/postgres/pgxc_ctl
pgxc_ctl.conf                                 100%   17KB  17.3KB/s   00:00
Initialize GTM master
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm/data ... ok
creating configuration files ... ok
creating control file ... ok
Success.
waiting for server to shut down.... done
server stopped
Done.
Start GTM master
server starting
Initialize GTM slave
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm/data ... ok
creating configuration files ... ok
creating control file ... ok
Success.
Done.
Start GTM slave
server starting
Done.
Initialize all the gtm proxies.
Initializing gtm proxy gtm_pxy01.
Initializing gtm proxy gtm_pxy02.
Initializing gtm proxy gtm_pxy03.
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm_pxy01/data ... ok
creating configuration files ... ok
Success.
[... same output for gtm_pxy02 and gtm_pxy03 ...]
Done.
Starting all the gtm proxies.
Starting gtm proxy gtm_pxy01.
Starting gtm proxy gtm_pxy02.
Starting gtm proxy gtm_pxy03.
server starting
server starting
server starting
Done.
Initialize all the coordinator masters.
Initialize coordinator master coord01.
Initialize coordinator master coord02.
Initialize coordinator master coord03.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/coord01/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/coord01/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
[... same initdb output for coord02 and coord03, with their own data directories ...]
Done.
Starting coordinator master.
Starting coordinator master coord01
Starting coordinator master coord02
Starting coordinator master coord03
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
Done.
Initialize all the datanode masters.
Initialize the datanode master datan01.
Initialize the datanode master datan02.
Initialize the datanode master datan03.
[... initdb output for datan01, datan02 and datan03 matches the coordinator output above, apart from the data directories ...]
okWARNING: enabling "trust" authentication for local connectionsYou can change this by editing pg_hba.conf or using the option -A, or--auth-local and --auth-host, the next time you run initdb.Success.Done.Starting all the datanode masters.Starting datanode master datan01.Starting datanode master datan02.Starting datanode master datan03.LOG:  redirecting log output to logging collector processHINT:  Future log output will appear in directory "pg_log".LOG:  redirecting log output to logging collector processHINT:  Future log output will appear in directory "pg_log".LOG:  redirecting log output to logging collector processHINT:  Future log output will appear in directory "pg_log".Done.ALTER NODE coord01 WITH (HOST='192.168.122.171', PORT=15432);ALTER NODECREATE NODE coord02 WITH (TYPE='coordinator', HOST='192.168.122.172', PORT=15432);CREATE NODECREATE NODE coord03 WITH (TYPE='coordinator', HOST='192.168.122.173', PORT=15432);CREATE NODECREATE NODE datan01 WITH (TYPE='datanode', HOST='192.168.122.181', PORT=25431);CREATE NODECREATE NODE datan02 WITH (TYPE='datanode', HOST='192.168.122.182', PORT=25432);CREATE NODECREATE NODE datan03 WITH (TYPE='datanode', HOST='192.168.122.183', PORT=25433);CREATE NODESELECT pgxc_pool_reload();pgxc_pool_reload ------------------t(1 row)CREATE NODE coord01 WITH (TYPE='coordinator', HOST='192.168.122.171', PORT=15432);CREATE NODEALTER NODE coord02 WITH (HOST='192.168.122.172', PORT=15432);ALTER NODECREATE NODE coord03 WITH (TYPE='coordinator', HOST='192.168.122.173', PORT=15432);CREATE NODECREATE NODE datan01 WITH (TYPE='datanode', HOST='192.168.122.181', PORT=25431);CREATE NODECREATE NODE datan02 WITH (TYPE='datanode', HOST='192.168.122.182', PORT=25432);CREATE NODECREATE NODE datan03 WITH (TYPE='datanode', HOST='192.168.122.183', PORT=25433);CREATE NODESELECT pgxc_pool_reload();pgxc_pool_reload ------------------t(1 row)CREATE NODE coord01 WITH (TYPE='coordinator', HOST='192.168.122.171', PORT=15432);CREATE NODECREATE NODE coord02 
WITH (TYPE='coordinator', HOST='192.168.122.172', PORT=15432);CREATE NODEALTER NODE coord03 WITH (HOST='192.168.122.173', PORT=15432);ALTER NODECREATE NODE datan01 WITH (TYPE='datanode', HOST='192.168.122.181', PORT=25431);CREATE NODECREATE NODE datan02 WITH (TYPE='datanode', HOST='192.168.122.182', PORT=25432);CREATE NODECREATE NODE datan03 WITH (TYPE='datanode', HOST='192.168.122.183', PORT=25433);CREATE NODESELECT pgxc_pool_reload();pgxc_pool_reload ------------------t(1 row)Done.EXECUTE DIRECT ON (datan01) 'CREATE NODE coord01 WITH (TYPE=''coordinator'', HOST=''192.168.122.171'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan01) 'CREATE NODE coord02 WITH (TYPE=''coordinator'', HOST=''192.168.122.172'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan01) 'CREATE NODE coord03 WITH (TYPE=''coordinator'', HOST=''192.168.122.173'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan01) 'ALTER NODE datan01 WITH (TYPE=''datanode'', HOST=''192.168.122.181'', PORT=25431)';EXECUTE DIRECTEXECUTE DIRECT ON (datan01) 'CREATE NODE datan02 WITH (TYPE=''datanode'', HOST=''192.168.122.182'', PORT=25432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan01) 'CREATE NODE datan03 WITH (TYPE=''datanode'', HOST=''192.168.122.183'', PORT=25433)';EXECUTE DIRECTEXECUTE DIRECT ON (datan01) 'SELECT pgxc_pool_reload()';pgxc_pool_reload ------------------t(1 row)EXECUTE DIRECT ON (datan02) 'CREATE NODE coord01 WITH (TYPE=''coordinator'', HOST=''192.168.122.171'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan02) 'CREATE NODE coord02 WITH (TYPE=''coordinator'', HOST=''192.168.122.172'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan02) 'CREATE NODE coord03 WITH (TYPE=''coordinator'', HOST=''192.168.122.173'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan02) 'CREATE NODE datan01 WITH (TYPE=''datanode'', HOST=''192.168.122.181'', PORT=25431)';EXECUTE DIRECTEXECUTE DIRECT ON (datan02) 'ALTER NODE datan02 WITH (TYPE=''datanode'', HOST=''192.168.122.182'', PORT=25432)';EXECUTE 
DIRECTEXECUTE DIRECT ON (datan02) 'CREATE NODE datan03 WITH (TYPE=''datanode'', HOST=''192.168.122.183'', PORT=25433)';EXECUTE DIRECTEXECUTE DIRECT ON (datan02) 'SELECT pgxc_pool_reload()';pgxc_pool_reload ------------------t(1 row)EXECUTE DIRECT ON (datan03) 'CREATE NODE coord01 WITH (TYPE=''coordinator'', HOST=''192.168.122.171'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan03) 'CREATE NODE coord02 WITH (TYPE=''coordinator'', HOST=''192.168.122.172'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan03) 'CREATE NODE coord03 WITH (TYPE=''coordinator'', HOST=''192.168.122.173'', PORT=15432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan03) 'CREATE NODE datan01 WITH (TYPE=''datanode'', HOST=''192.168.122.181'', PORT=25431)';EXECUTE DIRECTEXECUTE DIRECT ON (datan03) 'CREATE NODE datan02 WITH (TYPE=''datanode'', HOST=''192.168.122.182'', PORT=25432)';EXECUTE DIRECTEXECUTE DIRECT ON (datan03) 'ALTER NODE datan03 WITH (TYPE=''datanode'', HOST=''192.168.122.183'', PORT=25433)';EXECUTE DIRECTEXECUTE DIRECT ON (datan03) 'SELECT pgxc_pool_reload()';pgxc_pool_reload ------------------t(1 row)Done.
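The repetitive CREATE NODE / ALTER NODE statements in the log above follow one pattern: on each node, its own entry is an ALTER NODE (the node already knows itself) and every other node is registered with CREATE NODE, followed by a pooler reload. As an illustration only (this is not part of pgxc_ctl, and gen_node_sql is a name made up here), the pattern can be sketched in POSIX sh using the node list from the service plan at the top of this article:

```shell
# Generate the node-registration SQL for one member of the cluster.
# The node's own entry becomes ALTER NODE; all others are CREATE NODE.
gen_node_sql() {
    self="$1"
    # name:type:host:port, taken from the service plan above
    printf '%s\n' \
        'coord01:coordinator:192.168.122.171:15432' \
        'coord02:coordinator:192.168.122.172:15432' \
        'coord03:coordinator:192.168.122.173:15432' \
        'datan01:datanode:192.168.122.181:25431' \
        'datan02:datanode:192.168.122.182:25432' \
        'datan03:datanode:192.168.122.183:25433' |
    while IFS=: read -r name type host port; do
        if [ "$name" = "$self" ]; then
            echo "ALTER NODE $name WITH (HOST='$host', PORT=$port);"
        else
            echo "CREATE NODE $name WITH (TYPE='$type', HOST='$host', PORT=$port);"
        fi
    done
    # make the pooler pick up the new node table
    echo 'SELECT pgxc_pool_reload();'
}

# the statement block that was run on coord01 in the log above:
gen_node_sql coord01
```

Running `gen_node_sql coord02` or `gen_node_sql datan01` reproduces the corresponding blocks in the log, with only the ALTER NODE line moving.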

Initialization is now complete. The status of all services can be checked with pgxc_ctl monitor all:

    # pgxc_ctl monitor all
    /bin/bash
    Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
    Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
    Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
    Finished reading configuration.
       ******** PGXC_CTL START ***************
    Current directory: /home/postgres/pgxc_ctl
    Running: gtm master
    Running: gtm slave
    Running: gtm proxy gtm_pxy01
    Running: gtm proxy gtm_pxy02
    Running: gtm proxy gtm_pxy03
    Running: coordinator master coord01
    Running: coordinator master coord02
    Running: coordinator master coord03
    Running: datanode master datan01
    Running: datanode master datan02
    Running: datanode master datan03
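For scripted health checks it can be handy to turn the monitor output into an exit code. A minimal sketch, assuming the "Running:" / "Not running:" line format shown above (check_monitor is a name made up here, not a pgxc_ctl command):

```shell
# Succeed only if no component is reported as "Not running:" on stdin.
# Typical use: pgxc_ctl monitor all | check_monitor
check_monitor() {
    ! grep -q '^Not running:'
}

# demo against a captured healthy-cluster snippet
check_monitor <<'EOF'
Running: gtm master
Running: gtm slave
Running: datanode master datan01
EOF
echo "all components running: exit $?"
```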

6. Change the gtm address on each datanode

So that a datanode can still register with a gtm proxy after it is later moved to another server, set the gtm address in each datanode's configuration file to the datanode's own service IP. Only the change on datan01 is shown here:

    # run on gtm_mast: stop the datan01 datanode service
    pgxc_ctl stop datanode datan01
    # run on datan01: edit the configuration file and change the gtm proxy address
    su - postgres
    cd /pgdata/datan01/data/
    vi postgresql.conf
    tail -n 3 postgresql.conf
    # output after the change; 192.168.122.181 is datan01's service IP, as listed in /etc/hosts
    gtm_host = '192.168.122.181'
    gtm_port = 20001
    # End of Addition
    # run on gtm_mast: start datan01 again
    pgxc_ctl start datanode datan01
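Since the same two-line edit has to be repeated on every datanode, the vi step can also be scripted. A hedged sketch (set_gtm_address is a made-up helper, not part of pgxc_ctl): it deletes any existing gtm_host/gtm_port lines and appends the new values, so it is safe to re-run.

```shell
# Point a datanode's postgresql.conf at a new gtm (proxy) address.
# Usage: set_gtm_address <path-to-postgresql.conf> <host> <port>
set_gtm_address() {
    conf="$1"; host="$2"; port="$3"
    # drop any previous gtm_host/gtm_port lines, keeping a .bak copy
    sed -i.bak '/^gtm_host[ =]/d; /^gtm_port[ =]/d' "$conf"
    # append the new address, matching the format shown above
    printf "gtm_host = '%s'\ngtm_port = %s\n" "$host" "$port" >> "$conf"
}

# e.g. on datan01 (stop the datanode first, as shown above):
# set_gtm_address /pgdata/datan01/data/postgresql.conf 192.168.122.181 20001
```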
