Kubernetes Learning Path (4): Binary Deployment of the Node Components

K8S Node Deployment

  • 1. Deploying the kubelet

(1) Prepare the binary packages
[root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
[root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.130:/opt/kubernetes/bin/
(2) Create the role binding

When the kubelet starts, it sends a TLS bootstrap request to kube-apiserver, so the bootstrap token must be bound to the corresponding role; only then does the kubelet-bootstrap user have permission to create that request.

[root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
(3) Create the kubelet bootstrapping kubeconfig file and set the cluster parameters
[root@linux-node1 ~]# cd /usr/local/src/ssl
[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://192.168.56.110:6443 \
   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
(4) Set the client authentication parameters
[root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \
   --token=ad6d5bb607a186796d8861557df0d17f \
   --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
(5) Set the context parameters
[root@linux-node1 ssl]# kubectl config set-context default \
   --cluster=kubernetes \
   --user=kubelet-bootstrap \
   --kubeconfig=bootstrap.kubeconfig
Context "default" created.
(6) Select the default context
[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
[root@linux-node1 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
[root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.120:/opt/kubernetes/cfg
[root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.130:/opt/kubernetes/cfg
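The three copy commands above can be collapsed into a loop when more nodes are involved. A minimal sketch: the node IPs are the ones used in this walkthrough, and `echo` stands in for the real `scp` so the snippet dry-runs without the nodes being reachable.

```shell
# Node IPs from this walkthrough; echo replaces scp so the loop can be
# dry-run anywhere without SSH access to the nodes.
targets="192.168.56.120 192.168.56.130"
cmds=$(for node in $targets; do
  echo "scp bootstrap.kubeconfig $node:/opt/kubernetes/cfg/"
done)
echo "$cmds"
```

Dropping the `echo` in front of `scp` turns the dry run into the real distribution step.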
  • 2. Deploying the kubelet: set up CNI support

(1) Configure CNI
[root@linux-node2 ~]# mkdir -p /etc/cni/net.d
[root@linux-node2 ~]# vim /etc/cni/net.d/10-default.conf
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}
[root@linux-node3 ~]# mkdir -p /etc/cni/net.d
[root@linux-node2 ~]# scp /etc/cni/net.d/10-default.conf 192.168.56.130:/etc/cni/net.d/10-default.conf
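A malformed CNI config keeps the kubelet from setting up pod networking, so it is cheap insurance to validate the JSON before copying it to the nodes. A hedged sketch: it assumes `python3` is available for the JSON check, and writes to a temp directory rather than /etc/cni/net.d for illustration.

```shell
tmpdir=$(mktemp -d)
# Same content as the 10-default.conf created above.
cat > "$tmpdir/10-default.conf" <<'EOF'
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}
EOF
# json.load fails loudly on a syntax error; printing the delegate bridge
# doubles as a sanity check that the structure is what we expect.
bridge=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["delegate"]["bridge"])' "$tmpdir/10-default.conf")
echo "$bridge"
```

Run against the real /etc/cni/net.d/10-default.conf, this should print `docker0`; any JSON syntax error aborts with a traceback instead.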
(2) Create the kubelet data directory
[root@linux-node2 ~]# mkdir /var/lib/kubelet
[root@linux-node3 ~]# mkdir /var/lib/kubelet
(3) Create the kubelet service configuration
[root@linux-node2 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.56.120 \
  --hostname-override=192.168.56.120 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[root@linux-node3 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.56.130 \
  --hostname-override=192.168.56.130 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
(4) Start the kubelet
[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable kubelet
[root@linux-node2 ~]# systemctl start kubelet
[root@linux-node2 kubernetes]# systemctl status kubelet

[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable kubelet
[root@linux-node3 ~]# systemctl start kubelet
[root@linux-node3 kubernetes]# systemctl status kubelet

Checking the kubelet status reveals the following error: Failed to get system container stats for "/system.slice/kubelet.service": failed to... The kubelet startup arguments need to be adjusted.

Fix:
In /usr/lib/systemd/system/kubelet.service, add to the [Service] section:
Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
Then modify ExecStart by appending $KUBELET_MY_ARGS at the end.
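Instead of editing the unit file in place, the same Environment line can live in a systemd drop-in, which survives package upgrades. A sketch writing to a temp directory for illustration; on a real node the target would be /etc/systemd/system/kubelet.service.d/10-cgroups.conf (the file name is my own choice, not from the original), followed by `systemctl daemon-reload` and `systemctl restart kubelet`.

```shell
# Stands in for /etc/systemd/system/kubelet.service.d in this dry run.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/10-cgroups.conf" <<'EOF'
[Service]
Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
EOF
# Confirm the flags landed in the drop-in.
grep -c 'runtime-cgroups' "$dropin_dir/10-cgroups.conf"
```

Appending $KUBELET_MY_ARGS to ExecStart is still required, exactly as described above; the drop-in only supplies the variable.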

[root@linux-node2 system]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 16:33:17 CST; 16h ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 53223 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─53223 /opt/kubernetes/bin/kubelet --address=192.168.56.120 --hostname-override=192.168.56.120 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experiment...
Jun 01 08:51:09 linux-node2.example.com kubelet[53223]: E0601 08:51:09.355765   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:51:19 linux-node2.example.com kubelet[53223]: E0601 08:51:19.363906   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:51:29 linux-node2.example.com kubelet[53223]: E0601 08:51:29.385439   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:51:39 linux-node2.example.com kubelet[53223]: E0601 08:51:39.393790   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:51:49 linux-node2.example.com kubelet[53223]: E0601 08:51:49.401081   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:51:59 linux-node2.example.com kubelet[53223]: E0601 08:51:59.407863   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:52:09 linux-node2.example.com kubelet[53223]: E0601 08:52:09.415552   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:52:19 linux-node2.example.com kubelet[53223]: E0601 08:52:19.425998   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:52:29 linux-node2.example.com kubelet[53223]: E0601 08:52:29.443804   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Jun 01 08:52:39 linux-node2.example.com kubelet[53223]: E0601 08:52:39.450814   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Hint: Some lines were ellipsized, use -l to show in full.
(5) Check the CSR requests (note: run this on linux-node1)
[root@linux-node1 ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   1m        kubelet-bootstrap   Pending
node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   1m        kubelet-bootstrap   Pending
(6) Approve the kubelet TLS certificate requests
[root@linux-node1 ssl]#  kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io "node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U" approved
certificatesigningrequest.certificates.k8s.io "node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA" approved

[root@linux-node1 ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   2m        kubelet-bootstrap   Approved,Issued
node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   2m        kubelet-bootstrap   Approved,Issued

After approval, the nodes show up with a Ready status:
[root@linux-node1 ssl]# kubectl get node
NAME             STATUS    ROLES     AGE       VERSION
192.168.56.120   Ready     <none>    50m       v1.10.1
192.168.56.130   Ready     <none>    46m       v1.10.1
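The approve one-liner above is plain text filtering: grep keeps the Pending rows, awk prints column 1 (the CSR name), and xargs feeds those names to `kubectl certificate approve`. A sketch against canned `kubectl get csr`-shaped output, so it runs without a cluster; note the `NR>0` guard in the original awk is redundant, since every line already has NR ≥ 1.

```shell
# Canned output in the same column layout as `kubectl get csr`;
# the CSR names here are made up for illustration.
sample='NAME           AGE  REQUESTOR          CONDITION
node-csr-aaa   1m   kubelet-bootstrap  Pending
node-csr-bbb   1m   kubelet-bootstrap  Pending
node-csr-ccc   2m   kubelet-bootstrap  Approved,Issued'
# grep selects the Pending rows, awk extracts the name column.
pending=$(echo "$sample" | grep 'Pending' | awk '{print $1}')
echo "$pending"
```

Only the two Pending names survive the pipeline; the already-approved CSR is filtered out, which is why re-running the one-liner is harmless.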
  • 3. Deploying the Kubernetes Proxy

(1) Configure kube-proxy to use LVS
[root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
[root@linux-node3 ~]# yum install -y ipvsadm ipset conntrack
(2) Create the kube-proxy certificate signing request
[root@linux-node1 ~]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
(3) Generate the certificates
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
(4) Distribute the certificates to all Node machines
[root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.120:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.130:/opt/kubernetes/ssl/
(5) Create the kube-proxy kubeconfig file
[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://192.168.56.110:6443 \
   --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \
   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@linux-node1 ssl]# kubectl config set-context default \
   --cluster=kubernetes \
   --user=kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
(6) Distribute the kubeconfig file
[root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
[root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.120:/opt/kubernetes/cfg/
[root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.130:/opt/kubernetes/cfg/
(7) Create the kube-proxy service configuration
[root@linux-node1 ssl]# mkdir /var/lib/kube-proxy
[root@linux-node2 ssl]# mkdir /var/lib/kube-proxy
[root@linux-node3 ssl]# mkdir /var/lib/kube-proxy

[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=192.168.56.120 \
  --hostname-override=192.168.56.120 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.120:/usr/lib/systemd/system/kube-proxy.service
kube-proxy.service                                         100%  701   109.4KB/s   00:00    
[root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.130:/usr/lib/systemd/system/kube-proxy.service
kube-proxy.service                                         100%  701    34.9KB/s   00:00    
(8) Start the Kubernetes Proxy
[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable kube-proxy
[root@linux-node2 ~]# systemctl start kube-proxy
[root@linux-node2 ~]# systemctl status kube-proxy

[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable kube-proxy
[root@linux-node3 ~]# systemctl start kube-proxy
[root@linux-node3 ~]# systemctl status kube-proxy

Check the LVS state: an LVS virtual service has been created that forwards requests for 10.1.0.1:443 to 192.168.56.110:6443, where 6443 is the kube-apiserver port.
[root@linux-node2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.56.110:6443          Masq    1      0          0

[root@linux-node3 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.56.110:6443          Masq    1      0          0

If you installed the kubelet and kube-proxy services on both test machines, the following command checks their status:

[root@linux-node1 ssl]# kubectl get node
NAME             STATUS    ROLES     AGE       VERSION
192.168.56.120   Ready     <none>    22m       v1.10.1
192.168.56.130   Ready     <none>    3m        v1.10.1
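The IPVS rule above can also be checked programmatically rather than by eyeballing the table. A sketch using canned `ipvsadm -Ln` output (no live cluster assumed) to pull out the virtual service and its backend:

```shell
# Canned two-line excerpt in the same layout ipvsadm -Ln prints.
ipvs_out='TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.56.110:6443          Masq    1      0          0'
# Column 2 holds the address both on the service line and on the "->"
# real-server line, so awk can extract each with a pattern match.
svc=$(echo "$ipvs_out" | awk '/^TCP/{print $2}')
backend=$(echo "$ipvs_out" | awk '/->/{print $2}')
echo "service=$svc backend=$backend"
```

On a node, piping the real `ipvsadm -Ln` through the same awk patterns would confirm the 10.1.0.1:443 → 192.168.56.110:6443 mapping without reading the whole table.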

At this point the K8S cluster deployment is complete. Since K8S itself does not provide a pod network, a third-party network plugin is needed before Pods can be created; the next article covers Flannel as the network provider for K8S.

(9) Problem encountered: the kubelet fails to start, and kubectl get node reports: No resources found

[root@linux-node1 ssl]#  kubectl get node
No resources found.

[root@linux-node3 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Wed 2018-05-30 04:48:29 EDT; 1s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 16995 ExecStart=/opt/kubernetes/bin/kubelet --address=192.168.56.130 --hostname-override=192.168.56.130 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --cert-dir=/opt/kubernetes/ssl --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/kubernetes/bin/cni --cluster-dns=10.1.0.2 --cluster-domain=cluster.local. --hairpin-mode hairpin-veth --allow-privileged=true --fail-swap-on=false --logtostderr=true --v=2 --logtostderr=false --log-dir=/opt/kubernetes/log (code=exited, status=255)
 Main PID: 16995 (code=exited, status=255)
May 30 04:48:29 linux-node3.example.com systemd[1]: Unit kubelet.service entered failed state.
May 30 04:48:29 linux-node3.example.com systemd[1]: kubelet.service failed.
[root@linux-node3 ~]# tailf /var/log/messages
......
May 30 04:46:24 linux-node3 kubelet: F0530 04:46:24.134612   16207 server.go:233] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

The message says the kubelet's cgroup driver ("cgroupfs") differs from docker's cgroup driver ("systemd"). Inspect docker.service; the relevant option is `--exec-opt native.cgroupdriver=systemd` in ExecStart — change "systemd" to "cgroupfs" there so the two drivers agree, then restart both services:

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
KillMode=process
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
MountFlags=slave

[Install]
WantedBy=multi-user.target

[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl restart docker.service
[root@linux-node3 ~]# systemctl restart kubelet
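The mismatch is just two strings disagreeing, so it can be caught before restarting anything. A sketch comparing the two driver names; the canned values below are the ones from the log above, not read from a real daemon (on a live node, `docker info --format '{{.CgroupDriver}}'` reports docker's side).

```shell
# Canned values taken from the failure log above, for a dry-run check:
docker_driver=systemd     # docker's side, e.g. from `docker info`
kubelet_driver=cgroupfs   # kubelet's default when --cgroup-driver is unset
if [ "$docker_driver" != "$kubelet_driver" ]; then
  echo "cgroup driver mismatch: docker=$docker_driver kubelet=$kubelet_driver"
fi
```

Either side can be changed to resolve it; this article changes docker's `native.cgroupdriver` to cgroupfs, but aligning the kubelet to systemd instead is equally valid as long as both match.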


Reprinted from: https://www.cnblogs.com/linuxk/p/9272778.html

