Resources. When they are unlimited they are not important. But when they're limited, boy do you have challenges!
Recently, my team has faced such a challenge ourselves: we realised that we needed to upgrade the Node version on one of our Jenkins agents so we could build and properly test our Angular 7 app. However, we learned that we would also lose the ability to build our legacy AngularJS apps which require Node 8.
What were we to do?
Apart from eliminating the famous "It works on my machine" problem, Docker came in handy to tackle such a problem. However, there were certain challenges that needed to be addressed, such as Docker in Docker.
For this purpose, after a long period of trial and error, we built and published a Dockerfile that fit our team's needs. It helps run our builds, and the build looks like the following (a rough Jenkinsfile sketch of these steps is shown right after the list):
1. Install dependencies
2. Lint the code
3. Run unit tests
4. Run SonarQube analysis
5. Build the application
6. Build a docker image which would be deployed
7. Run the docker container
8. Run cypress tests
9. Push docker image to the repository
10. Run another Jenkins job to deploy it to the environment
11. Generate unit and functional test reports and publish them
12. Stop any running containers
13. Notify chat/email about the build
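To give a rough idea of how those steps map onto a Jenkins declarative pipeline, here is a minimal sketch. The stage names and npm script names are illustrative assumptions, not our exact pipeline; the real agent configuration and the trickier stages are covered in the rest of this post.

pipeline {
    agent { label 'frontend' }          // the real agent block is shown later in this post
    stages {
        stage('Install dependencies')  { steps { sh 'npm ci' } }
        stage('Lint')                  { steps { sh 'npm run lint' } }
        stage('Unit tests')            { steps { sh 'npm run test' } }
        stage('Sonar analysis')        { steps { echo 'run sonar-scanner here' } }
        stage('Build application')     { steps { sh 'npm run build' } }
        stage('Build docker image')    { steps { sh 'docker build -t application .' } }
        stage('Functional tests')      { steps { echo 'docker run + cypress, shown later in this post' } }
        stage('Push and deploy')       { steps { echo 'docker push + downstream deploy job' } }
    }
    post {
        always { echo 'publish reports, stop containers, notify chat/email' }
    }
}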
The docker image we needed
Our project is an Angular 7 project, which was generated using the angular-cli. We also have some dependencies that need Node 10.x.x. We lint our code with tslint, and run our unit tests with Karma and Jasmine. For the unit tests we need a Chrome browser installed so they can run with headless Chrome.
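As a rough illustration (assuming the standard angular-cli commands rather than our exact package.json scripts), the lint and unit-test steps boil down to something like this inside a pipeline stage:

stage('Lint and unit tests') {
    steps {
        sh 'npm ci'
        sh 'npx ng lint'
        // ChromeHeadless works because the base image already contains Chrome
        sh 'npx ng test --watch=false --browsers=ChromeHeadless'
    }
}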
This is why we decided to use the cypress/browsers:node10.16.0-chrome77 image. After we installed the dependencies, linted our code and ran our unit tests, we ran the SonarQube analysis. This required us to have OpenJDK 8 as well.
FROM cypress/browsers:node10.16.0-chrome77

# Install OpenJDK-8
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean;

# Fix certificate issues
RUN apt-get update && \
    apt-get install ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f;

# Setup JAVA_HOME -- useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME
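A SonarQube stage might then look roughly like the sketch below. The withSonarQubeEnv step comes from the SonarQube Scanner plugin for Jenkins; the server name 'sonar' and the npx sonar-scanner command (from the sonarqube-scanner npm package) are assumptions about the setup, not necessarily what we used.

stage('SonarQube analysis') {
    steps {
        // 'sonar' must match a SonarQube server name configured in Jenkins
        withSonarQubeEnv('sonar') {
            sh 'npx sonar-scanner'
        }
    }
}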
Once the sonar scan was ready, we built the application. One of the strongest principles in testing is that you should test the thing that will be used by your users. That is the reason we wanted to test the built code in exactly the same docker container as it would run in production.
We could, of course, serve the front-end from a very simple nodejs static server. But that would mean that everything an Apache HTTP server or an NGINX server usually does would be missing (for example all the proxies, gzip or brotli compression).
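To make that concrete, the production build and the production-like image can be produced in one stage, roughly like this. The ng build --prod command and a Dockerfile in the project root are assumptions about the project layout; 'application' is the image name the functional test stage later runs. Note that the docker build call only works once the Docker-in-Docker setup described below is in place.

stage('Build application and image') {
    steps {
        sh 'npx ng build --prod'
        // 'application' is the image started later by the functional test stage
        sh 'docker build -t application .'
    }
}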
Now while this is a strong principle, the biggest problem was that we were already running inside a Docker container. That is why we needed DIND (Docker in Docker).
After spending a whole day with my colleague researching, we found a solution which ended up working like a charm. The first and most important thing is that our build container needed the Docker executable.
# Install Docker executable
RUN apt-get update && apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/debian \
       $(lsb_release -cs) \
       stable" \
    && apt-get update \
    && apt-get install -y \
       docker-ce

RUN usermod -u 1002 node && groupmod -g 1002 node && gpasswd -a node docker
As you can see, we installed the docker executable and the necessary certificates, but we also added the rights and groups for our user. This second part is necessary because the host machine, our Jenkins agent, starts the container with -u 1002:1002. That is the user ID of our Jenkins agent, which runs the container unprivileged.
Of course this isn't everything. When the container starts, the docker daemon of the host machine must be mounted. So we needed to start the build container with some extra parameters. It looks like the following in a Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'btapai/pipelines:node-10.16.0-chrome77-openjdk8-CETtime-dind'
            label 'frontend'
            args '-v /var/run/docker.sock:/var/run/docker.sock -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket -e HOME=${workspace} --group-add docker'
        }
    }
    // ...
}
As you can see, we mounted two Unix sockets. /var/run/docker.sock mounts the docker daemon to the build container.

/var/run/dbus/system_bus_socket is a socket that allows cypress to run inside our container.

We needed -e HOME=${workspace} to avoid access rights issues during the build.

--group-add docker passes the host machine's docker group down, so that inside the container our user can use the docker daemon.
With these proper arguments, we were able to build our image, start it up and run our cypress tests against it.
But let's take a deep breath here. In Jenkins, we wanted to use multi-branch pipelines. Multibranch pipelines in Jenkins would create a Jenkins job for each branch that contained a Jenkinsfile. This meant that when we developed multiple branches they would have their own views.
There were some problems with this. The first problem was that if we built our image with the same name in all the branches, there would be conflicts (since our docker daemon was technically not inside our build container).
The second problem arose when the docker run command used the same port in every build (because you can't start the second container on a port that is already taken).
The third issue was getting the proper URL for the running application, because Dorothy, you are not in Localhost anymore.
Let's start with the naming. Getting a unique name is pretty easy with git, because commit hashes are unique. However, to get a unique port we had to use a little trick when we declared our environment variables:
pipeline {
    // ...
    environment {
        BUILD_PORT = sh(
            script: 'shuf -i 2000-65000 -n 1',
            returnStdout: true
        ).trim()
    }
    // ...
    stage('Functional Tests') {
        steps {
            sh "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
            // be patient, we are going to get the url as well. :)
        }
    }
    // ...
}
With the shuf -i 2000-65000 -n 1 command on certain Linux distributions you can generate a random number. Our base image uses Debian, so we were lucky here. The GIT_COMMIT environment variable was provided in Jenkins via the SCM plugin.
Now came the hard part: we were inside a docker container, there was no localhost, and the network inside docker containers can change.
It was also funny that when we started our container, it was running on the host machine's docker daemon. So technically it was not running inside our container. We had to reach it from the inside.
After several hours of investigation my colleague found a possible solution:

docker inspect --format "{{ .NetworkSettings.IPAddress }}"
But it did not work, because that IP address was not an IP address inside the container, but rather outside it.
Then we tried the NetworkSettings.Gateway property, which worked like a charm. So our Functional testing stage looked like the following:
stage('Functional Tests') {
    steps {
        sh "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
        sh 'npm run cypress:run -- --config baseUrl=http://`docker inspect --format "{{ .NetworkSettings.Gateway }}" "${GIT_COMMIT}"`:${BUILD_PORT}'
    }
}
It was a wonderful feeling to see our cypress tests running inside a docker container.
But then some of them failed miserably, because the failing cypress tests expected to see specific dates:
cy.get("created-date-cell").should("be.visible").and("contain", "2019.12.24 12:33:17")
But because our build container was set to a different timezone, the displayed date on our front-end was different.
Fortunately, it was an easy fix, and my colleague had seen it before. We installed the necessary time zones and locales. In our case we set the build container's timezone to Europe/Budapest, because our tests were written in this timezone.
# SETUP-LOCALE
RUN apt-get update \
    && apt-get install --assume-yes --no-install-recommends locales \
    && apt-get clean \
    && sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen \
    && sed -i -e 's/# hu_HU.UTF-8 UTF-8/hu_HU.UTF-8 UTF-8/' /etc/locale.gen \
    && locale-gen

ENV LANG="en_US.UTF-8" \
    LANGUAGE= \
    LC_CTYPE="en_US.UTF-8" \
    LC_NUMERIC="hu_HU.UTF-8" \
    LC_TIME="hu_HU.UTF-8" \
    LC_COLLATE="en_US.UTF-8" \
    LC_MONETARY="hu_HU.UTF-8" \
    LC_MESSAGES="en_US.UTF-8" \
    LC_PAPER="hu_HU.UTF-8" \
    LC_NAME="hu_HU.UTF-8" \
    LC_ADDRESS="hu_HU.UTF-8" \
    LC_TELEPHONE="hu_HU.UTF-8" \
    LC_MEASUREMENT="hu_HU.UTF-8" \
    LC_IDENTIFICATION="hu_HU.UTF-8" \
    LC_ALL=

# SETUP-TIMEZONE
RUN apt-get update \
    && apt-get install --assume-yes --no-install-recommends tzdata \
    && apt-get clean \
    && echo 'Europe/Budapest' > /etc/timezone && rm /etc/localtime \
    && ln -snf /usr/share/zoneinfo/'Europe/Budapest' /etc/localtime \
    && dpkg-reconfigure -f noninteractive tzdata
Since every crucial part of the build was now resolved, pushing the built image to the registry was just a docker push command. You can check out the whole dockerfile here.
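A push stage can be as small as the sketch below; the registry host and tag scheme are placeholders, not our real setup:

stage('Push docker image') {
    steps {
        // registry.example.com is a placeholder for the team's private registry
        sh "docker tag application registry.example.com/frontend/application:${GIT_COMMIT}"
        sh "docker push registry.example.com/frontend/application:${GIT_COMMIT}"
    }
}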
One thing remained, which was to stop running containers when the cypress tests failed. We did this easily using the always post step.
post {
    always {
        script {
            try {
                sh "docker stop ${GIT_COMMIT} && docker rm ${GIT_COMMIT}"
            } catch (Exception e) {
                echo 'No docker containers were running'
            }
        }
    }
}
Thank you very much for reading this blog post. I hope it helps you.
The original article can be read on my blog:
Translated from: https://www.freecodecamp.org/news/you-rang-mlord-docker-in-docker-with-jenkins-declarative-pipelines/