As you can read in this article, I recently had some trouble with my email server and decided to outsource email administration to Amazon's Simple Email Service (SES).
The problem with that solution was that I had SES save new messages to an S3 bucket, and using the AWS Management Console to read files within S3 buckets gets stale really fast.
So I decided to write a Bash script to automate the process of downloading, properly storing, and viewing new messages.
While I wrote this script for use on my Ubuntu Linux desktop, it wouldn't require too much fiddling to make it work on a macOS or Windows 10 system through Windows Subsystem for Linux.
Here's the complete script all in one piece. After you take a few moments to look it over, I'll walk you through it one step at a time.
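#!/bin/bash
# Download all new messages from the S3 bucket
aws s3 cp \
   --recursive \
   s3://bucket-name/ \
   /home/david/s3-emails/tmpemails/ \
   --profile myaccount

tmp_file_location=/home/david/s3-emails/tmpemails/*
base_location=/home/david/s3-emails/emails/

today=$(date +"%m_%d_%Y")

[[ -d ${base_location}/"$today" ]] || mkdir ${base_location}/"$today"

for FILE in $tmp_file_location
do
   mv $FILE ${base_location}/${today}/email$(rand)
done

for NEWFILE in ${base_location}/${today}/*
do
   gedit $NEWFILE
done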
We'll begin with the single command to download any messages currently residing in my S3 bucket (by the way, I've changed the names of the bucket and other filesystem and authentication details to protect my privacy).
aws s3 cp \
   --recursive \
   s3://bucket-name/ \
   /home/david/s3-emails/tmpemails/ \
   --profile myaccount
Of course, this will only work if you've already installed and configured the AWS CLI for your local system. Now's the time to do that if you haven't already.
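If you haven't set up a named profile yet, the CLI will walk you through it; here's a minimal sketch, with placeholder keys and region standing in for your own credentials:

$ aws configure --profile myaccount
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json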
The cp command stands for "copy," --recursive tells the CLI to apply the operation even to multiple objects, s3://bucket-name points to my bucket (your bucket name will obviously be different), the /home/david... line is the absolute filesystem address to which I'd like the messages copied, and the --profile argument tells the CLI which of my multiple AWS accounts I'm referring to.
The next section sets two variables that will make it much easier for me to specify filesystem locations through the rest of the script.
tmp_file_location=/home/david/s3-emails/tmpemails/*
base_location=/home/david/s3-emails/emails/
Note how the value of the tmp_file_location variable ends with an asterisk. That's because I want to refer to the files within that directory, rather than the directory itself.
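With a couple of messages waiting in the directory, echoing the variable shows the shell expanding that asterisk into individual file paths (the object names here are hypothetical stand-ins for the kind SES generates):

$ echo $tmp_file_location
/home/david/s3-emails/tmpemails/8uv2pl0qm3x7 /home/david/s3-emails/tmpemails/c19hr6t2snd4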
I'll create a new permanent directory within the .../emails/ hierarchy to make it easier for me to find messages later. The name of this new directory will be the current date.
today=$(date +"%m_%d_%Y")
[[ -d ${base_location}/"$today" ]] || mkdir ${base_location}/"$today"
I first create a new shell variable named today that will be populated by the output of the date +"%m_%d_%Y" command. date itself outputs the full date/timestamp, but what follows ("%m_%d_%Y") edits that output to a simpler and more readable format.
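Running both forms at a prompt makes the difference visible (sample output; yours will reflect the current date):

$ date
Fri Feb 28 09:15:03 EST 2020
$ date +"%m_%d_%Y"
02_28_2020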
I then test for the existence of a directory using that name - which would indicate that I've already received emails on that day and, therefore, there's no need to recreate the directory. If such a directory does not exist (||), then mkdir will create it for me. If you don't run this test, your command could return annoying error messages.
Since Amazon SES gives ugly and unreadable names to each of the messages it drops into my S3 bucket, I'll now dynamically rename them while, at the same time, moving them over to their new home (in the dated directory I just created).
for FILE in $tmp_file_location
do
   mv $FILE ${base_location}/${today}/email$(rand)
done
The for...do...done loop will read each of the files in the directory represented by the $tmp_file_location variable and then move it to the directory I just created (represented by the $base_location variable in addition to the current value of $today).
As part of the same operation, I'll give it its new name, the string "email" followed by a random number generated by the rand command. You may need to install a random number generator: that'll be apt install rand on Ubuntu.
An earlier version of the script created names differentiated by shorter, sequential numbers that were incremented using a count=1...count=$((count+1)) logic within the for loop. That worked fine as long as I didn't happen to receive more than one batch of messages on the same day. If I did, then the new messages would overwrite older files in that day's directory.
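For reference, the move loop in that earlier version looked something like this (a sketch reconstructed from the logic just described):

count=1
for FILE in $tmp_file_location
do
   mv $FILE ${base_location}/${today}/email$count
   count=$((count+1))
done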
I guess it's mathematically possible that my rand command could assign overlapping numbers to two files but, given that the default range rand uses is between 1 and 32,576, that's a risk I'm willing to take. (For a batch of five messages, the birthday-problem odds of at least one collision come to roughly 10/32,576, or about 0.03%.)
At this point, there should be files in the new directory with names like email3039, email25343, etc. for each of the new messages I was sent.
Running the tree command on my own system shows me that five messages were saved to my 02_27_2020 directory, and one more to 02_28_2020 (these files were generated using the older version of my script, so they're numbered sequentially).
There are currently no files in tmpemails - that's because the mv command moves files to their new location, leaving nothing behind.
$ tree
.
├── emails
│   ├── 02_27_2020
│   │   ├── email1
│   │   ├── email2
│   │   ├── email3
│   │   ├── email4
│   │   └── email5
│   └── 02_28_2020
│       └── email1
└── tmpemails
The final section of the script opens each new message in my favorite desktop text editor (Gedit). It uses a similar for...do...done loop, this time reading the names of each file in the new directory (referenced using the $today variable) and then opening the file in Gedit. Note the asterisk I added to the end of the directory location.
for NEWFILE in ${base_location}/${today}/*
do
   gedit $NEWFILE
done
There's still one more thing to do. If I don't clean out my S3 bucket, it'll download all the accumulated messages each time I run the script. That'll make it progressively harder to manage.
So, after successfully downloading my new messages, I run this short script to delete all the files in the bucket:
#!/bin/bash
# Delete all existing emails
aws s3 rm --recursive s3://bucket-name/ --profile myaccount
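To confirm the bucket really is empty before the next run, a quick listing (same placeholder bucket and profile names) should come back with nothing:

$ aws s3 ls s3://bucket-name/ --profile myaccount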
Originally published at https://www.freecodecamp.org/news/bash-script-download-view-from-s3-bucket/