How to implement log aggregation for AWS Lambda
by Yan Cui
During the execution of a Lambda function, whatever you write to stdout (for example, using console.log in Node.js) will be captured by Lambda and sent to CloudWatch Logs asynchronously in the background. And it does this without adding any overhead to your function execution time.
You can find all the logs for your Lambda functions in CloudWatch Logs. There is a unique log group for each function. Each log group then consists of many log streams, one for each concurrently executing instance of the function.
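The log group follows a fixed naming convention, /aws/lambda/&lt;function-name&gt;, so you can always derive it from the function name (helper is illustrative):

```javascript
// Lambda creates one log group per function, named /aws/lambda/<function-name>.
// Each concurrent instance of the function writes to its own log stream
// within that group.
const logGroupFor = (functionName) => `/aws/lambda/${functionName}`;

console.log(logGroupFor('checkout-api')); // /aws/lambda/checkout-api
```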
You can send logs to CloudWatch Logs yourself via the PutLogEvents operation. Or you can send them to your preferred log aggregation service, such as Splunk or Elasticsearch.
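A PutLogEvents request takes the target group, stream, and a batch of timestamped messages. A sketch of building those parameters (the actual network call via the AWS SDK is omitted so the example stays self-contained; group and stream names are illustrative):

```javascript
// Build the parameters for a CloudWatch Logs PutLogEvents call.
// In a real function you would pass this object to the AWS SDK's
// CloudWatch Logs client — omitted here to keep the sketch self-contained.
const buildPutLogEvents = (logGroupName, logStreamName, messages) => ({
  logGroupName,
  logStreamName,
  logEvents: messages.map((message) => ({
    message,
    timestamp: Date.now(), // when the event occurred, in ms since epoch
  })),
});

const params = buildPutLogEvents(
  '/aws/lambda/my-fn', '2020/01/01/[$LATEST]abc123', ['hello world']);
```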
But remember that everything has to be done during a function's invocation. If you make additional network calls during the invocation, then you'll pay for that additional execution time. Your users would also have to wait longer for the API to respond.
These extra network calls might only add 10–20ms per invocation. But you have microservices, and a single user action can involve several API calls. Those 10–20ms per API call can compound and add over 100ms to your user-facing latency, which is enough to reduce sales by 1% according to Amazon.
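The compounding is easy to see with some back-of-the-envelope arithmetic (the fan-out count is an illustrative assumption):

```javascript
// Back-of-the-envelope: if one user action fans out to several API calls,
// the per-call logging overhead compounds in the user-facing latency.
const perCallOverheadMs = 15; // assume ~10–20ms of extra work per invocation
const apiCallsPerAction = 8;  // illustrative fan-out across microservices
const addedLatencyMs = perCallOverheadMs * apiCallsPerAction;

console.log(addedLatencyMs); // 120 — already past the 100ms mark
```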
So, don’t do that!
Instead, process the logs from CloudWatch Logs after the fact.
In the CloudWatch Logs console, you can select a log group and choose to stream the data directly to Amazon's hosted Elasticsearch service.
This is very useful if you're using the hosted Elasticsearch service already. But if you're still evaluating your options, then give this post a read before you decide on the AWS-hosted Elasticsearch.
You can also stream the logs to a Lambda function instead. There are even a number of Lambda function blueprints for pushing CloudWatch Logs to other log aggregation services already.
Clearly this is something a lot of AWS's customers have asked for.
You can use these blueprints to help you write a Lambda function that'll ship CloudWatch Logs to your preferred log aggregation service. But here are a few more things to keep in mind.
Whenever you create a new Lambda function, it'll create a new log group in CloudWatch Logs. You want to avoid a manual process for subscribing log groups to your log-shipping function.
Instead, enable CloudTrail, and then set up an event pattern in CloudWatch Events to invoke another Lambda function whenever a log group is created.
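A sketch of what that event pattern might look like, matching the CreateLogGroup API call as recorded by CloudTrail:

```json
{
  "source": ["aws.logs"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["logs.amazonaws.com"],
    "eventName": ["CreateLogGroup"]
  }
}
```

With this rule in place, the subscriber function fires once per new log group, so every new Lambda function's logs are wired up automatically.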
You can do this one-off setup in the CloudWatch console.
If you're working with multiple AWS accounts, then you should avoid making the setup a manual process. With the Serverless framework, you can set up the event source for this subscribe-log-group function in the serverless.yml.
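A sketch of what that serverless.yml event source might look like (function and handler names are illustrative):

```yaml
# Illustrative serverless.yml fragment: trigger subscribe-log-group
# whenever CloudTrail records a CreateLogGroup call.
functions:
  subscribe-log-group:
    handler: handler.subscribeLogGroup
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.logs
            detail-type:
              - AWS API Call via CloudTrail
            detail:
              eventSource:
                - logs.amazonaws.com
              eventName:
                - CreateLogGroup
```

Because this lives in the service's own configuration, deploying the service into a new account sets up the automation with no manual steps.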
Another thing to keep in mind is that you need to avoid subscribing the log group for the ship-logs function to itself. It'll create an infinite invocation loop, and that's a painful lesson that you want to avoid.
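A one-line guard in the subscriber function is enough to break the loop (the group name below is an illustrative assumption):

```javascript
// Guard inside the subscribe-log-group function: never subscribe the
// ship-logs function's own log group to ship-logs, otherwise shipping
// its logs would generate more logs and trigger it in an infinite loop.
const SHIP_LOGS_GROUP = '/aws/lambda/ship-logs'; // illustrative name

const shouldSubscribe = (logGroupName) => logGroupName !== SHIP_LOGS_GROUP;

console.log(shouldSubscribe('/aws/lambda/ship-logs'));    // false — skip it
console.log(shouldSubscribe('/aws/lambda/checkout-api')); // true
```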
One more thing.
By default, when Lambda creates a new log group for your function, the retention policy is set to Never Expire. This is overkill, as the data storage cost can add up over time. It's also unnecessary if you're shipping the logs elsewhere already!
We can apply the same technique above and add another Lambda function to automatically update the retention policy to something more reasonable.
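That function boils down to a PutRetentionPolicy call against the newly created group. A sketch of building its parameters (30 days is an illustrative choice; the actual SDK call is omitted to keep the example self-contained):

```javascript
// Build the parameters for a CloudWatch Logs PutRetentionPolicy call,
// as the log-group-created handler would pass to the AWS SDK.
// retentionInDays only accepts a fixed set of values (1, 3, 5, 7, 14, 30, ...);
// 30 days here is an illustrative default.
const buildRetentionParams = (logGroupName, retentionInDays = 30) => ({
  logGroupName,
  retentionInDays,
});

const retention = buildRetentionParams('/aws/lambda/checkout-api');
```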
If you already have lots of existing log groups, then consider writing a one-off script to update them all. You can do this by recursing through all log groups with the DescribeLogGroups API call.
If you're interested in applying these techniques yourself, I have put together a simple demo project for you. If you follow the instructions in the README and deploy the functions, then all the logs for your Lambda functions will be delivered to Logz.io.
Translated from: https://www.freecodecamp.org/news/how-to-implement-log-aggregation-for-aws-lambda-ca714bf02f48/