How To Decide Between Algorithm Outputs Using The Validation Error Rate
Monument (www.monument.ai) enables you to quickly apply algorithms to data in a no-code interface. But, after you drag the algorithms onto data to generate predictions, you need to decide which algorithm or combination of algorithms is most reliable for your task.
In the ocean temperature tutorial, we cleaned open remote sensing data and fed the data into Monument in order to forecast future ocean temperatures. In that case, we used visual inspection to evaluate the accuracy of different algorithms, which was possible because the historical data roughly formed a sine curve. Visual inspection is one tool in the data science toolbox, but there are other tools as well.
The Validation Error Rate is another useful tool when you want a more fine-grained comparison or when visual inspection does not yield obvious insights. There are other error functions that can be used, but the Validation Error Rate is the default error function in Monument.
What Is The Validation Error Rate And Why Is It Important?
The Validation Error Rate measures the distance between “out of sample” values and estimates produced by the algorithm. You can find this metric in the INFO box in the lower-left corner of the MODEL workspace.
As a general rule of thumb, the “more negative” your Validation Error Rate is, the more accurate the model is. Negative infinity would be a perfect model. In the real world, as we will see with our ocean temperature data, sometimes the best you can do is a small, but nevertheless positive number.
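Monument does not spell out the formula behind this number in the interface, but the behavior described above (lower is better, negative infinity for a perfect fit) is consistent with a log-scaled error on held-out values. Below is a minimal, purely illustrative Python sketch of that idea; the function name and the log-RMSE choice are assumptions, not Monument's documented implementation.

```python
import numpy as np

def validation_error_rate(y_true, y_pred):
    """Illustrative stand-in for a held-out ("out of sample") error metric.

    Assumption for this sketch: a log-scaled RMSE, which matches the behavior
    described above -- lower (more negative) is better, and a perfect fit
    drives the value toward negative infinity. This is NOT Monument's
    documented formula.
    """
    rmse = np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    with np.errstate(divide="ignore"):  # log(0) -> -inf without a warning
        return np.log(rmse)

# Toy example: a good forecast scores negative, a perfect one scores -inf.
actual   = np.array([10.0, 11.0, 12.0, 13.0])
forecast = np.array([10.0, 11.1, 11.9, 13.0])
print(validation_error_rate(actual, forecast))  # small negative number
print(validation_error_rate(actual, actual))    # -inf (perfect model)
```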
Currently, Monument only displays one Validation Error Rate at a time. To view the Validation Error Rate for other algorithms that you have trained, click the drop-down arrow on the right side of the algorithm pill and select SHOW ERROR RATE.
To compare the performance of the models, I have pasted below a table of all the Validation Error Rates applied to the ocean temperatures data, sorted from lowest to highest.
As we discovered in the tutorial, with default parameters, AR and G-DyBM perform the best on the cleaned and transformed data.
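Since the app shows one rate at a time, it can be convenient to jot down the rates you observe and sort them outside the tool. A trivial sketch follows; the numbers are hypothetical placeholders, not the values from the table above.

```python
# Hypothetical placeholder rates, not the actual values from the tutorial.
observed_rates = {
    "AR": 3.21,
    "G-DyBM": 3.24,
    "LSTM": 3.30,
    "LightGBM": 3.38,
}

# Sort lowest (best) to highest (worst), mirroring the table's ordering.
for algo, rate in sorted(observed_rates.items(), key=lambda kv: kv[1]):
    print(f"{algo:10s} {rate:6.3f}")
```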
How To Improve Algorithm Performance
Typically, we can improve the Validation Error Rate — i.e. make it “more negative” — by adjusting the algorithms’ parameters. You can access an algorithm’s parameters by selecting PARAMETERS in the algorithm pill drop-down.
Choosing which parameters to edit to improve performance depends heavily on your business objectives and the nature of the data you’re looking at. We will cover common cases in future tutorials, but the best approach is to experiment yourself to develop an intuition around which parameters most improve results for different kinds of data.
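As a concrete (if simplified) illustration of what parameter tuning means, the sketch below refits a plain autoregressive model with different lag orders and compares their held-out error on a toy seasonal series standing in for the ocean-temperature data. It is a generic Python example, not Monument's AR implementation or its parameter set.

```python
import numpy as np

def fit_ar(train, p):
    """Least-squares fit of an AR(p) model: y_t ~ c + a_1*y_{t-1} + ... + a_p*y_{t-p}."""
    rows = [np.r_[train[t - p:t][::-1], 1.0] for t in range(p, len(train))]
    coef, *_ = np.linalg.lstsq(np.array(rows), train[p:], rcond=None)
    return coef  # [a_1, ..., a_p, c]

def one_step_forecast(history, coef):
    """Predict the next value from the last p observations."""
    p = len(coef) - 1
    return float(np.r_[history[-p:][::-1], 1.0] @ coef)

def holdout_error(series, p, n_val=20):
    """Log-RMSE of one-step-ahead forecasts on the last n_val points (held out)."""
    train, val = series[:-n_val], series[-n_val:]
    coef = fit_ar(train, p)
    history, preds = list(train), []
    for actual in val:
        preds.append(one_step_forecast(np.array(history), coef))
        history.append(actual)
    rmse = np.sqrt(np.mean((np.array(preds) - val) ** 2))
    return np.log(rmse)

# Toy seasonal series standing in for the ocean-temperature data.
rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(len(t))

for p in (1, 2, 5, 10):  # candidate lag orders -- the "parameter" being tuned
    print(f"lags={p:2d}  validation error ~ {holdout_error(series, p):.3f}")
```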
Certain algorithms allow for automated parameter adjustment. In Monument, the LSTM and LightGBM algorithms also have “AutoML,” which is short for Automated Machine Learning. AutoML automatically adjusts an algorithm’s parameters to optimize performance. You can select AUTOML from the algorithm drop-down to access these capabilities.
For example, when we run AutoML on the HABSOS data, we can lower the Validation Error Rate by 0.04 from 3.273 to 3.233. Not a huge improvement on this particular data, but an improvement nonetheless. Often, the gains are much greater.
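Monument's AutoML internals are not described in this article, but the general pattern it automates — trying many parameter settings and keeping the one with the best validation score — can be sketched generically. Everything below (the search space, the scoring function, the random-search strategy) is a hypothetical illustration, not Monument's actual AutoML.

```python
import random

def random_search(score_fn, param_space, n_trials=25, seed=0):
    """Generic sketch of automated parameter tuning: sample many settings
    and keep the one with the lowest validation error."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in param_space.items()}
        score = score_fn(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical LightGBM-style search space, purely for illustration.
space = {"learning_rate": [0.01, 0.05, 0.1], "num_leaves": [15, 31, 63]}

def toy_score(params):
    # Stand-in for "train with these parameters, return the Validation
    # Error Rate"; here, lower is better and (0.05, 31) is the optimum.
    return abs(params["learning_rate"] - 0.05) + abs(params["num_leaves"] - 31) / 100

print(random_search(toy_score, space))
# e.g. ({'learning_rate': 0.05, 'num_leaves': 31}, 0.0)
```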
There are other reports within Monument that we can use to improve algorithm performance, including dependent variables, forecast training convergence, and feature importance. We'll explore these topics in future tutorials.
Translated from: https://medium.com/swlh/how-to-decide-between-algorithm-outputs-using-the-validation-error-rate-c288a358ca9b