How to build a recommendation engine using Apache's Prediction IO Machine Learning Server

by Vaghawan Ojha

This post will guide you through installing Apache Prediction IO machine learning server. We’ll use one of its templates called Recommendation to build a working recommendation engine. The finished product will be able to recommend customized products depending upon a given user’s purchasing behavior.


The Problem

You've got a bunch of data and you need to predict something accurately so you can help your business grow its sales, customers, profits, or conversions, or whatever the business need is.

Recommendation systems are probably the first step everyone takes toward applying data science and machine learning. Recommendation engines use data as an input and run their algorithms over it. Then they output models from which we can make predictions about what a user is really going to buy, or what a user may like or dislike.

Enter Prediction IO

"Apache PredictionIO (incubating) is an open source Machine Learning Server built on top of state-of-the-art open source stack for developers and data scientists to create predictive engines for any machine learning task." — Apache Prediction IO documentation

The very first look at the documentation makes me feel good, because it's giving me access to a powerful tech stack for solving machine learning problems. What's more interesting is that Prediction IO gives access to many templates, which are helpful for solving real problems.

The template gallery consists of many templates for recommendation, classification, regression, natural language processing, and more. It makes use of technologies like Apache Hadoop, Apache Spark, ElasticSearch, and Apache HBase to make the machine learning server scalable and efficient. I'm not going to talk much about Prediction IO itself, because you can explore that on your own here.

So back to the problem: I have a bunch of data from user purchase histories, which consists of user_id, product_id, and purchased_date. Using these, I need to make customized predictions/recommendations for the user. Considering this problem, we'll use a Recommendation template with the Prediction IO machine learning server. We'll make use of the Prediction IO event server as well as bulk data import.

So let's get ahead. (Note: this guide assumes that you're using an Ubuntu system for the installation.)

Step 1: Download Apache Prediction IO

Go to the home directory of your current user and download the latest 0.10.0 Prediction IO Apache incubator repository. I assume you're in the following dir (/home/you/):

git clone git@github.com:apache/incubator-predictionio.git

Now go to the directory `incubator-predictionio` where we have cloned the Prediction IO repo. If you have cloned it in a different directory, make sure to be inside that dir in your terminal.


Now let’s checkout the current stable version of Prediction IO which is 0.10.0


cd incubator-predictionio # or any dir where you have cloned pio
git checkout release/0.10.0

Step 2: Let's Make A Distribution Of Prediction IO

./make-distribution.sh

If everything went OK, you will get a message like this in your console:

However, if you encounter something like this:

then you would have to remove the .ivy2 dir in your home directory; by default this folder is hidden. You need to remove it completely and then run ./make-distribution.sh again for the build to successfully generate a distribution file.

Personally, I've faced this issue many times, but I'm not sure this is the proper way to get past it. But removing the .ivy2 folder and running the make-distribution command again works.
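The workaround above boils down to one command. The `~/.ivy2` path is Ivy's default cache location; adjust it if you've configured Ivy differently:

```shell
# Remove the (hidden) Ivy dependency cache; the next ./make-distribution.sh
# run will re-download the dependencies from scratch.
rm -rf ~/.ivy2
```

After the cache is gone, run `./make-distribution.sh` again; the fresh dependency download can take a while.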

Step 3: Extract The Distribution File

After the successful build, we will have a file called PredictionIO-0.10.0-incubating.tar.gz inside the directory where we built Prediction IO. Now let's extract it into a directory called pio:

mkdir ~/pio
tar zxvf PredictionIO-0.10.0-incubating.tar.gz -C ~/pio

Make sure the tar.gz filename matches the distribution file that you have inside the original predictionio directory. If you forgot to check out the 0.10.0 version of Prediction IO, you're sure to get a different file name, because by default the version will be the latest one.

Step 4: Prepare For Downloading Dependencies

cd ~/pio
#Let’s make a vendors folder inside ~/pio/PredictionIO-0.10.0-incubating where we will save hadoop, elasticsearch and hbase.
mkdir ~/pio/PredictionIO-0.10.0-incubating/vendors

Step 5: Download and Setup Spark

wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.1-bin-hadoop2.6.tgz

If your current directory is ~/pio, the command will download Spark inside the pio dir. Now let's extract it. Depending upon where you downloaded it, you might want to change the command below.

tar zxvfC spark-1.5.1-bin-hadoop2.6.tgz PredictionIO-0.10.0-incubating/vendors
# This will extract the spark setup that we downloaded and put it inside the vendors folder of our fresh pio installation.

Make sure you have done mkdir PredictionIO-0.10.0-incubating/vendors earlier.

Step 6: Download & Setup ElasticSearch

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.4.tar.gz
#Let’s extract elastic search inside vendors folder.
tar zxvfC elasticsearch-1.4.4.tar.gz PredictionIO-0.10.0-incubating/vendors

Step 7: Download and Setup Hbase

wget http://archive.apache.org/dist/hbase/hbase-1.0.0/hbase-1.0.0-bin.tar.gz
#Let’s extract it.
tar zxvfC hbase-1.0.0-bin.tar.gz PredictionIO-0.10.0-incubating/vendors

Now let's edit hbase-site.xml to point the HBase configuration to the right dir. Considering you're inside the ~/pio dir, you can run this command and edit the hbase conf:

nano PredictionIO-0.10.0-incubating/vendors/hbase-1.0.0/conf/hbase-site.xml

Replace the configuration block with the following configuration.


<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/you/pio/PredictionIO-0.10.0-incubating/vendors/hbase-1.0.0/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/you/pio/PredictionIO-0.10.0-incubating/vendors/hbase-1.0.0/zookeeper</value>
  </property>
</configuration>

Here "you" signifies your user dir; for example, if you're doing all this as the user "tom", then it would be something like file:///home/tom/…

Make sure the right files are there.


Now let’s set up JAVA_HOME in hbase-env.sh .


nano PredictionIO-0.10.0-incubating/vendors/hbase-1.0.0/conf/hbase-env.sh

If you're unsure about which version of JDK you're currently using, follow these steps and make the necessary changes if required.

We need Java SE Development Kit 7 or greater for Prediction IO to work. Now let’s make sure we’re using the right version by running:


sudo update-alternatives --config java

By default I’m using:


java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

If you're using a version below 1.7, you should change the java config to use a version of java that is 1.7 or greater. You can change that with the update-alternatives command as given above. In my case the command sudo update-alternatives --config java outputs something like this:

If you have any trouble setting this up, you can follow this link.


Now let’s export the JAVA_HOME path in the .bashrc file inside /home/you/pio.


Considering you’re on ~/pio dir, you could do this: nano .bashrc


Don't forget to run source .bashrc after you set up JAVA_HOME in the .bashrc.
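As a sketch, the lines you add to `.bashrc` look like this. The JDK path below is an assumption for OpenJDK 8 on Ubuntu; check your actual path with `sudo update-alternatives --config java` and adjust:

```shell
# Point JAVA_HOME at your JDK installation
# (the path below is an example for OpenJDK 8 on Ubuntu -- adjust to yours)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
```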

Step 8: Configure the Prediction IO Environment

Now let's configure pio-env.sh to put the final touch on our Prediction IO machine learning server installation.

nano PredictionIO-0.10.0-incubating/conf/pio-env.sh

We're not using PostgreSQL or MySQL for our event server, so let's comment out those sections and make pio-env.sh look something like this:

#!/usr/bin/env bash
#
# Copy this file as pio-env.sh and edit it for your site's configuration.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.

# SPARK_HOME: Apache Spark is a hard dependency and must be configured.
SPARK_HOME=$PIO_HOME/vendors/spark-1.5.1-bin-hadoop2.6

POSTGRES_JDBC_DRIVER=$PIO_HOME/lib/postgresql-9.4-1204.jdbc41.jar
MYSQL_JDBC_DRIVER=$PIO_HOME/lib/mysql-connector-java-5.1.37.jar

# ES_CONF_DIR: You must configure this if you have advanced configuration for
#              your Elasticsearch setup.
ES_CONF_DIR=$PIO_HOME/vendors/elasticsearch-1.4.4/conf

# HADOOP_CONF_DIR: You must configure this if you intend to run PredictionIO
# with Hadoop 2.
HADOOP_CONF_DIR=$PIO_HOME/vendors/spark-1.5.1-bin-hadoop2.6/conf

# HBASE_CONF_DIR: You must configure this if you intend to run PredictionIO
# with HBase on a remote cluster.
HBASE_CONF_DIR=$PIO_HOME/vendors/hbase-1.0.0/conf

# Filesystem paths where PredictionIO uses as block storage.
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp

# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities. Default values are shown below.
#
# For more information on storage configuration please refer to
# http://predictionio.incubator.apache.org/system/anotherdatastore/

# Storage Repositories

# Default is to use PostgreSQL
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH

PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE

PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS

# Storage Data Sources

# PostgreSQL Default Settings
# Please change "pio" to your database name in PIO_STORAGE_SOURCES_PGSQL_URL
# Please change PIO_STORAGE_SOURCES_PGSQL_USERNAME and
# PIO_STORAGE_SOURCES_PGSQL_PASSWORD accordingly
# PIO_STORAGE_SOURCES_PGSQL_TYPE=jdbc
# PIO_STORAGE_SOURCES_PGSQL_URL=jdbc:postgresql://localhost/pio
# PIO_STORAGE_SOURCES_PGSQL_USERNAME=pio
# PIO_STORAGE_SOURCES_PGSQL_PASSWORD=root

# MySQL Example
# PIO_STORAGE_SOURCES_MYSQL_TYPE=jdbc
# PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://localhost/pio
# PIO_STORAGE_SOURCES_MYSQL_USERNAME=root
# PIO_STORAGE_SOURCES_MYSQL_PASSWORD=root

# Elasticsearch Example
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=firstcluster
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=$PIO_HOME/vendors/elasticsearch-1.4.4

# Local File System Example
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_LOCALFS_PATH=$PIO_FS_BASEDIR/models

# HBase Example
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=$PIO_HOME/vendors/hbase-1.0.0

Step 9: Configure cluster name in ElasticSearch config

Since the line PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=firstcluster points to our cluster name in the ElasticSearch configuration, let's replace the default cluster name in the ElasticSearch configuration:

nano PredictionIO-0.10.0-incubating/vendors/elasticsearch-1.4.4/config/elasticsearch.yml
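Inside `elasticsearch.yml`, uncomment the `cluster.name` line and set it to the same value configured in `pio-env.sh`. A minimal sketch of the relevant line:

```yaml
# In elasticsearch-1.4.4/config/elasticsearch.yml — must match
# PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME in pio-env.sh
cluster.name: firstcluster
```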

Step 10: Export The Prediction IO Path

Let's now export the Prediction IO path so we can freely use the pio command without pointing to its bin every time. Run the following command in your terminal:

PATH=$PATH:/home/you/pio/PredictionIO-0.10.0-incubating/bin; export PATH

Step #11: Give Permission To Prediction IO Installation

sudo chmod -R 775 ~/pio

This is vital, because if we don't give permission to the pio folder, the Prediction IO process won't be able to write log files.

Step #12: Start Prediction IO Server

Now we're ready to go; let's start our Prediction IO server. Before running this command, make sure you exported the pio path as described above.

pio-start-all
#if you forgot to export the pio path, it won't work and you'll have to point to the pio bin path manually.

If everything is OK to this point, you will see output something like this:

Note: If you forgot to give permission, there will be issues writing logs, and if your JAVA_HOME path is incorrect, HBase won't start properly and will give you an error.

Step #13: Verify The Process

Now let's verify our installation with pio status. If everything is OK, you will get an output like this:

If you encounter errors in HBase or any other backend storage, make sure everything was started properly.

Our Prediction IO Server is ready to implement the template now.


Implementing the Recommendation Engine

A recommendation engine template is a Prediction IO engine template that uses collaborative filtering to make personalized recommendations to the user. It can be used in an e-commerce site, a news site, or any application that collects user event histories to give users a personalized experience.

We'll implement this template in Prediction IO with a little e-commerce user data, just to run a sample experiment with the Prediction IO machine learning server.

Now let's go back to our home dir: cd ~

Step 14: Download the Recommendation Template

pio template get apache/incubator-predictionio-template-recommender MyRecommendation

It will ask for a company name and an author name; enter them when prompted. Now we have a MyRecommendation template inside our home dir. Just a reminder: you can put the template anywhere you want.

15. Create Our First Prediction IO App

Now let's go inside the MyRecommendation dir: cd MyRecommendation

Once you're inside the template dir, let's create our first Prediction IO app, called ourrecommendation.

You will get output like this. Please remember that you can give your app any name, but for this example I'll be using the app name ourrecommendation.

pio app new ourrecommendation

This command will output something like this:


Let’s verify that our new app is there with this command:


pio app list

Now our app should appear in the list.

Step 16: Import Some Sample Data

Let's download the sample data from the gist, and put it inside an importdata folder inside the MyRecommendation folder.

mkdir importdata

Copy the sample-data.json file that you just created inside the importdata folder.

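If you'd rather generate the import file yourself instead of downloading the gist, here's a sketch. The field layout follows the PredictionIO event API; the event name "buy" and the user/product ids below are assumptions for this example — map them onto your own purchase history fields (user_id, product_id, purchased_date):

```python
import json

# Build a few hypothetical "buy" events in the newline-delimited JSON
# format that `pio import` expects (one event object per line).
events = []
for u in range(1, 4):          # user1..user3 (hypothetical users)
    for p in range(1, 4):      # product1..product3 (hypothetical products)
        events.append({
            "event": "buy",
            "entityType": "user",
            "entityId": "user%d" % u,
            "targetEntityType": "item",
            "targetEntityId": "product%d" % p,
            "eventTime": "2017-06-01T00:00:00.000Z",  # your purchased_date
        })

with open("sample-data.json", "w") as f:
    for e in events:
        f.write(json.dumps(e) + "\n")
```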

Finally, let's import the data into our ourrecommendation app. Considering you're inside the MyRecommendation dir, you can run this to batch import the events:

pio import --appid 1 --input importdata/sample-data.json

(Note: make sure the appid of ourrecommendation is the same as the appid you just provided)

Step 17: Build The App

Before building the app, let's edit the engine.json file inside the MyRecommendation directory and put our app name in it. It should look something like this:

Note: Don’t copy this, just change the “appName” in your engine.json.


{
  "id": "default",
  "description": "Default settings",
  "engineFactory": "orgname.RecommendationEngine",
  "datasource": {
    "params" : {
      "appName": "ourrecommendation"
    }
  },
  "algorithms": [
    {
      "name": "als",
      "params": {
        "rank": 10,
        "numIterations": 5,
        "lambda": 0.01,
        "seed": 3
      }
    }
  ]
}

Note: the "engineFactory" is generated automatically when you pull the template in step 14, so you don't have to change it. In my case, it's my orgname, which I entered at the terminal prompt during installation of the template. In your engine.json you just need to modify the appName; please don't change anything else in there.

In the same dir where our MyRecommendation engine template lives, let's run this pio command to build our app:

pio build

(Note: if you want to see all the messages during the building process, you can run pio build --verbose)

It can take some time to build our app, since this is the first time; subsequent builds take less time. You should get output like this:

Our engine is now ready to train our data.


Step 18: Train The Dataset

pio train

If you get an error like the one below in the middle of training, you may have to change the number of iterations in your engine.json and rebuild the app.

Let's change numIterations in engine.json from the default 20 to 5:

"numIterations": 5,

Now let's build the app with pio build, then run pio train again. The training should complete successfully. After finishing the training you will get a message like this:

Please note that this training works just for small data; if you want to try a large data set, we would have to set up a standalone Spark worker to accomplish the training. (I will write about this in a future post.)

Step 19: Deploy and Serve the Prediction

pio deploy
#by default it will take the 8000 port.

We will now have our Prediction IO server running.

Note: to keep it simple, I'm not discussing the event server in this post, since the post would get even longer; we're focusing on a simple use case of Prediction IO.

Now let’s get the prediction using curl.


Open up a new terminal and run:

curl -H "Content-Type: application/json" \
-d '{ "user": "user1", "num": 4 }' http://localhost:8000/queries.json

In the above query, user corresponds to the user_id in our event data, and num means how many recommendations we want to get.
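The same query can also be sent from code. Below is a sketch using only the Python standard library; the endpoint and payload mirror the curl call above, and `recommend` assumes the engine from `pio deploy` is running on port 8000:

```python
import json
import urllib.request

def build_query(user, num):
    # "user" matches a user_id from the imported events;
    # "num" is how many recommendations to return.
    return {"user": user, "num": num}

def recommend(user, num, url="http://localhost:8000/queries.json"):
    # POST the query to the deployed engine and decode the JSON response.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_query(user, num)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```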

Now you will get a result like this:

{"itemScores":[{"item":"product5","score":3.9993937903501093},{"item":"product101","score":3.9989989282500904},{"item":"product30","score":3.994934059438341},{"item":"product98","score":3.1035806376677866}]}
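A quick sketch of consuming that response: the itemScores list appears to be sorted by score already, but picking the max makes the intent explicit:

```python
import json

# The exact response shown above
raw = ('{"itemScores":[{"item":"product5","score":3.9993937903501093},'
       '{"item":"product101","score":3.9989989282500904},'
       '{"item":"product30","score":3.994934059438341},'
       '{"item":"product98","score":3.1035806376677866}]}')

response = json.loads(raw)
best = max(response["itemScores"], key=lambda s: s["score"])
print(best["item"])  # -> product5
```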

That's it! Great job. We're done. But wait, what's next?

  • Next we will use a Spark standalone cluster to train a large dataset (believe me, it's easy; if you want to do it right now, you can follow the documentation in Prediction IO)

  • We will use Universal Recommender from Action ML to build a recommendation engine.


Important Notes:


  • The template we used uses the ALS algorithm with explicit feedback; however, you can easily switch to implicit feedback depending upon your need.

  • If you’re curious about Prediction IO and want to learn more you can do that on the Prediction IO official site.


  • If your Java version is not suitable for Prediction IO specification, then you are sure to run into problems. So make sure you configure this first.

  • Don’t run any of the commands described above with sudo except to give permission. Otherwise you will run into problems.


  • Make sure your java path is correct, and make sure to export the Prediction IO path. You might want to add the Prediction IO path to your .bashrc or profile as well depending upon your need.


Update 2017/07/14: Using Spark To Train Real Data Sets

We have Spark installed inside our vendors folder; with our current installation, our Spark bin is in the following dir:

~/pio/PredictionIO-0.10.0-incubating/vendors/spark-1.5.1-bin-hadoop2.6/sbin

From there we have to set up a Spark primary and a replica worker to execute our model training and accomplish it faster. If your training seems stuck, we can use the Spark options to accomplish the training tasks.

Start the Spark Primary

~/pio/PredictionIO-0.10.0-incubating/vendors/spark-1.5.1-bin-hadoop2.6/sbin/start-master.sh

This will start the Spark primary. Now let's browse the Spark primary's web UI by going to http://localhost:8080/ in the browser.

Now let's copy the primary URL to start the replica worker. In our case the primary Spark URL is something like this:

spark://your-machine:7077 (your-machine signifies your machine's hostname)

~/pio/PredictionIO-0.10.0-incubating/vendors/spark-1.5.1-bin-hadoop2.6/sbin/start-slave.sh spark://your-machine:7077

The worker will start. Refresh the web UI; this time you will see the registered worker. Now let's run the training again.

pio train -- --master spark://localhost:7077 --driver-memory 4G --executor-memory 6G

Great!


Special Thanks: Pat Ferrel From Action ML & Marius Rabenarivo


Translated from: https://www.freecodecamp.org/news/building-an-recommendation-engine-with-apache-prediction-io-ml-server-aed0319e0d8/
