At work, when the data we need to process is too large for a local machine to run a Python script against, we turn to a server (also called a bastion host, 堡壘機). So how do you call a Python script on the bastion host from HIVE? Here is my summary of the steps and a few things to watch out for~
1. First, upload the Python script to the bastion host.
2. After uploading, write a shell JOB in HIVE:
# Set the path variables here so they can be reused
file_path="/home/chen_lib"      # base directory on the server
file_name_t="traindatas.csv"    # training data
file_name_y="df2.csv"           # result data set
python_name="Untitled1.py"      # python script
# Input data: the two databases are not connected, so the test data is placed
# on the server directly instead of being read from a table.
# To export a HIVE table to the server instead, run:
hive -e "set hive.resultset.use.unique.column.names=false;set hive.cli.print.header=true;
select * from table " >> $file_path/$file_name_t
# Run the python script; it writes the result file (df2.csv) back to the server
python2.7 $file_path/$python_name $file_path $file_path
# Load the result file from the server into the HIVE table
# (${zdt.format("yyyy-MM-dd")} is the scheduling platform's macro that fills in the partition date)
hive -e "LOAD DATA LOCAL INPATH '$file_path/$file_name_y' OVERWRITE INTO TABLE tablename partition (d='${zdt.format("yyyy-MM-dd")}')"
echo "Data import finished!"
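One thing to watch: the python2.7 line passes $file_path twice as positional arguments, but the script in step 4 never reads them and instead hardcodes /home/hotel/chen_lib/ (which also differs from the /home/chen_lib set above). A minimal sketch of reading the paths from the JOB instead (the variable names here are mine, not from the original script):

# coding: utf-8
# Sketch: pick up the directories passed on the command line by the shell JOB
import sys

in_dir = sys.argv[1]                    # first  $file_path argument
out_dir = sys.argv[2]                   # second $file_path argument
data_file = in_dir + "/traindatas.csv"  # where the JOB exported the training data
result_file = out_dir + "/df2.csv"      # where the JOB expects the result

This keeps the paths defined in one place (the JOB), so the script and the LOAD command cannot drift apart.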
3. When creating the HIVE table, note:
1) Use ',' as the field delimiter, since the input is a csv file; otherwise the loaded table ends up empty.
2) Create the table as textfile; if you create it as orc, the load fails with:
Caused by: java.io.IOException: Malformed ORC file
The reason: ORC is a columnar storage format, and an ORC table cannot be loaded directly from a local file; a direct load only works when the source table is itself stored as ORC, otherwise you get the error above. The usual workaround is to LOAD the csv into a textfile staging table first and then INSERT OVERWRITE ... SELECT it into the ORC table.
USE database;
CREATE TABLE tablename(
  hotelid int COMMENT 'field1 comment',
  max_quantity int COMMENT 'field2 comment',
  section_query_min int COMMENT 'field3 comment',
  section_query_max int COMMENT 'NULL'
)
COMMENT 'owner:chen'
PARTITIONED BY (d string COMMENT 'date')
row format delimited fields terminated by ','
STORED AS textfile;
4. A few things to note in the Python script:
1) The output path must match the path used in the HIVE JOB.
2) The column names must be stripped from the output table df2, otherwise the header row is read into the hive table as data.
3) The output csv must be comma-separated; with another delimiter such as '\t', the whole row ends up in a single column and cannot be read into the hive table (a quick check is sketched right after this list).
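Before the LOAD step, you can sanity-check the result file against points 2) and 3) above. A quick sketch of mine, with the path taken from the script that follows:

# coding: utf-8
# Quick check (sketch): the first line should be data, not column names,
# and a 4-column comma-separated row contains exactly 3 commas.
with open("/home/hotel/chen_lib/df2.csv") as f:
    first_line = f.readline().strip()
print(first_line)             # should begin with a hotelid value, not the word 'hotelid'
print(first_line.count(','))  # should print 3

The full script used in this example: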
# coding: utf-8
import pandas as pd
import numpy as np

file_path = "/home/hotel/chen_lib/"
file_name_t = "traindatas.csv"
file_name_y = "df2.csv"
data_ctrip = pd.read_csv(file_path + file_name_t, header='infer')
ret1 = []
# Loop over the rows, taking the maximum night count of each one
for row in range(data_ctrip.shape[0]):
    ret = []
    # the 20 night-count columns (quantity0_1 .. quantity19_20)
    # and their matching price-section columns (section0_1 .. section19_20)
    quantitylist = [data_ctrip.loc[row, 'quantity%d_%d' % (i, i + 1)] for i in range(20)]
    sectionlist = [data_ctrip.loc[row, 'section%d_%d' % (i, i + 1)] for i in range(20)]
    max_quantity = max(quantitylist)                       # largest night count
    max_quantity_index = quantitylist.index(max_quantity)  # index of the largest night count
    section_query = sectionlist[max_quantity_index]        # the matching price section
    section_query_min = int(section_query.split("-", 1)[0])  # lower bound of the section
    section_query_max = int(section_query.split("-", 1)[1])  # upper bound of the section
    ret.append([data_ctrip.loc[row, 'hotelid'], max_quantity, section_query_min, section_query_max])
    ret1.extend(ret)  # append the row
df1 = pd.DataFrame(ret1)
# no quotes around the path expression, no header row, comma-separated (see the notes above)
df1.to_csv(file_path + file_name_y, sep=',', index=False, header=False)
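For larger inputs the row loop gets slow; the same result can be computed with vectorized pandas/numpy operations. This is a sketch of my own, not part of the original script, reusing the column names above:

# Vectorized alternative (sketch): equivalent to the loop above
quantity_cols = ['quantity%d_%d' % (i, i + 1) for i in range(20)]
section_cols = ['section%d_%d' % (i, i + 1) for i in range(20)]
qty = data_ctrip[quantity_cols].values
idx = qty.argmax(axis=1)  # position of the max night count in each row
sections = data_ctrip[section_cols].values[np.arange(len(data_ctrip)), idx]
bounds = pd.Series(sections).str.split('-', n=1, expand=True).astype(int)
df_fast = pd.DataFrame({'hotelid': data_ctrip['hotelid'].values,
                        'max_quantity': qty.max(axis=1),
                        'section_query_min': bounds[0].values,
                        'section_query_max': bounds[1].values},
                       columns=['hotelid', 'max_quantity',
                                'section_query_min', 'section_query_max'])
df_fast.to_csv(file_path + file_name_y, sep=',', index=False, header=False)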
Those are the steps for running a job with the bastion host and HIVE, and the problems you may run into along the way~