Problem description: based on the provided user behavior data, contestants need to analyze how user behavior features match the ad content and accurately predict clicks on the ads in the test set; submissions are scored by AUC.
Score: 0.6120, rank 60+.
I tried quite a few models without managing to improve the score, and I'm curious how the top finishers wrote their code.
Sharing my approach below:
Feature engineering
Time features are a core factor in most ad click prediction tasks: user behavior differs a lot across times of day (for example, evenings are prime time for NetEase Cloud Music). From the exposure time I extracted week, hour, hour_m, cos_hour, and day_of_week features, split the day into four periods (morning, afternoon, evening, night), and added a working-hours flag.
# Imports used throughout the snippets below.
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier

data['exposure_time'] = pd.to_datetime(data['exposure_time'])
data['week'] = data['exposure_time'].dt.isocalendar().week
data['hour'] = data['exposure_time'].dt.hour
data['hour_m'] = data['hour'] + data['exposure_time'].dt.minute / 60
data['cos_hour'] = np.cos(2 * np.pi * data['hour_m'] / 24)
data['day_of_week'] = data['exposure_time'].dt.dayofweek

def get_time_period(hour):
    if 6 <= hour < 12:
        return 'morning'
    elif 12 <= hour < 18:
        return 'afternoon'
    elif 18 <= hour < 24:
        return 'evening'
    else:
        return 'night'
data['time_period'] = data['hour'].apply(get_time_period)
data['is_work_time'] = data['hour'].apply(lambda x: 1 if 9 <= x < 17 else 0)
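One caveat about the cyclical encoding: cos_hour alone maps hours that mirror each other (for example 06:00 and 18:00, both giving cos = 0) to the same value. A sine counterpart removes that ambiguity; it was not part of the original feature list, so treat the line below as an optional addition.
# Optional companion to cos_hour (not in the original feature set): together the
# sine and cosine place every hour at a unique point on the unit circle.
data['sin_hour'] = np.sin(2 * np.pi * data['hour_m'] / 24)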
Beyond that, I added two new features.
purchase_efficiency: purchase efficiency, i.e. purchase history relative to activity score.
ad_quality_score: ad quality, i.e. advertiser score weighted by historical CTR.
data['purchase_efficiency'] = data['purchase_history'] / (data['activity_score'] + 1e-6)
data['ad_quality_score'] = data['advertiser_score'] * data['historical_ctr']
Occupation, region, ad category, and similar columns were encoded with LabelEncoder.
label_encoders = {}
for col in ['occupation', 'category', 'material_type', 'region', 'device', 'time_period']:
    le = LabelEncoder()
    data[col] = le.fit_transform(data[col])
    label_encoders[col] = le
I also applied frequency encoding to occupation, region, device, and similar columns to capture how popular each category value is.
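The frequency-encoding step itself is not shown in the original snippet, so here is a minimal sketch, under the assumption that it covers the columns named above and keeps the raw columns alongside new *_freq features:
# Minimal frequency-encoding sketch (column list assumed from the description above):
# each category value is mapped to its relative frequency in the data.
for col in ['occupation', 'region', 'device']:
    freq = data[col].value_counts(normalize=True)
    data[col + '_freq'] = data[col].map(freq)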
I created three interaction features: occupation × ad category, device × ad category, and region × material type.
data['occupation_category'] = data['occupation'].astype(str) + '_' + data['category'].astype(str)
data['region_material_type'] = data['region'].astype(str) + '_' + data['material_type'].astype(str)
data['device_category'] = data['device'].astype(str) + '_' + data['category'].astype(str)
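At this point the interaction columns are plain strings, and the LabelEncoder loop above ran before they existed, so they still need their own encoding before training. A minimal sketch of one way to do it (my assumption, not code from the original post):
# Label-encode the string-valued interaction features so LightGBM can consume them
# (assumed step; the original post does not show how these columns were encoded).
for col in ['occupation_category', 'region_material_type', 'device_category']:
    le = LabelEncoder()
    data[col] = le.fit_transform(data[col])
    label_encoders[col] = le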
purchase_history and activity_score were binned to reduce sensitivity to outliers.
bins_purchase = [0, 1, 5, 10, 20, 50, 100]
labels_purchase = [0, 1, 2, 3, 4, 5]
data['purchase_history_bin'] = pd.cut(data['purchase_history'], bins=bins_purchase, labels=labels_purchase, include_lowest=True)

bins_activity = [0, 10, 20, 30, 40, 50, 100]
labels_activity = [0, 1, 2, 3, 4, 5]
data['activity_score_bin'] = pd.cut(data['activity_score'], bins=bins_activity, labels=labels_activity, include_lowest=True)
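Note that pd.cut with labels returns a pandas category dtype. LightGBM's sklearn wrapper accepts category columns as-is (treating them as unordered categoricals); converting them to integers instead preserves the ordinal meaning of the bins. A small sketch of the integer option, as an assumption about how the bins were fed to the model:
# Convert the binned features to ordinal integer codes; values falling outside the
# bin edges become -1 (an assumed post-processing step, not shown in the original).
data['purchase_history_bin'] = data['purchase_history_bin'].cat.codes
data['activity_score_bin'] = data['activity_score_bin'].cat.codes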
Model parameters
A LightGBM model was used for training.
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'num_leaves': 63,
    'learning_rate': 0.01,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': -1,
    'n_estimators': 5000,
    'n_jobs': -1
}
StratifiedKFold cross-validation was used so that each fold keeps a similar ratio of positive and negative samples. Within each fold, a LightGBM model is trained and the fold's AUC is computed.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
oof_preds = np.zeros(len(df_train))
test_preds = np.zeros(len(df_test))
auc_scores = []

for fold, (train_idx, val_idx) in enumerate(skf.split(df_train, df_train[label])):
    X, X_val = df_train[feats].iloc[train_idx], df_train[feats].iloc[val_idx]
    y, y_val = df_train[label].iloc[train_idx], df_train[label].iloc[val_idx]
    model = LGBMClassifier(**params)
    model.fit(X, y, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=200)
    val_pred = model.predict_proba(X_val)[:, 1]
    auc = roc_auc_score(y_val, val_pred)
    auc_scores.append(auc)
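The loop above stops at collecting per-fold AUC scores, so the oof_preds and test_preds arrays it declares are never filled. (Also note that with LightGBM >= 4.0 the early_stopping_rounds and verbose arguments of fit move into callbacks.) Below is a minimal sketch of how the arrays are typically populated and summarized, assuming df_test, feats, and label are defined as above; this is my completion, not the original code:
    # Inside the fold loop, right after val_pred is computed:
    oof_preds[val_idx] = val_pred                                            # out-of-fold predictions
    test_preds += model.predict_proba(df_test[feats])[:, 1] / skf.n_splits   # averaged test predictions

# After the loop: summarize validation performance.
print('mean fold AUC:', np.mean(auc_scores))
print('OOF AUC:', roc_auc_score(df_train[label], oof_preds))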