11. Grouped Deduplication (Keep the Latest / Earliest Record per Group)
Core idea:
Use the window function ROW_NUMBER() to rank the rows within each group by the desired order, then keep only the first-ranked row.
Typical problem: return each user's most recent login record.
Code template:
WITH ranked_logins AS (
    SELECT user_id,
           login_date,
           ROW_NUMBER() OVER (
               PARTITION BY user_id
               ORDER BY login_date DESC  -- DESC for the latest, ASC for the earliest
           ) AS rn
    FROM user_login
)
SELECT user_id, login_date
FROM ranked_logins
WHERE rn = 1;
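The template can be exercised end to end with SQLite through Python's sqlite3 module (window functions need SQLite 3.25+). The table layout follows the template; the sample rows are invented for illustration.

```python
import sqlite3

# In-memory database with a toy user_login table (invented sample data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_login (user_id INTEGER, login_date TEXT)")
conn.executemany(
    "INSERT INTO user_login VALUES (?, ?)",
    [(1, "2024-01-01"), (1, "2024-01-05"), (2, "2024-01-03"), (2, "2024-01-02")],
)

rows = conn.execute("""
    WITH ranked_logins AS (
        SELECT user_id, login_date,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id
                   ORDER BY login_date DESC  -- DESC keeps the latest login first
               ) AS rn
        FROM user_login
    )
    SELECT user_id, login_date FROM ranked_logins WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, '2024-01-05'), (2, '2024-01-03')]
```

ISO-formatted date strings sort correctly even as TEXT, which is why the ORDER BY works without a date type.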
12. Gap Calculation (Time Between Events)
Core idea:
Use the window functions LAG() / LEAD() to fetch the timestamp of the adjacent row, then compute the difference.
Typical problem: compute the interval, in minutes, between a user's consecutive logins.
Code template:
SELECT user_id,
       login_date,
       TIMESTAMPDIFF(MINUTE,
                     LAG(login_date) OVER (PARTITION BY user_id ORDER BY login_date),
                     login_date) AS interval_minutes
FROM user_login
ORDER BY user_id, login_date;
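TIMESTAMPDIFF is MySQL-specific; a minimal SQLite sketch of the same LAG() pattern can use julianday() differences instead (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_login (user_id INTEGER, login_date TEXT)")
conn.executemany("INSERT INTO user_login VALUES (?, ?)", [
    (1, "2024-01-01 10:00:00"),
    (1, "2024-01-01 10:45:00"),
    (1, "2024-01-01 12:00:00"),
])

rows = conn.execute("""
    SELECT user_id, login_date,
           -- julianday() returns fractional days; * 24 * 60 converts to minutes
           CAST(ROUND((julianday(login_date) -
                 julianday(LAG(login_date) OVER (
                     PARTITION BY user_id ORDER BY login_date))) * 24 * 60)
                AS INTEGER) AS interval_minutes
    FROM user_login
    ORDER BY user_id, login_date
""").fetchall()
print(rows)
```

The first row of each user has no predecessor, so LAG() yields NULL and the interval comes back as None.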
13. Row / Column Transposition (Pivot and Unpivot)
Core idea:
- Rows to columns (pivot): combine CASE WHEN with an aggregate function such as MAX or SUM.
- Columns to rows (unpivot): expand the columns with LATERAL VIEW (Hive) or UNNEST.
Typical problem: turn each user's monthly active-day counts into one column per month (pivot).
Code template:
SELECT user_id,
       MAX(CASE WHEN month = 1 THEN active_days ELSE 0 END) AS jan_active,
       MAX(CASE WHEN month = 2 THEN active_days ELSE 0 END) AS feb_active,
       ...
FROM (
    SELECT user_id,
           MONTH(login_date) AS month,
           COUNT(DISTINCT login_date) AS active_days
    FROM user_login
    GROUP BY user_id, MONTH(login_date)
) t
GROUP BY user_id;
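A runnable SQLite sketch of the same pivot, with MONTH() replaced by SQLite's strftime('%m', ...) and invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_login (user_id INTEGER, login_date TEXT)")
conn.executemany("INSERT INTO user_login VALUES (?, ?)", [
    (1, "2024-01-03"), (1, "2024-01-09"), (1, "2024-02-14"),
    (2, "2024-02-01"),
])

rows = conn.execute("""
    SELECT user_id,
           MAX(CASE WHEN month = 1 THEN active_days ELSE 0 END) AS jan_active,
           MAX(CASE WHEN month = 2 THEN active_days ELSE 0 END) AS feb_active
    FROM (
        SELECT user_id,
               CAST(strftime('%m', login_date) AS INTEGER) AS month,
               COUNT(DISTINCT login_date) AS active_days
        FROM user_login
        GROUP BY user_id, month
    ) t
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 2, 1), (2, 0, 1)]
```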
14. Traffic Path Analysis (Funnel Model)
Core idea:
Flag whether each user reached each stage (e.g. with MAX(CASE WHEN ...)), then aggregate the per-stage user counts and divide adjacent stages to get each step's conversion rate.
Typical problem: compute the register → browse → order funnel conversion rates.
Code template:
WITH funnel_stages AS (
    SELECT user_id,
           MAX(CASE WHEN event = 'register' THEN 1 ELSE 0 END) AS registered,
           MAX(CASE WHEN event = 'browse'   THEN 1 ELSE 0 END) AS browsed,
           MAX(CASE WHEN event = 'order'    THEN 1 ELSE 0 END) AS ordered
    FROM user_events
    GROUP BY user_id
)
SELECT SUM(registered) AS total_registrants,
       SUM(browsed) / SUM(registered) AS reg_to_browse_rate,
       SUM(ordered) / SUM(browsed)    AS browse_to_order_rate
FROM funnel_stages;
15. String Processing (Extraction, Replacement, Splitting)
Core idea:
Use string functions such as SUBSTRING, REGEXP_EXTRACT, and REPLACE to handle semi-structured text data.
Typical problem: extract the domain from a URL (e.g. https://www.example.com/path → example.com).
Code template:
SELECT url,
       REGEXP_EXTRACT(url, 'https?://(www\\.)?([^/]+)', 2) AS domain
FROM urls;
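REGEXP_EXTRACT with a group index is Hive/Spark-style; a Python re equivalent of the same pattern (the function name here is illustrative):

```python
import re

# Same pattern as the SQL template; group 2 is the host without "www.".
DOMAIN_RE = re.compile(r"https?://(www\.)?([^/]+)")

def extract_domain(url):
    m = DOMAIN_RE.search(url)
    return m.group(2) if m else None

print(extract_domain("https://www.example.com/path"))  # example.com
print(extract_domain("http://blog.example.org"))       # blog.example.org
```

Note that this keeps subdomains other than www; strip further if only the registrable domain is wanted.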
16. Recursive Queries (Hierarchical Data)
Core idea:
Use a WITH RECURSIVE common table expression to walk tree-structured data such as department or category hierarchies.
Typical problem: list every department together with the path to it from the root.
Code template (MySQL 8.0+ example):
WITH RECURSIVE dept_hierarchy AS (
    -- Anchor member: the root department(s)
    SELECT dept_id, parent_id, dept_name,
           CAST(dept_name AS CHAR(100)) AS path
    FROM departments
    WHERE parent_id IS NULL  -- root-node condition
    UNION ALL
    -- Recursive member: join in the child departments
    SELECT d.dept_id, d.parent_id, d.dept_name,
           CONCAT(h.path, ' → ', d.dept_name)
    FROM departments d
    JOIN dept_hierarchy h ON d.parent_id = h.dept_id
)
SELECT * FROM dept_hierarchy;
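SQLite supports WITH RECURSIVE as well, so the hierarchy walk can be tried directly (invented three-level department data; '||' replaces CONCAT and an ASCII arrow is used in the path):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE departments (dept_id INTEGER, parent_id INTEGER, dept_name TEXT)"
)
conn.executemany("INSERT INTO departments VALUES (?, ?, ?)", [
    (1, None, "HQ"),
    (2, 1, "Engineering"),
    (3, 2, "Data Platform"),
])

rows = conn.execute("""
    WITH RECURSIVE dept_hierarchy AS (
        -- Anchor: the root department
        SELECT dept_id, parent_id, dept_name, dept_name AS path
        FROM departments WHERE parent_id IS NULL
        UNION ALL
        -- Recursive step: append each child to its parent's path
        SELECT d.dept_id, d.parent_id, d.dept_name,
               h.path || ' -> ' || d.dept_name
        FROM departments d
        JOIN dept_hierarchy h ON d.parent_id = h.dept_id
    )
    SELECT dept_id, path FROM dept_hierarchy ORDER BY dept_id
""").fetchall()
print(rows)
```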
17. Session Analysis (Splitting User Behavior into Sessions)
Core idea:
Split an event stream into sessions by time gap: a gap above a threshold (e.g. 30 minutes of inactivity) starts a new session.
Typical problem: compute each user's number of sessions and each session's duration.
Code template:
WITH gaps AS (
    SELECT user_id,
           event_time,
           -- Flag the start of a new session: gap above 30 minutes.
           -- Window functions cannot be nested, so the LAG() must be
           -- computed in its own step before the running SUM below.
           CASE WHEN TIMESTAMPDIFF(MINUTE,
                                   LAG(event_time) OVER (PARTITION BY user_id ORDER BY event_time),
                                   event_time) > 30
                THEN 1 ELSE 0 END AS is_new_session
    FROM user_events
),
session_data AS (
    SELECT user_id,
           event_time,
           -- A running sum of the flags yields a per-user session id
           SUM(is_new_session) OVER (PARTITION BY user_id ORDER BY event_time) AS session_id
    FROM gaps
)
SELECT user_id,
       session_id,
       MIN(event_time) AS session_start,
       MAX(event_time) AS session_end,
       TIMESTAMPDIFF(MINUTE, MIN(event_time), MAX(event_time)) AS session_duration
FROM session_data
GROUP BY user_id, session_id;
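A runnable SQLite sketch of the gap-based session split (invented events; julianday() stands in for TIMESTAMPDIFF). Because window functions cannot be nested, the LAG() flag is computed in a first CTE and its running sum in a second:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_events (user_id INTEGER, event_time TEXT)")
conn.executemany("INSERT INTO user_events VALUES (?, ?)", [
    (1, "2024-01-01 09:00:00"),
    (1, "2024-01-01 09:10:00"),  # 10-minute gap -> same session
    (1, "2024-01-01 10:00:00"),  # 50-minute gap -> new session
])

rows = conn.execute("""
    WITH flagged AS (
        SELECT user_id, event_time,
               CASE WHEN (julianday(event_time) -
                          julianday(LAG(event_time) OVER
                              (PARTITION BY user_id ORDER BY event_time))) * 24 * 60 > 30
                    THEN 1 ELSE 0 END AS is_new_session
        FROM user_events
    ),
    session_data AS (
        SELECT user_id, event_time,
               SUM(is_new_session) OVER
                   (PARTITION BY user_id ORDER BY event_time) AS session_id
        FROM flagged
    )
    SELECT user_id, session_id,
           MIN(event_time) AS session_start,
           MAX(event_time) AS session_end,
           CAST(ROUND((julianday(MAX(event_time)) - julianday(MIN(event_time)))
                      * 24 * 60) AS INTEGER) AS session_duration
    FROM session_data
    GROUP BY user_id, session_id
    ORDER BY user_id, session_id
""").fetchall()
print(rows)  # two sessions: 10 minutes long, then 0 minutes (single event)
```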
18. Advanced Year-over-Year / Month-over-Month (Cross-Year Comparison)
Core idea:
Use the window function LAG() to line each month up with the same month of the previous year, then compute the year-over-year growth rate.
Typical problem: for each month of 2023, compute the sales growth rate versus the same month of 2022.
Code template:
WITH monthly_sales AS (
    SELECT YEAR(sale_date) AS year,
           MONTH(sale_date) AS month,
           SUM(amount) AS sales
    FROM sales
    GROUP BY YEAR(sale_date), MONTH(sale_date)
),
with_prior AS (
    -- Compute LAG() before filtering: WHERE is evaluated before window
    -- functions, so filtering to 2023 in the same query would hide the
    -- 2022 rows LAG() needs. LAG(sales, 12) also assumes every month is
    -- present with no gaps.
    SELECT year, month, sales,
           LAG(sales, 12) OVER (ORDER BY year, month) AS sales_last_year
    FROM monthly_sales
)
SELECT year,
       month,
       sales,
       sales_last_year,
       (sales - sales_last_year) / sales_last_year AS yoy_growth
FROM with_prior
WHERE year = 2023;
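A compact SQLite sketch of the pattern with invented pre-aggregated figures. To keep the data small, only two months per year are loaded, so the LAG offset is 2 here instead of 12; the point being demonstrated is that LAG() runs in its own CTE before the year filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_sales (year INTEGER, month INTEGER, sales REAL)")
conn.executemany("INSERT INTO monthly_sales VALUES (?, ?, ?)", [
    (2022, 1, 100.0), (2022, 2, 200.0),
    (2023, 1, 150.0), (2023, 2, 240.0),
])

rows = conn.execute("""
    WITH with_prior AS (
        -- LAG offset = months per year in the data (2 here, 12 in real data)
        SELECT year, month, sales,
               LAG(sales, 2) OVER (ORDER BY year, month) AS sales_last_year
        FROM monthly_sales
    )
    SELECT year, month,
           (sales - sales_last_year) / sales_last_year AS yoy_growth
    FROM with_prior
    WHERE year = 2023
    ORDER BY month
""").fetchall()
print(rows)  # January +50%, February +20%
```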
19. Sampling (Random Subsets of Data)
Core idea:
Use RAND() to generate a random number per row and filter on it in WHERE (e.g. keep rows where the value is below 0.10 for a ~10% sample).
Typical problem: draw a random ~5% sample from the orders table for analysis.
Code template:
SELECT *
FROM orders
WHERE RAND() < 0.05;  -- keeps each row with probability 5%, so the sample size varies
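The same idea in plain Python, to make the Bernoulli-sampling behavior explicit (the helper name is illustrative): each row is kept independently with probability p, so only the expected sample size is n * p, not the exact size.

```python
import random

def sample_rows(rows, p, seed=None):
    """Keep each row independently with probability p (Bernoulli sampling),
    the same semantics as `WHERE RAND() < p`."""
    rng = random.Random(seed)  # seed only to make the demo repeatable
    return [row for row in rows if rng.random() < p]

orders = list(range(10_000))
sample = sample_rows(orders, 0.05, seed=42)
print(len(sample))  # close to 500, varies with the seed
```

For an exact-size sample, random.sample(orders, 500) (or the engine's TABLESAMPLE, where available) is the alternative.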
20. Pivot-Table Reports (Multi-Dimensional Cross Tabulation)
Core idea:
Combine GROUP BY with aggregate functions to cross-tabulate several dimensions (e.g. time, region, category).
Typical problem: by year and region, report sales per category and each category's share.
Code template:
WITH category_sales AS (
    SELECT YEAR(sale_date) AS year,
           region,
           category,
           SUM(amount) AS sales
    FROM sales
    GROUP BY YEAR(sale_date), region, category
)
SELECT year,
       region,
       SUM(CASE WHEN category = 'A' THEN sales ELSE 0 END) AS cat_A_sales,
       SUM(CASE WHEN category = 'B' THEN sales ELSE 0 END) AS cat_B_sales,
       SUM(sales) AS total_sales,
       -- Share of each category within the (year, region) cell; dividing the
       -- cell total by itself would always be 1, so divide per category instead
       SUM(CASE WHEN category = 'A' THEN sales ELSE 0 END) / SUM(sales) AS cat_A_ratio,
       SUM(CASE WHEN category = 'B' THEN sales ELSE 0 END) / SUM(sales) AS cat_B_ratio
FROM category_sales
GROUP BY year, region;
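A SQLite sketch of the cross tabulation with invented sales rows (strftime('%Y', ...) replaces YEAR()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (sale_date TEXT, region TEXT, category TEXT, amount REAL)"
)
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("2023-01-10", "East", "A", 60.0),
    ("2023-02-05", "East", "B", 40.0),
    ("2023-03-01", "West", "A", 10.0),
])

rows = conn.execute("""
    WITH category_sales AS (
        SELECT CAST(strftime('%Y', sale_date) AS INTEGER) AS year,
               region, category, SUM(amount) AS sales
        FROM sales
        GROUP BY year, region, category
    )
    SELECT year, region,
           SUM(CASE WHEN category = 'A' THEN sales ELSE 0 END) AS cat_A_sales,
           SUM(CASE WHEN category = 'B' THEN sales ELSE 0 END) AS cat_B_sales,
           -- category A's share of the (year, region) cell total
           SUM(CASE WHEN category = 'A' THEN sales ELSE 0 END) / SUM(sales) AS cat_A_ratio
    FROM category_sales
    GROUP BY year, region
    ORDER BY region
""").fetchall()
print(rows)  # East: A=60, B=40, A share 0.6; West: A only, share 1.0
```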