Keep Your Eyes on the Big Picture
I’ve been fascinated with the stock market since I was a little kid. There is certainly no shortage of data to analyze, and if you find an edge you can make some easy money. To stay on top of the market, I designed a dashboard that incorporates interesting option market order flow, price charts, chatter and fundamentals. Having it all in one place makes it easy to monitor market sentiment and find potential plays! In this article, I’ll describe the components of the dashboard and explain how I created it using Python and Plotly’s Dash.
Contents:
- Reviewing the Dashboard
- Sourcing the Data
- Reviewing the Dash Framework
- Designing the File Structure
- Creating the Function Files
- Adding Callbacks
- Final Thoughts and Complete Code
The full code and GitHub link are toward the bottom of the page if you’re already familiar with Dash.
If you’re completely new to Dash, I recommend starting here:
Reviewing the Dashboard
The dashboard is designed using Dash Bootstrap CSS and is fairly responsive by default! On startup, the dashboard will load data from Twitter and Reddit. These data feeds are used to monitor interesting option flow and market chatter/sentiment.

Beyond the data sources, the dashboard takes 3 initial inputs from a user:
Stock Ticker, Start Date, End Date

The Start Date and End Date are pre-populated with the maximum date range. When a ticker is entered, the dashboard pulls data from Yahoo! Finance and Market Watch to produce information about the company’s financials and price history. The price history data from Yahoo! Finance is used to produce three charts:
3-year daily chart, 5-day 1-minute chart, 1-day 1-minute chart

The 3-year daily chart can be adjusted with the Start Date and End Date fields, giving a little more flexibility with the price data.
Sourcing the Data
The dashboard pulls data from multiple sources. Most of the data is scraped from the web on the fly, but the Twitter data is also stored and read from an SQLite database to make refreshing and filtering much more performant.

Yahoo! Finance
Although Yahoo! Finance decommissioned their API a while back, the Python library yfinance offers a reliable, threaded, and Pythonic way to download historical market data for free!
pip install yfinance
I use Yahoo! Finance to pull price history and company information like the beta and sector.
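For a quick taste of the library, here is a minimal sketch; the ticker and period are arbitrary:

import yfinance as yf

msft = yf.Ticker("MSFT")
print(msft.info['sector'], msft.info['beta'])  # company info like sector and beta
hist = msft.history(period="1y")               # one year of daily price history
print(hist[['Open', 'High', 'Low', 'Close']].tail())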
Market Watch
Market Watch is a market and business news information website.

By scraping Market Watch, it is easy to pull financial information about a company going back five years. The data is scraped from the website using the Beautiful Soup library.
pip install beautifulsoup4
It helps to be somewhat familiar with HTML before getting into web scraping. The basics of web scraping are beyond the scope of this article.
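To give a flavor of how it works anyway, here is a minimal sketch; the HTML string is made up, but the class names mirror the ones scraped later:

from bs4 import BeautifulSoup

html_doc = "<tr><td class='rowTitle'>EPS (Basic)</td><td class='valueCell'>5.76</td></tr>"
soup = BeautifulSoup(html_doc, "html.parser")
title = soup.find('td', {'class': 'rowTitle'})  # locate a cell by its CSS class
print(title.text)  # EPS (Basic)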
Twitter
For this tutorial, you will need to register an app with Twitter to get API Keys. I use the Python library Tweepy to interact with Twitter. Connecting Tweepy to Twitter uses OAuth1. Check out this tutorial for getting started with Tweepy and the Twitter API if needed!
pip install tweepy
Twitter is used to download free options order flow data. Two users post free order flow data: SwaggyStocks and Unusual_whales. The premise behind watching option order flow is that big orders in the options market can indicate momentum in the underlying asset. Some people believe following big orders is following smart money. Just remember that even smart money can be wrong!
Reddit
I am using PRAW to connect to the Reddit API. A user account to Reddit is required to use the API. It is completely free and only requires an email address! Read this tutorial if you’re completely new to using Reddit and the Reddit API.
pip install praw
Reddit is used to scrape new posts from the subreddit WallStreetBets, a large community in which traders place high-risk/high-reward trades. It is useful for gauging market chatter and sentiment.
Dash Framework Refresher
Dash is a framework for Python written on top of Flask, Plotly.js, and React.js. Dash and Plotly are vast and powerful tools, and this is only a taste of what they can do! Check out the Dash community or Learn Python Dashboards for more examples and tutorials!
pip install dash
Dash apps are composed of a Layout and Callbacks:
Layout
The layout is made up of a tree of components that describe what the application looks like and how users experience the content.
Callbacks
Callbacks make the Dash apps interactive. Callbacks are Python functions that are automatically called whenever an input property changes.
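Here is a minimal sketch of the layout/callback pattern: a text input wired to a div through one callback. The component IDs are arbitrary:

import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)

# Layout: a text input and a div for the output
app.layout = html.Div([
    dcc.Input(id='my-input', type='text', value='MSFT'),
    html.Div(id='my-output')
])

# Callback: runs whenever the input's value property changes
@app.callback(Output('my-output', 'children'),
              [Input('my-input', 'value')])
def update_output(value):
    return f"You entered: {value}"

if __name__ == '__main__':
    app.run_server()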
Dash Bootstrap Components
To make it easier to design the app layout, I'm using Dash Bootstrap Components. Similar to how the dash-html-components library lets you write HTML using Python, the dash-bootstrap-components library lets you use Bootstrap's front-end components, which are styled by the Bootstrap CSS framework.
pip install dash-bootstrap-components
The responsive grid system in Bootstrap CSS and the convenient container wrappers allow for a lot of customization. Bootstrap's grid system uses a series of containers, rows, and 12 columns in which one can lay out and align content. These have been included in the dash-bootstrap-components library as the Container, Row, and Col components.
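For example, a sketch of a row with two columns:

import dash_bootstrap_components as dbc
import dash_html_components as html

layout = dbc.Container([
    dbc.Row([
        dbc.Col(html.Div("Left column"), width=8),   # takes 8 of the 12 grid columns
        dbc.Col(html.Div("Right column"), width=4),  # takes the remaining 4
    ])
])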
The File Structure
Although this looks like a lot of files, it isn't really that much code! It would be completely fine to keep all the code for this dashboard in one file. I personally like short files, so I put the functions into their own files and run index.py to start the app.

File config.py & Managing API Keys
This project uses API keys to get data from Reddit and Twitter. If you’re new to managing your keys, make sure to save them into a config.py file instead of hard-coding them in your app. API keys can be very valuable and must be protected. Add the config file to your gitignore file to prevent it from being pushed to your repo too!
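A minimal config.py might look like the sketch below. The variable names match the ones imported later in this article (r_cid, r_csec, r_uag for Reddit and t_conkey, t_consec, t_akey, t_asec for Twitter); the values are placeholders:

# config.py -- keep this file out of version control
r_cid = "YOUR_REDDIT_CLIENT_ID"
r_csec = "YOUR_REDDIT_CLIENT_SECRET"
r_uag = "YOUR_REDDIT_USER_AGENT"

t_conkey = "YOUR_TWITTER_CONSUMER_KEY"
t_consec = "YOUR_TWITTER_CONSUMER_SECRET"
t_akey = "YOUR_TWITTER_ACCESS_KEY"
t_asec = "YOUR_TWITTER_ACCESS_SECRET"

Adding the line config.py to your .gitignore keeps the keys out of the repository.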
File stocks.sqlite
The file stocks.sqlite is the database file used to store the tweet data. That makes it easier to stream into the app as new tweets are generated. The refresh occurs automatically thanks to Dash’s Interval component!
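Reading the stored tweets back into a DataFrame is a one-liner with pandas; this is the same pattern index.py uses later:

import sqlite3
import pandas as pd

conn = sqlite3.connect('stocks.sqlite')
flow = pd.read_sql("select datetime, text from tweets order by datetime desc", conn)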
Creating the Files
Finally on to the code! I’ll start with the simplest functions that pull the data and then end with creating the actual app file, index.py. I do my best to include comments in the code, as well as describe the overall functions.
Import dependencies
Start by importing the required libraries:
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
import dash_table
from dash.exceptions import PreventUpdate
import flask
from flask import Flask
import pandas as pd
import dateutil.relativedelta
from datetime import date
import datetime
import yfinance as yf
import numpy as np
import praw
import sqlite3
import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots
Create dash_utils.py
The file dash_utils.py contains helper functions for the dash components used in the app’s layout. Anyone who has used dash before knows the layout can get pretty long pretty fast, especially if using a lot of data tables like this dashboard! To reduce the repetitiveness of adding components and bloating the layout, I created functions that I can reuse to add components to the dash app:
- ticker_inputs
- make_table
- make_card
- make_item
ticker_inputs
The function ticker_inputs is used to return the components that allow the user to enter the stock ticker and select a Start Date and End Date.
def ticker_inputs(inputID, pickerID, MONTH_CUTTOFF):
    # calculate the current date
    currentDate = date.today()
    # calculate the past date for the max allowed date
    pastDate = currentDate - dateutil.relativedelta.relativedelta(months=MONTH_CUTTOFF)
    # return the layout components
    return html.Div([
        dcc.Input(id = inputID, type="text", placeholder="MSFT")
        , html.P(" ")
        , dcc.DatePickerRange(
            id = pickerID,
            min_date_allowed = pastDate,
            start_date = pastDate,
            #end_date = currentDate
        )])
Notice the function takes inputID and pickerID as arguments to use as component IDs. Component IDs must be unique and are used by the callbacks.
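In the layout below, it is called as ticker_inputs('ticker-input', 'date-picker', 36), allowing a date range of up to 36 months back.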
make_card
To make things look nice, I like using Cards! The function make_card is used to combine Dash Bootstrap components Card and Alert. Bootstrap’s cards provide a flexible content container and allow a fair amount of customization. Alert is used to easily add color and more style if desired.
def make_card(alert_message, color, cardbody, style_dict = None):
    return dbc.Card([ dbc.Alert(alert_message, color=color)
                    , dbc.CardBody(cardbody)
                    ], style = style_dict)
Notice the function takes in an alert message for the header, a color, a card body, and a style dictionary.
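For example, make_card("Open", "secondary", "123.45") would render a card titled Open with a static body (the value here is just illustrative); in the layout, tables and other components are passed in as the card body.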
make_item
The function make_item is used to build the price chart Accordion.

def make_item(button, cardbody, i):
    # this function makes the accordion items
    return dbc.Card([
        dbc.CardHeader(
            html.H2(
                dbc.Button(
                    button,
                    color="link",
                    id=f"group-{i}-toggle"))),
        dbc.Collapse(
            dbc.CardBody(cardbody),
            id=f"collapse-{i}")])
Notice the function takes button for the button name, cardbody for the card body, and i for the Dash Bootstrap Collapse component ID. The component IDs must be unique!
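For example, the chart callback later builds each accordion panel with a call like make_item("Daily Chart", dcc.Graph(figure = fig1), 1).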
make_table
The last of the dash_util functions is make_table. Since Dash DataTable’s have so many parameters, tweaking them all individually can be tedious. Therefore, I made a function!
def make_table(id, dataframe, lineHeight = '17px', page_size = 5):
return dash_table.DataTable(
id=id,
css=[{'selector': '.row', 'rule': 'margin: 0'}],
columns=[
{"name": i, "id": i} for i in dataframe.columns
],
style_header={
'backgroundColor': 'rgb(230, 230, 230)',
'fontWeight': 'bold'},
style_cell={'textAlign': 'left'},
style_data={
'whiteSpace': 'normal',
'height': 'auto',
'lineHeight': lineHeight
},
style_data_conditional=[
{
'if': {'row_index': 'odd'},
'backgroundColor': 'rgb(248, 248, 248)'
}
],
style_cell_conditional=[
{'if': {'column_id': 'title'},
'width': '130px'},
{'if': {'column_id': 'post'},
'width': '500px'},
{'if': {'column_id': 'datetime'},
'width': '130px'},
{'if': {'column_id': 'text'},
'width': '500px'}],
page_current=0,
page_size=page_size,
page_action='custom',
filter_action='custom',
filter_query='',
sort_action='custom',
sort_mode='multi',
sort_by=[]
) #end table
The only arguments needed for the function are id, a unique component ID, and dataframe, a pandas DataFrame. Review this tutorial for details on all the Dash DataTable parameters.
Create fin_report_data.py
This file contains the get_financial_report function used to scrape Market Watch and return financial data like EPS, EPS Growth, Net Income, and EBITDA. This function is a little complicated so I’ll explain it in chunks.
The function get_financial_report takes in the stock ticker and builds two URLs to scrape:
- Market Watch’s /financials
- Market Watch’s /financials/balance-sheet
def get_financial_report(ticker):
    # build URLs
    urlfinancials = 'https://www.marketwatch.com/investing/stock/'+ticker+'/financials'
    urlbalancesheet = 'https://www.marketwatch.com/investing/stock/'+ticker+'/financials/balance-sheet'
    # request the pages and parse them with Beautiful Soup
    text_soup_financials = BeautifulSoup(requests.get(urlfinancials).text,"html")
    text_soup_balancesheet = BeautifulSoup(requests.get(urlbalancesheet).text,"html")
Now that the web data is scraped, I want to find all the row titles. If the row title matches the value we want to use in the dashboard, I’ll save it to a list.

# build lists for the income statement
titlesfinancials = text_soup_financials.findAll('td', {'class': 'rowTitle'})
epslist = []
netincomelist = []
longtermdebtlist = []
interestexpenselist = []
ebitdalist = []

# load data into the lists if the row title is found
for title in titlesfinancials:
    if 'EPS (Basic)' in title.text:
        epslist.append([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
    if 'Net Income' in title.text:
        netincomelist.append([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
    if 'Interest Expense' in title.text:
        interestexpenselist.append([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
    if 'EBITDA' in title.text:
        ebitdalist.append([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])

# find the row titles for the balance sheet
titlesbalancesheet = text_soup_balancesheet.findAll('td', {'class': 'rowTitle'})
equitylist = []
for title in titlesbalancesheet:
    if 'Total Shareholders\' Equity' in title.text:
        equitylist.append([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
    if 'Long-Term Debt' in title.text:
        longtermdebtlist.append([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
Notice findNextSiblings (Beautiful Soup) is used to return the siblings of a Tag that match the given criteria and appear after it in the document. To clarify: since the leftmost item (rowTitle) is what I match on, findNextSiblings gets me to the values I want to save in the list.
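A tiny sketch of the pattern, with made-up HTML shaped like Market Watch's table rows:

from bs4 import BeautifulSoup

row = "<tr><td class='rowTitle'>Net Income</td><td class='valueCell'>10B</td><td class='valueCell'>12B</td></tr>"
soup = BeautifulSoup(row, "html.parser")
title = soup.find('td', {'class': 'rowTitle'})
values = [td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'})]
print(values)  # ['10B', '12B']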
Once the values are scraped, the helper function get_element is used to load the data.
#get the data from the income statement lists
#use helper function get_element
eps = get_element(epslist,0)
epsGrowth = get_element(epslist,1)
netIncome = get_element(netincomelist,0)
shareholderEquity = get_element(equitylist,0)
roa = get_element(equitylist,1)
longtermDebt = get_element(longtermdebtlist,0)
interestExpense = get_element(interestexpenselist,0)
ebitda = get_element(ebitdalist,0)
Once the values have been saved to the lists, transform them into a pandas DataFrame. Reset the dataframe index and return the dataframe!
# load all the data into a dataframe
fin_df = pd.DataFrame({'eps': eps, 'eps Growth': epsGrowth, 'net Income': netIncome,
                       'shareholder Equity': shareholderEquity, 'roa': roa,
                       'longterm Debt': longtermDebt, 'interest Expense': interestExpense,
                       'ebitda': ebitda}, index=range(date.today().year-5, date.today().year))
fin_df.reset_index(inplace=True)
return fin_df

# helper function
def get_element(list, element):
    try:
        return list[element]
    except:
        return '-'
Notice the helper function get_element returns a '-' if it cannot find an item in the list of scraped data.
Create reddit_data.py
The file reddit_data.py contains the functions to interact with the Reddit API through Praw. Import the dependencies and the API key from the config.py file, then use a function to transform the data into a data frame.
import pandas as pd
import praw
from config import r_cid, r_csec, r_uag

def get_reddit(cid= r_cid, csec= r_csec, uag= r_uag, subreddit='wallstreetbets'):
    # connect to reddit
    reddit = praw.Reddit(client_id= cid, client_secret= csec, user_agent= uag)
    # get the new reddit posts
    posts = reddit.subreddit(subreddit).new(limit=None)
    # load the posts into a pandas dataframe
    p = []
    for post in posts:
        p.append([post.title, post.score, post.selftext])
    posts_df = pd.DataFrame(p, columns=['title', 'score', 'post'])
    return posts_df
Notice I define the function get_reddit(), which takes in the API credentials and the subreddit name (wallstreetbets by default). Within the function, the data from Reddit is saved to the variable posts, unpacked into the list p, and then saved as a pandas DataFrame object named posts_df.
Create tweet_data.py
The file tweet_data.py contains the functions to interact with the Twitter API through Tweepy. Pulling historical tweets from users is a little complex since we want to save the results to SQLite. To make it less complex, I split it into two functions. The first is responsible for getting the Twitter data; the second is responsible for cleaning and saving it:
- get_all_tweets
- get_options_flow
The function get_all_tweets pulls as many historical tweets as possible from a user, up to a maximum of around 3,200.
def get_all_tweets(screen_name
                   , consumer_key = t_conkey
                   , consumer_secret = t_consec
                   , access_key = t_akey
                   , access_secret = t_asec
                   ):
    # Twitter only allows access to a user's most recent ~3,200 tweets with this method
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)
    # initialize a list to hold all the tweepy Tweets
    alltweets = []
    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name = screen_name, count=200)
    # save most recent tweets
    alltweets.extend(new_tweets)
    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1
    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name = screen_name, count=200, max_id=oldest)
        # save most recent tweets
        alltweets.extend(new_tweets)
        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1
    # unpack the tweets into a list of lists, then a dataframe
    outtweets = [[tweet.id_str, tweet.created_at, tweet.text] for tweet in alltweets]
    tweets_df = pd.DataFrame(outtweets, columns = ['time', 'datetime', 'text'])
    return tweets_df
Notice the function takes the Twitter API credentials and the username of the Twitter account I want to track. The function cycles through the historical tweets, adding them to a list. The list is then transformed into a pandas DataFrame object named tweets_df.
The function get_options_flow takes no arguments. It calls the get_all_tweets function, cleans the tweet data and saves it to the SQLite database so it can be called into the app frequently and automatically without impacting performance.
def get_options_flow():
    # connect to the sqlite database
    conn = sqlite3.connect('stocks.sqlite')
    # use get_all_tweets to pull the data from the twitter users
    ss = get_all_tweets(screen_name = "SwaggyStocks")
    uw = get_all_tweets(screen_name = "unusual_whales")
    # clean the text data
    ss['source'] = 'swaggyStocks'
    ss['text'] = hero.remove_urls(ss['text'])
    ss['text'] = [n.replace('$','') for n in ss['text']]
    # clean the text data
    uw['source'] = 'unusual_whales'
    uw['text'] = hero.remove_urls(uw['text'])
    uw['text'] = [n.replace('$','') for n in uw['text']]
    uw['text'] = [n.replace(':','') for n in uw['text']]
    uw['text'] = [n.replace('\n',' ') for n in uw['text']]
    uw['text'] = [n.replace('  ',' ') for n in uw['text']]  # collapse double spaces
    # concat the tweets into one dataframe
    tweets = pd.concat([ss, uw])
    # save the tweets to the sqlite database
    tweets.to_sql('tweets', conn, if_exists = 'replace')
    return print('done')
Notice the function uses Texthero and simple string replacements to clean the tweet text of URLs and special characters. Notice the two dataframes of tweets, ss and uw, are concatenated together and saved as one table in stocks.sqlite.
Create index.py
The index.py file consists of the code to instantiate the Dash app. It contains the layout and callbacks used to make the app interactive. This is the file I execute in the terminal with a command like $ python index.py.
It is finally time to put it all together and construct the Dash App! Assuming the dependencies have been imported, start by instantiating the Dash App and calling the data functions to load the Twitter and Reddit data.
# connect to the sqlite database
conn = sqlite3.connect('stocks.sqlite')
# instantiate the dash app server using flask for easier hosting
server = Flask(__name__)
app = dash.Dash(__name__, server = server, meta_tags=[{ "content": "width=device-width"}], external_stylesheets=[dbc.themes.BOOTSTRAP])
# used for dynamic callbacks
app.config.suppress_callback_exceptions = True
# get options flow from twitter
get_options_flow()
flow = pd.read_sql("select datetime, text from tweets order by datetime desc", conn)
# get reddit data
global dfr
dfr = get_reddit()
After instantiating the server and loading the data, create the layout. The layout I went with is fairly simple. The layout components are wrapped around each other to achieve the desired layout look. I use an html.Div component to wrap the bootstrap grid components dbc.Row and dbc.Col. I construct a layout organizing Rows and Columns within one another like so:

layout1 = html.Div([
dbc.Row([dbc.Col(make_card("Enter Ticker", "success", ticker_inputs('ticker-input', 'date-picker', 36)))]) #row 1
,dbc.Row([dbc.Col([make_card("Twitter Order Flow", 'primary', make_table('table-sorting-filtering2', flow, '17px', 10))])
,dbc.Col([make_card("Fin table ", "secondary", html.Div(id="fin-table"))])
])
, dbc.Row([make_card("select ticker", "warning", "select ticker")],id = 'cards') #row 2
, dbc.Row([
dbc.Col([
dbc.Row([make_card("Wallstreet Bets New Posts", 'primary'
,[html.P(html.Button('Refresh', id='refresh'))
, make_table('table-sorting-filtering', dfr, '17px', 4)])], justify = 'center')
])
,dbc.Col([dbc.Row([dbc.Alert("_Charts_", color="primary")], justify = 'center')
,dbc.Row(html.Div(id='x-vol-1'), justify = 'center')
, dcc.Interval(
id='interval-component',
interval=1*150000, # in milliseconds
n_intervals=0)
, dcc.Interval(
id='interval-component2',
interval=1*60000, # in milliseconds
n_intervals=0)
,dbc.Row([html.Div(id='tweets')])
])#end col
])#end row
]) #end div

app.layout = layout1
Notice two things: the helper functions and the Interval components.
Look at how the dash_utils functions are strung together to produce cards with tables inside. For example, look at the Twitter Order Flow card:
make_card("Twitter Order Flow", 'primary', make_table('table-sorting-filtering2', flow, '17px', 10))

Notice I pass the make_table function as cardbody in make_card(alert_message, color, cardbody, style_dict). That is how the tables appear inside the cards in the layout!
Dash core component dcc.Interval is used to automatically refresh the Twitter feed every minute or so. That is why there is no Refresh button like the Reddit data.
Adding Callbacks
Now that the layout is complete, I’ll make the app functional by adding the callbacks. There are seven callbacks. I’ll break them down as follows:
- Refreshing Twitter Data
- Loading the Company Info Cards
- Sorting and Filtering the Reddit and Twitter Tables
- Populating the Financial Report
- Populating the Charts
Refreshing Twitter Data
This callback uses the interval component to automatically execute the get_options_flow function every minute.
@app.callback(
    Output('tweets', 'children'),
    [Input('interval-component2', 'n_intervals')])
def new_tweets(n):
    get_options_flow()
    return html.P(f"Reloaded Tweets {n}")
Loading the Company Info Cards
This callback passes the input ticker to yfinance and returns Bootstrap Cards populated with company and historic price data.
@app.callback(Output('cards', 'children'),
              [Input('ticker-input', 'value')])
def refresh_cards(ticker):
    # default to MSFT before a ticker is entered
    if ticker is None:
        ticker = 'MSFT'
    TICKER = yf.Ticker(ticker.upper())
    cards = [ dbc.Col(make_card("Previous Close ", "secondary", TICKER.info['previousClose']))
            , dbc.Col(make_card("Open", "secondary", TICKER.info['open']))
            , dbc.Col(make_card("Sector", 'secondary', TICKER.info['sector']))
            , dbc.Col(make_card("Beta", 'secondary', TICKER.info['beta']))
            , dbc.Col(make_card("50d Avg Price", 'secondary', TICKER.info['fiftyDayAverage']))
            , dbc.Col(make_card("Avg 10d Vol", 'secondary', TICKER.info['averageVolume10days']))
            ] #end cards list
    return cards
Notice the ticker defaults to 'MSFT' before anything is entered and is set to upper case using .upper() for best results with yfinance and Market Watch.
Sorting and Filtering the Reddit and Twitter Tables
These callbacks are fairly similar so I’m only going to review one here. Check the full code at the end for both. The callbacks use mostly boiler plate code for Dash tables with sorting and filtering enabled.
The Reddit data callback takes in the n_clicks input from the dcc.Button component so the Refresh button can be used to reload the results.
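For reference, the table's filter row builds a filter_query string that the callback parses with the split_filter_part helper (shown in the complete code). A hypothetical query and how it is broken down:

# typing "contains GME" in the title column and "> 10" in the score column yields:
filter_query = "{title} contains GME && {score} > 10"
# split_filter_part('{title} contains GME') -> ('title', 'contains', 'GME')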
@app.callback(
    Output('table-sorting-filtering', 'data'),
    [Input('table-sorting-filtering', "page_current"),
     Input('table-sorting-filtering', "page_size"),
     Input('table-sorting-filtering', 'sort_by'),
     Input('table-sorting-filtering', 'filter_query'),
     Input('refresh', 'n_clicks')])
def update_table(page_current, page_size, sort_by, filter, n_clicks):
    filtering_expressions = filter.split(' && ')
    if n_clicks is None:
        raise PreventUpdate
    else:
        dff = get_reddit()
    for filter_part in filtering_expressions:
        col_name, operator, filter_value = split_filter_part(filter_part)
        if operator in ('eq', 'ne', 'lt', 'le', 'gt', 'ge'):
            # these operators match pandas series operator method names
            dff = dff.loc[getattr(dff[col_name], operator)(filter_value)]
        elif operator == 'contains':
            dff = dff.loc[dff[col_name].str.contains(filter_value)]
        elif operator == 'datestartswith':
            # this is a simplification of the front-end filtering logic,
            # only works with complete fields in standard format
            dff = dff.loc[dff[col_name].str.startswith(filter_value)]
    if len(sort_by):
        dff = dff.sort_values(
            [col['column_id'] for col in sort_by],
            ascending=[col['direction'] == 'asc' for col in sort_by],
            inplace=False)
    page = page_current
    size = page_size
    return dff.iloc[page * size: (page + 1) * size].to_dict('records')
Populating the Financial Report
This callback takes the ticker and passes it to the get_financial_report function. It returns a Dash Bootstrap table instead of using the make_table function.
@app.callback(Output('fin-table', 'children'),
              [Input('ticker-input', 'value')])
def fin_report(sym):
    sym = sym.upper()
    df = get_financial_report(sym)
    table = dbc.Table.from_dataframe(df, striped=True,
                                     bordered=True, hover=True)
    return table
Populating the Charts
All three charts are produced at once in the same callback, which takes the ticker, Start Date, and End Date as inputs. It also takes an interval input from dcc.Interval to refresh the chart data automatically. The graphs are generated using Plotly's candlestick charts for the authentic trader experience! The callback returns the chart accordion.

@app.callback(Output('x-vol-1', 'children'),
              [Input('ticker-input', 'value')
               , Input('date-picker', 'start_date')
               , Input('date-picker', 'end_date')
               , Input('interval-component', 'n_intervals')
               ])
def create_graph(ticker, startdate, enddate, n):
    ticker = ticker.upper()
    df1 = yf.download(ticker, startdate, enddate)
    df1.reset_index(inplace=True)
    fig1 = go.Figure(data=[go.Candlestick(x=df1['Date'],
                     open=df1['Open'], high=df1['High'],
                     low=df1['Low'], close=df1['Close'])])

    df2 = yf.download(ticker, period = "5d", interval = "1m")
    df2.reset_index(inplace=True)
    fig2 = go.Figure(data=[go.Candlestick(x=df2['Datetime'],
                     open=df2['Open'], high=df2['High'],
                     low=df2['Low'], close=df2['Close'])])

    df3 = yf.download(ticker, period = "1d", interval = "1m")
    df3.reset_index(inplace=True)
    fig3 = go.Figure(data=[go.Candlestick(x=df3['Datetime'],
                     open=df3['Open'], high=df3['High'],
                     low=df3['Low'], close=df3['Close'])])

    accordion = html.Div([make_item("Daily Chart", dcc.Graph(figure = fig1), 1)
                         , make_item("5d 1m Chart", dcc.Graph(figure = fig2), 2)
                         , make_item("1d 1m Chart", dcc.Graph(figure = fig3), 3)
                         ], className="accordion")
    return accordion
Congratulations, the dashboard is completed! You are on your way to finding an edge and dominating the stock market!
Final Thoughts and Complete Code
Although this seems like a lot of code to go through, using Dash is fairly easy once you understand the layout and callback patterns. Using functions makes putting together the layout a breeze and saves on repeating code. This app is a great place to start, but can easily be enhanced to include technical analysis, more company info, and predictive analytics. It might even be possible to set up a broker API and trade through it!
Thanks for reading. Check out my other articles if you’re interested in the stock market, programming and data science:
The Code
Find it on Github too!
dash_utils.py
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
import dash_table
import flask
from flask import Flask
import pandas as pd
import dateutil.relativedelta
from datetime import date
import datetime
import yfinance as yf
import numpy as np
import praw
import sqlite3
import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots

def make_table(id, dataframe, lineHeight = '17px', page_size = 5):
return dash_table.DataTable(
id=id,
css=[{'selector': '.row', 'rule': 'margin: 0'}],
columns=[
{"name": i, "id": i} for i in dataframe.columns
],
style_header={
'backgroundColor': 'rgb(230, 230, 230)',
'fontWeight': 'bold'},
style_cell={'textAlign': 'left'},
style_data={
'whiteSpace': 'normal',
'height': 'auto',
'lineHeight': lineHeight
},
# style_table = {'width':300},
style_data_conditional=[
{
'if': {'row_index': 'odd'},
'backgroundColor': 'rgb(248, 248, 248)'
}
],
style_cell_conditional=[
{'if': {'column_id': 'title'},
'width': '130px'},
{'if': {'column_id': 'post'},
'width': '500px'},
{'if': {'column_id': 'datetime'},
'width': '130px'},
{'if': {'column_id': 'text'},
'width': '500px'}],
page_current=0,
page_size=page_size,
page_action='custom',filter_action='custom',
filter_query='',sort_action='custom',
sort_mode='multi',
sort_by=[],
#dataframe.to_dict('records')
)

def make_card(alert_message, color, cardbody, style_dict = None):
return dbc.Card([ dbc.Alert(alert_message, color=color)
,dbc.CardBody(cardbody)
], style = style_dict) #end card

def ticker_inputs(inputID, pickerID, MONTH_CUTTOFF):
currentDate = date.today()
pastDate = currentDate - dateutil.relativedelta.relativedelta(months=MONTH_CUTTOFF)
return html.Div([
dcc.Input(id = inputID, type="text", placeholder="MSFT")
, html.P(" ")
, dcc.DatePickerRange(
id = pickerID,
min_date_allowed=pastDate,
#max_date_allowed=currentDate,
#initial_visible_month=dt(2017, 8, 5),
start_date = pastDate,
#end_date = currentDate
)])

def make_item(button, cardbody, i):
# we use this function to make the example items to avoid code duplication
return dbc.Card([
dbc.CardHeader(
html.H2(
dbc.Button(
button,
color="link",
id=f"group-{i}-toggle",
))
),
dbc.Collapse(
dbc.CardBody(cardbody),
id=f"collapse-{i}",
)])
reddit_data.py
import pandas as pd
import dateutil.relativedelta
from datetime import date
import datetime
import yfinance as yf
import numpy as np
import praw
import sqlite3
from config import r_cid, r_csec, r_uag

#return a dataframe for the newest reddit posts
def get_reddit(cid= r_cid, csec= r_csec, uag= r_uag, subreddit='wallstreetbets'):
#connect to reddit
reddit = praw.Reddit(client_id= cid, client_secret= csec, user_agent= uag)
#get the new reddit posts
posts = reddit.subreddit(subreddit).new(limit=None)
#load the posts into a pandas dataframe
p = []
for post in posts:
p.append([post.title, post.score, post.selftext])
posts_df = pd.DataFrame(p,columns=['title', 'score', 'post'])
return posts_df
tweet_data.py
import tweepy
import pandas as pd
import sqlite3
import json
import datetime
from datetime import date
import texthero as hero
import regex as re
import string
from config import t_conkey, t_consec, t_akey, t_asec

pd.set_option('display.max_colwidth', None)

def get_all_tweets(screen_name
,consumer_key = t_conkey
, consumer_secret= t_consec
, access_key= t_akey
, access_secret= t_asec
):
#Twitter only allows access to a user's most recent ~3,200 tweets with this method
#authorize twitter, initialize tweepy
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
#initialize a list to hold all the tweepy Tweets
alltweets = []
#make initial request for most recent tweets (200 is the maximum allowed count)
new_tweets = api.user_timeline(screen_name = screen_name,count=200)
#save most recent tweets
alltweets.extend(new_tweets)
#save the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
#keep grabbing tweets until there are no tweets left to grab
while len(new_tweets) > 0:
#print(f"getting tweets before {oldest}")
#all subsequent requests use the max_id param to prevent duplicates
new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest)
#save most recent tweets
alltweets.extend(new_tweets)
#update the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
outtweets = [[tweet.id_str, tweet.created_at, tweet.text] for tweet in alltweets]
tweets_df = pd.DataFrame(outtweets, columns = ['time', 'datetime', 'text'])
return tweets_df

def get_options_flow():
conn = sqlite3.connect('stocks.sqlite')
ss = get_all_tweets(screen_name ="SwaggyStocks")
uw = get_all_tweets(screen_name ="unusual_whales")
ss['source'] = 'swaggyStocks'
ss['text'] = hero.remove_urls(ss['text'])
ss['text'] = [n.replace('$','') for n in ss['text']]
uw['source'] = 'unusual_whales'
uw['text'] = hero.remove_urls(uw['text'])
uw['text'] = [n.replace('$','') for n in uw['text']]
uw['text'] = [n.replace(':','') for n in uw['text']]
uw['text'] = [n.replace('\n',' ') for n in uw['text']]
uw['text'] = [n.replace('  ',' ') for n in uw['text']]
tweets = pd.concat([ss, uw])
tweets.to_sql('tweets', conn, if_exists = 'replace')
return print('done')
fin_report_data.py
import pandas as pd
from bs4 import BeautifulSoup
import requests
from datetime import date

def get_financial_report(ticker):
urlfinancials = 'https://www.marketwatch.com/investing/stock/'+ticker+'/financials'
urlbalancesheet = 'https://www.marketwatch.com/investing/stock/'+ticker+'/financials/balance-sheet'

text_soup_financials = BeautifulSoup(requests.get(urlfinancials).text,"html") #read in
text_soup_balancesheet = BeautifulSoup(requests.get(urlbalancesheet).text,"html") #read in

# build lists for Income statement
titlesfinancials = text_soup_financials.findAll('td', {'class': 'rowTitle'})
epslist=[]
netincomelist = []
longtermdebtlist = []
interestexpenselist = []
ebitdalist= []

for title in titlesfinancials:
if 'EPS (Basic)' in title.text:
epslist.append ([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
if 'Net Income' in title.text:
netincomelist.append ([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
if 'Interest Expense' in title.text:
interestexpenselist.append ([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
if 'EBITDA' in title.text:
ebitdalist.append ([td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])

# find the table headers for the Balance sheet
titlesbalancesheet = text_soup_balancesheet.findAll('td', {'class': 'rowTitle'})
equitylist=[]
for title in titlesbalancesheet:
if 'Total Shareholders\' Equity' in title.text:
equitylist.append( [td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])
if 'Long-Term Debt' in title.text:
longtermdebtlist.append( [td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text])

#get the data from the income statement lists
#use helper function get_element
eps = get_element(epslist,0)
epsGrowth = get_element(epslist,1)
netIncome = get_element(netincomelist,0)
shareholderEquity = get_element(equitylist,0)
roa = get_element(equitylist,1)
longtermDebt = get_element(longtermdebtlist,0)
interestExpense = get_element(interestexpenselist,0)
ebitda = get_element(ebitdalist,0)

# load all the data into dataframe
fin_df= pd.DataFrame({'eps': eps,'eps Growth': epsGrowth,'net Income': netIncome,'shareholder Equity': shareholderEquity,'roa':
roa,'longterm Debt': longtermDebt,'interest Expense': interestExpense,'ebitda': ebitda},index=range(date.today().year-5,date.today().year))
fin_df.reset_index(inplace=True)
return fin_df

def get_element(list, element):
try:
return list[element]
except:
return '-'
index.py
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
import dash_table
from dash.exceptions import PreventUpdate
import flask
from flask import Flask
import pandas as pd
import dateutil.relativedelta
from datetime import date
import datetime
import yfinance as yf
import numpy as np
import praw
import sqlite3
import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots

from dash_utils import make_table, make_card, ticker_inputs, make_item
from reddit_data import get_reddit
from tweet_data import get_options_flow
from fin_report_data import get_financial_report

conn = sqlite3.connect('stocks.sqlite')

server = Flask(__name__)
app = dash.Dash(__name__, server = server, meta_tags=[{ "content": "width=device-width"}], external_stylesheets=[dbc.themes.BOOTSTRAP])
app.config.suppress_callback_exceptions = True

get_options_flow()
flow = pd.read_sql("select datetime, text from tweets order by datetime desc", conn)

global dfr
dfr = get_reddit()
layout1 = html.Div([
dbc.Row([dbc.Col(make_card("Enter Ticker", "success", ticker_inputs('ticker-input', 'date-picker', 36)))]) #row 1
,dbc.Row([dbc.Col([make_card("Twitter Order Flow", 'primary', make_table('table-sorting-filtering2', flow, '17px', 10))])
,dbc.Col([make_card("Fin table ", "secondary", html.Div(id="fin-table"))])
])
, dbc.Row([make_card("select ticker", "warning", "select ticker")],id = 'cards') #row 2
, dbc.Row([
dbc.Col([
dbc.Row([make_card("Wallstreet Bets New Posts"
, 'primary'
,[html.P(html.Button('Refresh', id='refresh'))
, make_table('table-sorting-filtering', dfr, '17px', 4)]
)], justify = 'center')
])
,dbc.Col([dbc.Row([dbc.Alert("__Charts__", color="primary")], justify = 'center')
,dbc.Row(html.Div(id='x-vol-1'), justify = 'center')
, dcc.Interval(
id='interval-component',
interval=1*150000, # in milliseconds
n_intervals=0)
, dcc.Interval(
id='interval-component2',
interval=1*60000, # in milliseconds
n_intervals=0)
,dbc.Row([html.Div(id='tweets')])
])#end col
])#end row
]) #end div

app.layout = layout1

operators = [['ge ', '>='],
['le ', '<='],
['lt ', '<'],
['gt ', '>'],
['ne ', '!='],
['eq ', '='],
['contains '],
['datestartswith ']]

def split_filter_part(filter_part):
for operator_type in operators:
for operator in operator_type:
if operator in filter_part:
name_part, value_part = filter_part.split(operator, 1)
name = name_part[name_part.find('{') + 1: name_part.rfind('}')]
value_part = value_part.strip()
v0 = value_part[0]
if (v0 == value_part[-1] and v0 in ("'", '"', '`')):
value = value_part[1: -1].replace('\\' + v0, v0)
else:
try:
value = float(value_part)
except ValueError:
value = value_part
# word operators need spaces after them in the filter string,
# but we don't want these later
return name, operator_type[0].strip(), value

return [None] * 3

@app.callback(Output('cards', 'children'),
[Input('ticker-input', 'value')])
def refresh_cards(ticker):
    # default to MSFT before a ticker is entered
    if ticker is None:
        ticker = 'MSFT'
    TICKER = yf.Ticker(ticker.upper())
cards = [ dbc.Col(make_card("Previous Close ", "secondary", TICKER.info['previousClose']))
, dbc.Col(make_card("Open", "secondary", TICKER.info['open']))
, dbc.Col(make_card("Sector", 'secondary', TICKER.info['sector']))
, dbc.Col(make_card("Beta", 'secondary', TICKER.info['beta']))
, dbc.Col(make_card("50d Avg Price", 'secondary', TICKER.info['fiftyDayAverage']))
, dbc.Col(make_card("Avg 10d Vol", 'secondary', TICKER.info['averageVolume10days']))
] #end cards list
return cards

@app.callback(
[Output(f"collapse-{i}", "is_open") for i in range(1, 4)],
[Input(f"group-{i}-toggle", "n_clicks") for i in range(1, 4)],
[State(f"collapse-{i}", "is_open") for i in range(1, 4)],
)
def toggle_accordion(n1, n2, n3, is_open1, is_open2, is_open3):
ctx = dash.callback_context
if not ctx.triggered:
return ""
else:
button_id = ctx.triggered[0]["prop_id"].split(".")[0]
if button_id == "group-1-toggle" and n1:
return not is_open1, False, False
elif button_id == "group-2-toggle" and n2:
return False, not is_open2, False
elif button_id == "group-3-toggle" and n3:
return False, False, not is_open3
return False, False, False

@app.callback(Output('x-vol-1', 'children'),
[Input('ticker-input', 'value')
, Input('date-picker', 'start_date')
, Input('date-picker', 'end_date')
, Input('interval-component', 'n_intervals')
])
def create_graph(ticker,startdate, enddate, n):
ticker = ticker.upper()
df1 = yf.download(ticker,startdate, enddate)
df1.reset_index(inplace=True)
fig1 = go.Figure(data=[go.Candlestick(x=df1['Date'],
open=df1['Open'], high=df1['High'],
low=df1['Low'], close=df1['Close'])
])
df2 = yf.download(ticker, period = "5d", interval = "1m")
df2.reset_index(inplace=True)
fig2 = go.Figure(data=[go.Candlestick(x=df2['Datetime'],
open=df2['Open'], high=df2['High'],
low=df2['Low'], close=df2['Close'])
])

df3 = yf.download(ticker, period = "1d", interval = "1m")
df3.reset_index(inplace=True)
fig3 = go.Figure(data=[go.Candlestick(x=df3['Datetime'],
open=df3['Open'], high=df3['High'],
low=df3['Low'], close=df3['Close'])
])
accordion = html.Div([make_item("Daily Chart", dcc.Graph(figure = fig1), 1 )
, make_item("5d 5m Chart",dcc.Graph( figure = fig2), 2)
, make_item("1d 1m Chart", dcc.Graph(figure = fig3), 3)
], className="accordion")
return accordion

@app.callback(
Output('tweets', 'children'),
[Input('interval-component2', 'n_intervals'),
])
def new_tweets(n):
get_options_flow()
return html.P(f"Reloaded Tweets {n}")@app.callback(
Output('table-sorting-filtering', 'data'),
[Input('table-sorting-filtering', "page_current"),
Input('table-sorting-filtering', "page_size"),
Input('table-sorting-filtering', 'sort_by'),
Input('table-sorting-filtering', 'filter_query'),
Input('refresh', 'n_clicks')])
def update_table(page_current, page_size, sort_by, filter, n_clicks):
filtering_expressions = filter.split(' && ')if n_clicks is None:
raise PreventUpdate
else:
dff = get_reddit()
for filter_part in filtering_expressions:
col_name, operator, filter_value = split_filter_part(filter_part)
if operator in ('eq', 'ne', 'lt', 'le', 'gt', 'ge'):
# these operators match pandas series operator method names
dff = dff.loc[getattr(dff[col_name], operator)(filter_value)]
elif operator == 'contains':
dff = dff.loc[dff[col_name].str.contains(filter_value)]
elif operator == 'datestartswith':
# this is a simplification of the front-end filtering logic,
# only works with complete fields in standard format
dff = dff.loc[dff[col_name].str.startswith(filter_value)]
if len(sort_by):
dff = dff.sort_values(
[col['column_id'] for col in sort_by],
ascending=[
col['direction'] == 'asc'
for col in sort_by
],
inplace=False)
page = page_current
size = page_size
return dff.iloc[page * size: (page + 1) * size].to_dict('records')

@app.callback(
Output('table-sorting-filtering2', 'data'),
[Input('table-sorting-filtering2', "page_current"),
Input('table-sorting-filtering2', "page_size"),
Input('table-sorting-filtering2', 'sort_by'),
Input('table-sorting-filtering2', 'filter_query'),
Input('interval-component', 'n_intervals')
])
def update_table2(page_current, page_size, sort_by, filter, n):
filtering_expressions = filter.split(' && ')
conn = sqlite3.connect('stocks.sqlite')
flow = pd.read_sql("select datetime, text, source from tweets order by datetime desc", conn)
dff = flow
for filter_part in filtering_expressions:
col_name, operator, filter_value = split_filter_part(filter_part)
if operator in ('eq', 'ne', 'lt', 'le', 'gt', 'ge'):
# these operators match pandas series operator method names
dff = dff.loc[getattr(dff[col_name], operator)(filter_value)]
elif operator == 'contains':
dff = dff.loc[dff[col_name].str.contains(filter_value)]
elif operator == 'datestartswith':
# this is a simplification of the front-end filtering logic,
# only works with complete fields in standard format
dff = dff.loc[dff[col_name].str.startswith(filter_value)]
if len(sort_by):
dff = dff.sort_values(
[col['column_id'] for col in sort_by],
ascending=[
col['direction'] == 'asc'
for col in sort_by
],
inplace=False
)
page = page_current
size = page_size
return dff.iloc[page * size: (page + 1) * size].to_dict('records')

@app.callback(Output('fin-table', 'children'),
[Input('ticker-input', 'value')])
def fin_report(sym):
sym = sym.upper()
df = get_financial_report(sym)
#table = make_table('table-sorting-filtering3', df, '20px',8)
table = dbc.Table.from_dataframe(df, striped=True, bordered=True, hover=True)
return table

if __name__ == '__main__':
app.run_server()
Thanks!
Translated from: https://medium.com/swlh/how-to-create-a-dashboard-to-dominate-the-stock-market-using-python-and-dash-c35a12108c93