A Step-by-Step Guide to Web Scraping in Python
As data scientists, we are always on the lookout for new data and information to analyze and manipulate. One of the main approaches to finding data right now is scraping the web for a particular query.
When we browse the internet, we come across a massive number of websites, and these websites display various kinds of data in the browser. If, for some reason, we want to use this data for a project or an ML algorithm, we could (but shouldn't) gather it manually, copying the sections we want and pasting them into a doc or CSV file.
Needless to say, that would be quite a tedious task. That's why most data scientists and developers go with web scraping using code. It's easier to write code to extract data from a hundred webpages than to do it by hand.
Web Scraping is the technique used by programmers to automate the process of finding and extracting data from the internet within a relatively short time.
The most important question when it comes to web scraping: is it legal?
Is web scraping legal?
Short answer: yes.
The more detailed answer: scraping publicly available data for non-commercial purposes was declared completely legal in late January 2020.
You might wonder, what does publicly available mean?
Publicly available information is information that anyone can see or find on the internet without needing special access. So, information on Wikipedia, social media, or in Google's search results are examples of publicly available data.
Now, social media is somewhat complicated, because parts of it are not publicly available, such as when a user sets their information to private. In this case, it is illegal to scrape that information.
One last thing: there's a difference between publicly available and copyrighted. For example, you can scrape YouTube for video titles, but you can't use the videos for commercial purposes because they are copyrighted.
How to scrape the web?
There are different programming languages that you can use to scrape the web, and within every programming language, there are different libraries to achieve the same goal.
So, what to use?
In this article, I will use Python, Requests, and BeautifulSoup to scrape some pages from Wikipedia.
To scrape and extract any information from the internet, you'll probably need to go through three stages: fetching the HTML, obtaining the HTML tree, and then extracting information from the tree.

We will use the Requests library to fetch the HTML code from a specific URL. Then, we will use BeautifulSoup to parse the HTML and extract from the tree, and finally, we will use pure Python to organize the data.
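At a high level, the whole pipeline is only a few lines of Python. Here is a minimal sketch (the Wikipedia URL is just a stand-in; any publicly available page works):

import requests as rq
from bs4 import BeautifulSoup as bs

page = rq.get('https://en.wikipedia.org/wiki/Web_scraping').text  # 1. fetch the HTML
soup = bs(page)                                                   # 2. obtain the HTML tree
print(soup.find('h1').text)                                       # 3. extract information from the tree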
Basic HTML
Before we start scraping, let's quickly review the basics of HTML. Everything in HTML is defined within tags. The most important tag is <HTML>, which means that the text that follows is HTML code.
In HTML, each opened tag must be closed. So, at the end of the HTML file, we need a closing tag </HTML>.

Different tags in HTML mean different things, and a webpage is represented using a combination of tags. Any text enclosed between an opening and closing tag is called the inner HTML text.
If we have multiple elements with the same tag, we might (actually, always) want to differentiate between them somehow. There are two ways to do that: using classes or ids. Ids are unique, which means we can't have two elements with the same id. Classes, on the other hand, are not; more than one element can have the same class.
Here are some HTML tags you will see a lot when scraping the web, for example: <div>, <p>, <a>, <span>, <img>, <h1> to <h6>, <ul>, <li>, <table>, <tr>, and <td>.

Basic Scraping
Awesome, now that we know the basics, let's start small and then build up!
Our first step is to install BeautifulSoup by typing the following in the command line.
pip install bs4
To get familiar with the basics of scraping, we will consider an example HTML snippet and learn how to use BeautifulSoup to explore it.
<HTML><HEAD><TITLE>My cool title</TITLE></HEAD><BODY><H1>This is a Header</H1><ul id="list" class="coolList"><li>item 1</li><li>item 2</li><li>item 3</li></ul></BODY>
</HTML>
BeautifulSoup doesn't fetch HTML from the web; it is, however, extremely good at extracting information from an HTML string.
In order to use the above HTML in Python, we will set it up as a string and then use different BeautifulSoup functions to explore it.
Note: if you’re using Jupyter Notebook to follow this article, you can type the following command to view HTML within the Notebook.
from IPython.core.display import display, HTML
display(HTML(some_html_str))
For example, the above HTML renders as a large header reading 'This is a Header' followed by a bulleted list of the three items.

Next, we need to feed this HTML to BeautifulSoup in order to generate the HTML tree. The HTML tree is a representation of the different levels of the HTML code; it shows the hierarchy of the code.
The HTML tree of the above code, sketched as a text outline, looks roughly like this:
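HTML
 ├── HEAD
 │    └── TITLE ("My cool title")
 └── BODY
      ├── H1 ("This is a Header")
      └── ul (id="list", class="coolList")
           ├── li ("item 1")
           ├── li ("item 2")
           └── li ("item 3")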

To generate the tree, we write
from bs4 import BeautifulSoup as bs

some_html_str = """
<HTML>
<HEAD>
<TITLE>My cool title</TITLE>
</HEAD><BODY>
<H1>This is a Header</H1>
<ul id="list" class="coolList">
<li>item 1</li>
<li>item 2</li>
<li>item 3</li>
</ul>
</BODY>
</HTML>
"""
#Feed the HTML to BeautifulSoup
soup = bs(some_html_str)
The variable soup now has the information extracted from the HTML string. We can use this variable to obtain information from the HTML tree.
BeautifulSoup has many functions that can be used to extract specific aspects of the HTML string. However, two functions are used the most: find and find_all.

The find function returns only the first occurrence of the search query, while find_all returns a list of all matches.
Say we are searching for all <h1> headers in the code.
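With the soup object we just built, that is a one-liner (the comment shows the expected output for our example HTML):

print(soup.find('h1'))   # <h1>This is a Header</h1>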

As you can see, the find function returned the <h1> element, tags and all. Often, we only want to extract the inner HTML text. To do that, we use .text.
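For our example, that would look something like this:

print(soup.find('h1').text)   # This is a Header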

That was simple because we only have one <h1> tag. But what if we want to look for list items (we have an unordered list with three items in our example)? We can't use find; if we do, we will only get the first item.
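For instance, with our example soup:

print(soup.find('li'))   # <li>item 1</li>  (only the first item)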

To find all the list items, we need to use find_all.
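Again with our example soup, something like:

print(soup.find_all('li'))   # [<li>item 1</li>, <li>item 2</li>, <li>item 3</li>]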

Okay, now that we have a list of items, let's answer two questions:
1- How to get the inner HTML of the list items?
To obtain the inner text only, we can’t use .text straight away, because now we have a list of elements and not just one. Hence, we need to iterate over the list and obtain the inner HTML of each list item.
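A simple list comprehension does the job:

inner_text = [item.text for item in soup.find_all('li')]
print(inner_text)   # ['item 1', 'item 2', 'item 3']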

2- What if we have multiple lists in the code?
If we have more than one list in the code (which is usually the case), we can be more precise when searching for elements. In our example, the list has id='list' and class='coolList'. We can use either or both of these with the find_all or find functions to be precise and get the information we want.

One thing to note here is that the return values of the find and find_all functions are BeautifulSoup objects, and those can be traversed further. So, we can treat them just like the object obtained directly from the HTML string.
Complete code for this section:
#Import needed libraries
from bs4 import BeautifulSoup as bs
import requests as rq
#HTML string
some_html_str = """
<HTML><HEAD><TITLE>My cool title</TITLE></HEAD><BODY><H1>This is a Header</H1><ul id="list" class="coolList"><li>item 1</li><li>item 2</li><li>item 3</li></ul>
</BODY>
</HTML>
"""
soup = bs(some_html_str)
#Get headers
print(soup.find('h1'))
print(soup.find('h1').text)
#Get all list items
inner_text = [item.text for item in soup.find_all('li')]
print(inner_text)
ourList = soup.find(attrs={"class":"coolList", "id":"list"})
print(ourList.find_all('li'))
We can traverse the HTML tree using other BeautifulSoup attributes, like children, parent, next, etc.
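A quick sketch with our example soup (note that the exact children returned depend on how the HTML string is formatted, since whitespace between tags shows up as text nodes):

ul = soup.find('ul')
print(ul.parent.name)        # body (the tag that contains our list)
print(list(ul.children))     # the <li> items (plus possible whitespace text nodes)
print(soup.find('li').next)  # 'item 1' (the node right after the first <li> tag)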

Scraping one webpage
Let’s consider a more realistic example, where we fetch the HTML from a URL and then use BeautifulSoup to extract patterns and data.
We will start by fetching one webpage. I love coffee, so let’s try fetching the Wikipedia page listing countries by coffee production and then plot the countries using Pygal.
To fetch the HTML we will use the Requests library and then pass the fetched HTML to BeautifulSoup.
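With Requests imported as rq and BeautifulSoup as bs (as in the earlier snippets), this takes only a couple of lines:

url = 'https://en.wikipedia.org/wiki/List_of_countries_by_coffee_production'
page = rq.get(url).text   # raw HTML of the page
soup = bs(page)           # HTML tree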

If we open this wiki page, we will find a big table with the countries and different measures of coffee production. We just want to extract the country names and the coffee production in tons.
To extract this information, we need to study the HTML of the page to know what to query. We can just highlight a country name, right-click, and choose inspect.

Through inspecting the page, we can see that the country names and the quantity are enclosed within a 'table' tag. Since it is the first table on the page, we can just use the find function to extract it.
However, extracting the table directly will give us all of the table's content, including the table header (the first row of the table) and the quantity in different measures.
So, we need to fine-tune our search. Let’s try it out with the top 10 countries.
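The same logic appears in the complete code below: roughly, we grab the first table, skip its header rows, and keep only the country name and the tonnage from each remaining row.

table = soup.find('table')
top_10_countries = []
for row in table.find_all('tr')[2:11]:
    #obtain only the country name and the quantity in tons
    temp = row.text.replace('\n\n', ' ').strip()
    temp_list = temp.split()
    top_10_countries.append((temp_list[0], temp_list[2]))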

Notice that to clean up the results, I used string manipulation to extract the information I want.
I can use this list to finally plot the top 10 countries using Pygal.


Complete code for this section:
#Import needed libraries
from bs4 import BeautifulSoup as bs
import requests as rq
import pygal
from IPython.display import display, HTML

#base_html: simple wrapper used to render pygal charts inline in Jupyter
#(not defined in the original article; assumed here to be a minimal template with a {rendered_chart} placeholder)
base_html = """<!DOCTYPE html><html><body><figure>{rendered_chart}</figure></body></html>"""

#Fetch HTML
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_coffee_production'
#Extract HTML tree
page = rq.get(url).text
soup = bs(page)

#Find countries and quantity
table = soup.find('table')
top_10_countries = []
for row in table.find_all('tr')[2:11]:
    #obtain only the country name and the quantity in tons
    temp = row.text.replace('\n\n', ' ').strip()
    temp_list = temp.split()
    top_10_countries.append((temp_list[0], temp_list[2]))

#Plot the top 10 countries
bar_chart = pygal.Bar(height=400)
[bar_chart.add(item[0], int(item[1].replace(',', ''))) for item in top_10_countries]
display(HTML(base_html.format(rendered_chart=bar_chart.render(is_unicode=True))))
Scraping multiple webpages
Wow, that was a lot! 😃
But we have yet to write code that scrapes multiple webpages.
For this section, we will scrape the wiki page with the 100 best books of all time, and then we will categorize these books based on their genre, trying to see if we can find a relation between genre and the list (which genre performed best).
The wiki page contains links to each of the 100 books as well as their authors. We want our code to navigate the list, go to each book's wiki page, extract info like genre, name, author, and publication year, and then store this info in a Python dictionary (you can store the data in a Pandas DataFrame as well).
So, to do this we need a couple of steps:
- Fetch the main URL HTML code.
- Feed that HTML to BeautifulSoup.
- Extract each book from the list and get the wiki link of each book.
- Obtain data for each book.
- Get all books data, clean, and plot final results.
Let’s get started…
Step #1: Fetch main URL HTML code
url = 'https://en.wikipedia.org/wiki/Time%27s_List_of_the_100_Best_Novels'
page = rq.get(url).text
Step #2: Feed that HTML to BeautifulSoup
soup = bs(page)
Step #3: Extract each book from the list and get the wiki link of each book
rows = soup.find('table').find_all('tr')[1:]
books_links = [row.find('a')['href'] for row in rows]
base_url = 'https://en.wikipedia.org'
books_urls = [base_url + link for link in books_links]
Step #4: Obtain data for each book
This is the lengthiest and most important step. We will first consider only one book; assume it's the first one in the list. If we open the book's wiki page, we will see the book's information enclosed in a table (the infobox) on the right side of the screen.
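For instance, to grab that infobox for the first book in our list (using the books_urls list from step #3), we could do something like:

book_soup = bs(rq.get(books_urls[0]).text)
book_table = book_soup.find('table', class_="infobox vcard")
print(book_table.find('caption').text)   # the book's title, if the infobox has a caption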

Going through the HTML we can see where everything is stored.
To make things easier and more efficient, I wrote custom functions to extract different pieces of information from the book's wiki page.
import re   #needed for extracting the year from the date string

def find_book_name(table):
    if table.find('caption'):
        name = table.find('caption')
        return name.text

def get_author(table):
    author_name = table.find(text='Author').next.text
    return author_name

def get_genre(table):
    if table.find(text='Genre'):
        genre = table.find(text='Genre').next.text
    else:
        genre = table.find(text='Subject').next.next.next.text
    return genre

def get_publishing_date(table):
    if table.find(text='Publication date'):
        date = table.find(text='Publication date').next.text
    else:
        date = table.find(text='Published').next.text
    pattern = re.compile(r'\d{4}')
    year = re.findall(pattern, date)[0]
    return int(year)

def get_pages_count(table):
    pages = table.find(text='Pages').next.text
    return int(pages)
Now that we have these cool functions, let's write a function that uses them; this will help us with the automation.
#helper assumed by the article (not shown there): fetch a wiki page and return its parsed tree
def parse_wiki_page(url):
    return bs(rq.get(url).text)

def get_book_info_robust(book_url):
    #To avoid breaking the code
    try:
        book_soup = parse_wiki_page(book_url)
        book_table = book_soup.find('table', class_="infobox vcard")
    except:
        print(f"Cannot parse table: {book_url}")
        return None
    book_info = {}
    #get info with custom functions
    values = ['Author', 'Book Name', 'Genre', 'Publication Year', 'Page Count']
    functions = [get_author, find_book_name, get_genre, get_publishing_date, get_pages_count]
    for val, func in zip(values, functions):
        try:
            book_info[val] = func(book_table)
        except:
            book_info[val] = None
    return book_info
In this function, I used the try..except format to avoid crashing if some of the book's info is missing.
Step #5: Get all books data, clean, and plot final results
We have all we need to automate the code and run it.
One last thing to note: it is legal to scrape Wikipedia; however, they don't like it when you scrape more than one page per second. So we will need to add pauses between each fetch to avoid overloading the server.
#to add pauses
import time

#to store books info
book_info_list = []

#loop over the books
for link in books_urls:
    #get book info
    book_info = get_book_info_robust(link)
    #if everything is correct and no error occurs
    if book_info:
        book_info_list.append(book_info)
    #pause a second between each book
    time.sleep(1)
Data collected! This will take about 100 seconds to finish, so feel free to do something else while you wait 😉
Finally, let’s clean the data, get the genre count, and plot the results.
#Collect different genres
genres = {}
for book in book_info_list:
    book_gen = book['Genre']
    if book_gen:
        if 'fiction' in book_gen or 'Fiction' in book_gen:
            book_gen = 'fiction'
    #count books in each genre
    if book_gen not in genres:
        genres[book_gen] = 1
    else:
        genres[book_gen] += 1
print(genres)
#Plot results
bar_chart = pygal.Bar(height=400)
[bar_chart.add(k,v) for k,v in genres.items()]
display(HTML(base_html.format(rendered_chart=bar_chart.render(is_unicode=True))))
And we are done!

I have to say, collected data is not always 100% accurate. As you can see in the plot, the longest bar belongs to the 'None' value, which means one of two things:
- Either the wiki page didn't include the book's genre.
- Or, the HTML of that specific book's page is structured differently from the rest.
That’s why after automating the data collection, we often go through the weird and unusual results and recheck them manually.
Conclusion
Web scraping is one of the essential skills a data scientist needs, and it can't get any easier than with Python, Requests, and BeautifulSoup.
We can never fully trust automation; sometimes we will need to go through the final results and manually recheck for abnormal information.
The full code for the books section:
#Import needed libraries
from bs4 import BeautifulSoup as bs
import requests as rq
import pygal
import time
import re
from IPython.display import display, HTML

#base_html: same pygal-rendering template as defined in the previous section

#Fetch the list page and collect the links to the individual books
url = 'https://en.wikipedia.org/wiki/Time%27s_List_of_the_100_Best_Novels'
page = rq.get(url).text
soup = bs(page)
rows = soup.find('table').find_all('tr')[1:]
books_links = [row.find('a')['href'] for row in rows]
base_url = 'https://en.wikipedia.org'
books_urls = [base_url + link for link in books_links]

#define functions for data collection
def find_book_name(table):
    if table.find('caption'):
        name = table.find('caption')
        return name.text

def get_author(table):
    author_name = table.find(text='Author').next.text
    return author_name

def get_genre(table):
    if table.find(text='Genre'):
        genre = table.find(text='Genre').next.text
    else:
        genre = table.find(text='Subject').next.next.next.text
    return genre

def get_publishing_date(table):
    if table.find(text='Publication date'):
        date = table.find(text='Publication date').next.text
    else:
        date = table.find(text='Published').next.text
    pattern = re.compile(r'\d{4}')
    year = re.findall(pattern, date)[0]
    return int(year)

def get_pages_count(table):
    pages = table.find(text='Pages').next.text
    return int(pages)

def parse_wiki_page(url):
    return bs(rq.get(url).text)

def get_book_info_robust(book_url):
    #To avoid breaking the code
    try:
        book_soup = parse_wiki_page(book_url)
        book_table = book_soup.find('table', class_="infobox vcard")
    except:
        print(f"Cannot parse table: {book_url}")
        return None
    book_info = {}
    #get info with custom functions
    values = ['Author', 'Book Name', 'Genre', 'Publication Year', 'Page Count']
    functions = [get_author, find_book_name, get_genre, get_publishing_date, get_pages_count]
    for val, func in zip(values, functions):
        try:
            book_info[val] = func(book_table)
        except:
            book_info[val] = None
    return book_info

#to store books info
book_info_list = []

#loop over the books
for link in books_urls:
    #get book info
    book_info = get_book_info_robust(link)
    #if everything is correct and no error occurs
    if book_info:
        book_info_list.append(book_info)
    #pause a second between each book
    time.sleep(1)

#Collect different genres
genres = {}
for book in book_info_list:
    book_gen = book['Genre']
    if book_gen:
        if 'fiction' in book_gen or 'Fiction' in book_gen:
            book_gen = 'fiction'
    #count books in each genre
    if book_gen not in genres:
        genres[book_gen] = 1
    else:
        genres[book_gen] += 1
print(genres)

#Plot results
bar_chart = pygal.Bar(height=400)
[bar_chart.add(k, v) for k, v in genres.items()]
display(HTML(base_html.format(rendered_chart=bar_chart.render(is_unicode=True))))
Original article: https://towardsdatascience.com/a-step-by-step-guide-to-web-scraping-in-python-5c4d9cef76e8