<p>I scraped a dozen or so incomplete item types, cleaned them up, and stored them in separate tables in SQL. I could write statements for each item individually, but managing the creation of the various lists/dataframes/tables programmatically seems tidier.</p>
<ul>
<li>Unfortunately, when I try to reference a Scrapy item by looking up an entry in the dict, Python reads it as a string rather than as a type or class.</li>
<li>Similarly, when I try to reference a list by name, Python still sees a string and won't let me use .append().</li>
</ul>
<p>Any help getting Python to read these strings as class or list references would be greatly appreciated.</p>
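<p>Both bullet points come down to the same root cause: a string that names a class (or a list) is not the object itself. A minimal reproduction, using a hypothetical <code>Details</code> stand-in for the real item class:</p>

```python
# Minimal reproduction of the error (hypothetical stand-in class):
class Details:
    pass

item = Details()

# isinstance() rejects a string as its second argument outright
try:
    isinstance(item, 'Details')
except TypeError as exc:
    print(type(exc).__name__)  # → TypeError

# Resolving the name to the real class object fixes the check; names
# imported into the current module are reachable via globals(), and
# getattr(some_module, name) works for a module like dealinfo.items.
cls = globals()['Details']
print(isinstance(item, cls))  # → True
```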
<p>Here is a version of my code:</p>
<pre><code>from scrapy import signals
from dealinfo.items import List, Details, Rd, Status, CompletedDetails, Syndicate
from dealinfo.items import CompanyDetails, CompanyContactInfo, CompanyTeam, Compadvisors, CompanyPInvestors
from dealinfo.items import CompanyExecSum, CurrentRd, PastRd, AnnFin#, CompanyDocs
import pandas as pd
from sqlalchemy import create_engine


class SQLPipeline(object):
    engine = create_engine('mssql+pyodbc://username:password@database')

    #### matrix of table names by type ######
    prep = {
        'item_names': ['List', 'Details', 'Rd', 'Fin', 'Status', 'CompletedDetails', 'Syndicate', 'CompanyDetails', 'CompanyContactInfo', 'CompanyTeam', 'Compadvisors', 'CompanyPInvestors', 'CompanyExecSum', 'CompanyCurrentRd', 'CompanyPastRd', 'CompannFin', 'CompanyDocs'],
        'temp_table': ['items_dl', 'items_dd', 'items_dr', 'items_df', 'items_nds', 'items_ncd', 'items_ns', 'items_cd', 'items_cci', 'items_ct', 'items_ca', 'items_cpi', 'items_es', 'items_cr', 'items_pr', 'items_af', 'items_cdoc'],
        'data_frame': ['dl', 'dd', 'dr', 'df', 'nds', 'ncd', 'ns', 'cd', 'cci', 'ct', 'ca', 'cpi', 'es', 'cr', 'pr', 'af', 'cdoc'],
        'sql_table': ['list', 'details', 'rd', 'fin', 'status', 'completed_details', 'syndicate', 'company_details', 'company_contact_info', 'company_team', 'company_advisors', 'company_pinvestors', 'company_execsum', 'company_current_rd', 'company_past_rd', 'company_ann_fin', 'company_docs']
    }

    #### assigning temporary lists for capturing parsed items ######
    for x in prep['temp_table']:
        globals()[x] = []

    #### create sql schema to receive final output ######
    def __init__(self):
        try:  ## Check schema exists, create if not
            SQLPipeline.engine.execute("create schema dealinfo")
        except:
            pass

    #### clean each scrapy item and add contents to temporary list (ahead of conversion to dataframe) ######
    def process_item(self, item, spider):
        for i in range(len(SQLPipeline.prep['item_names'])):
            if isinstance(item, SQLPipeline.prep['item_names'][i]):  ####<<---error - not able to call item using string
                for key, value in item.items():
                    if isinstance(item[key], list):
                        item[key] = [x.strip() for x in item[key] if x]
                        item[key] = [x for x in item[key] if x]
                        item[key] = ', '.join(item[key])
                SQLPipeline.prep['temp_table'][i].append(item.copy())  ####<<---error - not able to call item using string

    #### convert parsed items to pandas dataframe before sending to sql as tables ######
    def close_spider(self, spider):
        for i in SQLPipeline.prep['item_names']:
            try:
                SQLPipeline.prep['data_frame'][i] = pd.DataFrame(SQLPipeline.prep['temp_table'][i])
                print(SQLPipeline.prep['data_frame'][i])
                SQLPipeline.prep['data_frame'][i].to_sql(SQLPipeline.prep['sql_table'][i], SQLPipeline.engine, schema='dealinfo', if_exists='replace', index=False)
            except Exception as ex:
                print(ex)
                pass
</code></pre>
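<p>One way to sidestep the string lookup entirely is to store the class objects and buffer lists themselves in the mapping, rather than their names. A sketch of that approach, with hypothetical stand-in classes in place of the real <code>dealinfo.items</code> definitions:</p>

```python
# Sketch: map each item class object directly to the list that buffers
# its parsed rows. The classes below are hypothetical stand-ins.
class List:
    pass

class Details:
    pass

buffers = {List: [], Details: []}

def process_item(item):
    # Type dispatch works here because the dict keys are real class
    # objects, not strings, so isinstance() accepts them.
    for cls, rows in buffers.items():
        if isinstance(item, cls):
            rows.append(item)
            break
    return item

process_item(List())
process_item(Details())
process_item(Details())
print(len(buffers[List]), len(buffers[Details]))  # → 1 2
```

<p>With this layout the other entries of <code>prep</code> (dataframe names, SQL table names) can live in the same per-class record, so <code>close_spider</code> can loop over one mapping instead of four parallel lists indexed by position.</p>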