How to convert HTML tables to CSV files in Python
This article presents a Python method for converting HTML tables into CSV files, shared here for reference. The details are as follows:
Usage: python html2csv.py *.html
The script is built on Python 2's HTMLParser module.
#!/usr/bin/python
# -*- coding: iso-8859-1 -*-
# Hello, this program is written in Python - http://python.org
programname = 'html2csv - version 2002-09-20 - http://sebsauvage.net'
import sys, getopt, os.path, glob, HTMLParser, re
try: import psyco ; psyco.jit() # If present, use psyco to accelerate the program
except: pass
def usage(progname):
    ''' Display program usage. '''
    progname = os.path.split(progname)[1]
    if os.path.splitext(progname)[1] in ['.py','.pyc']: progname = 'python '+progname
    return '''%s

A coarse HTML tables to CSV (Comma-Separated Values) converter.

Syntax    : %s source.html

Arguments : source.html is the HTML file you want to convert to CSV.
            By default, the file will be converted to csv with the same
            name and the csv extension (source.html -> source.csv)
            You can use * and ?.

Examples  : %s mypage.html
          : %s *.html

This program is public domain.
Author : Sebastien SAUVAGE <sebsauvage at sebsauvage dot net>
         http://sebsauvage.net
''' % (programname, progname, progname, progname)
class html2csv(HTMLParser.HTMLParser):
    ''' A basic parser which converts HTML tables into CSV.
        Feed HTML with feed(). Get CSV with getCSV(). (See example below.)
        All tables in the HTML will be converted to CSV (in the order they occur
        in the HTML file).
        You can process very large HTML files by feeding this class with chunks
        of HTML while getting chunks of CSV by calling getCSV().
        Should handle badly formatted HTML (missing <tr>, </tr>, </td>,
        extraneous </td>, </tr>...).
        This parser uses HTMLParser from the HTMLParser module,
        not HTMLParser from the htmllib module.
        Example: parser = html2csv()
                 parser.feed( open('mypage.html','rb').read() )
                 open('mytables.csv','w+b').write( parser.getCSV() )
        This class is public domain.
        Author: Sébastien SAUVAGE <sebsauvage at sebsauvage dot net>
                http://sebsauvage.net
        Versions:
           2002-09-19 : - First version
           2002-09-20 : - now uses HTMLParser.HTMLParser instead of htmllib.HTMLParser.
                        - now parses the command line.
        To do:
           - handle <PRE> tags
           - convert HTML entities (&name; and &#ref;) to ASCII.
    '''
    def __init__(self):
        HTMLParser.HTMLParser.__init__(self)
        self.CSV = ''       # The CSV data
        self.CSVrow = ''    # The current CSV row being constructed from HTML
        self.inTD = 0       # Used to track if we are inside or outside a <TD>...</TD> tag.
        self.inTR = 0       # Used to track if we are inside or outside a <TR>...</TR> tag.
        self.re_multiplespaces = re.compile(r'\s+')  # regular expression used to collapse runs of whitespace
        self.rowCount = 0   # CSV output line counter.

    def handle_starttag(self, tag, attrs):
        if tag == 'tr': self.start_tr()
        elif tag == 'td': self.start_td()

    def handle_endtag(self, tag):
        if tag == 'tr': self.end_tr()
        elif tag == 'td': self.end_td()

    def start_tr(self):
        if self.inTR: self.end_tr()  # <TR> implies </TR>
        self.inTR = 1

    def end_tr(self):
        if self.inTD: self.end_td()  # </TR> implies </TD>
        self.inTR = 0
        if len(self.CSVrow) > 0:
            self.CSV += self.CSVrow[:-1]  # drop the trailing comma
            self.CSVrow = ''
            self.CSV += '\n'
            self.rowCount += 1

    def start_td(self):
        if not self.inTR: self.start_tr()  # <TD> implies <TR>
        self.CSVrow += '"'
        self.inTD = 1

    def end_td(self):
        if self.inTD:
            self.CSVrow += '",'
            self.inTD = 0

    def handle_data(self, data):
        if self.inTD:
            self.CSVrow += self.re_multiplespaces.sub(' ', data.replace('\t',' ').replace('\n','').replace('\r','').replace('"','""'))

    def getCSV(self, purge=False):
        ''' Get output CSV.
            If purge is true, getCSV() will return all remaining data,
            even if <td> or <tr> are not properly closed.
            (You would typically call getCSV with purge=True when you do not have
            any more HTML to feed and you suspect dirty HTML (unclosed tags).) '''
        if purge and self.inTR: self.end_tr()  # This will also end_td and append the last CSV row to the output.
        dataout = self.CSV[:]
        self.CSV = ''
        return dataout
if __name__ == "__main__":
    try:  # Put getopt in place for future usage.
        opts, args = getopt.getopt(sys.argv[1:], '')
    except getopt.GetoptError:
        print usage(sys.argv[0])  # print help information and exit
        sys.exit(2)
    if len(args) == 0:
        print usage(sys.argv[0])  # print help information and exit
        sys.exit(2)
    print programname
    html_files = glob.glob(args[0])
    for htmlfilename in html_files:
        outputfilename = os.path.splitext(htmlfilename)[0] + '.csv'
        parser = html2csv()
        print 'Reading %s, writing %s...' % (htmlfilename, outputfilename)
        try:
            htmlfile = open(htmlfilename, 'rb')
            csvfile = open(outputfilename, 'w+b')
            data = htmlfile.read(8192)
            while data:
                parser.feed(data)
                csvfile.write(parser.getCSV())
                sys.stdout.write('%d CSV rows written.\r' % parser.rowCount)
                data = htmlfile.read(8192)
            csvfile.write(parser.getCSV(True))
            csvfile.close()
            htmlfile.close()
        except:
            print 'Error converting %s' % htmlfilename
            try: htmlfile.close()
            except: pass
            try: csvfile.close()
            except: pass
    print 'All done.'
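The listing above is Python 2 only: the HTMLParser module was renamed html.parser in Python 3, and the print statements no longer parse. As a rough illustration of how the same idea looks today, here is a minimal Python 3 sketch (class and method names are my own, not from the original script). It lets the csv module handle quoting instead of hand-doubling quote characters, and html.parser's convert_charrefs decodes HTML entities for free, which was one of the author's remaining to-do items.

```python
import csv
import io
import re
from html.parser import HTMLParser

class Html2Csv(HTMLParser):
    """Collect <td> text from each <tr> and emit the tables as CSV rows."""
    def __init__(self):
        super().__init__(convert_charrefs=True)  # decodes &name; and &#ref;
        self.rows = []          # completed rows
        self.row = []           # cells of the row in progress
        self.cell = None        # text of the cell in progress (None = outside <td>)
        self._spaces = re.compile(r'\s+')

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._end_row()     # <tr> implies </tr>, as in the original
        elif tag == 'td':
            self.cell = ''

    def handle_endtag(self, tag):
        if tag == 'tr':
            self._end_row()
        elif tag == 'td' and self.cell is not None:
            # collapse runs of whitespace, like re_multiplespaces above
            self.row.append(self._spaces.sub(' ', self.cell).strip())
            self.cell = None

    def handle_data(self, data):
        if self.cell is not None:
            self.cell += data

    def _end_row(self):
        if self.cell is not None:    # unclosed <td>
            self.handle_endtag('td')
        if self.row:
            self.rows.append(self.row)
            self.row = []

    def get_csv(self):
        self._end_row()              # flush any unclosed row (the purge case)
        buf = io.StringIO()
        csv.writer(buf).writerows(self.rows)  # csv module handles quoting
        return buf.getvalue()

parser = Html2Csv()
parser.feed('<table><tr><td>a</td><td>b,"c"</td></tr><tr><td>1</td></table>')
print(parser.get_csv())
```

Unlike the original, this sketch keeps rows as lists of cells rather than a pre-quoted string, so csv.writer can apply correct quoting (commas and quotes inside cells, as in the b,"c" cell above) without manual string surgery.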
Hopefully this article is helpful to readers working on Python programming.