Converting HTML tables to CSV files in Python
This article demonstrates, with a complete working example, how to convert HTML tables into CSV files in Python. It is shared for your reference; the details follow.
Usage: python html2csv.py *.html
The script is built on the HTMLParser module (Python 2's standard-library HTML parser).
#!/usr/bin/python
# -*- coding: iso-8859-1 -*-
# Hello, this program is written in Python - http://python.org
programname = 'html2csv - version 2002-09-20 - http://sebsauvage.net'

import sys, getopt, os.path, glob, HTMLParser, re

try:    import psyco ; psyco.jit()  # If present, use psyco to accelerate the program
except: pass

def usage(progname):
    ''' Display program usage. '''
    progname = os.path.split(progname)[1]
    if os.path.splitext(progname)[1] in ['.py','.pyc']: progname = 'python '+progname
    return '''%s
A coarse HTML tables to CSV (Comma-Separated Values) converter.
Syntax    : %s source.html
Arguments : source.html is the HTML file you want to convert to CSV.
            By default, the file will be converted to CSV with the same
            name and the .csv extension (source.html -> source.csv).
            You can use * and ?.
Examples  : %s mypage.html
          : %s *.html
This program is public domain.
Author : Sebastien SAUVAGE <sebsauvage at sebsauvage dot net>
         http://sebsauvage.net
''' % (programname, progname, progname, progname)

class html2csv(HTMLParser.HTMLParser):
    ''' A basic parser which converts HTML tables into CSV.
        Feed HTML with feed(). Get CSV with getCSV(). (See example below.)
        All tables in the HTML will be converted to CSV (in the order they
        occur in the HTML file).
        You can process very large HTML files by feeding this class with chunks
        of HTML while getting chunks of CSV by calling getCSV().
        Should handle badly formatted HTML (missing <tr>, </tr>, </td>,
        extraneous </td>, </tr>...).
        This parser uses HTMLParser from the HTMLParser module,
        not HTMLParser from the htmllib module.
        Example: parser = html2csv()
                 parser.feed( open('mypage.html','rb').read() )
                 open('mytables.csv','w+b').write( parser.getCSV() )
        This class is public domain.
        Author: Sébastien SAUVAGE <sebsauvage at sebsauvage dot net>
                http://sebsauvage.net
        Versions:
           2002-09-19 : - First version.
           2002-09-20 : - Now uses HTMLParser.HTMLParser instead of htmllib.HTMLParser.
                        - Now parses the command line.
        To do:
            - handle <PRE> tags
            - convert HTML entities (&name; and &#ref;) to ASCII.
    '''
    def __init__(self):
        HTMLParser.HTMLParser.__init__(self)
        self.CSV = ''       # The CSV data
        self.CSVrow = ''    # The current CSV row being constructed from HTML
        self.inTD = 0       # Used to track if we are inside or outside a <TD>...</TD> tag.
        self.inTR = 0       # Used to track if we are inside or outside a <TR>...</TR> tag.
        self.re_multiplespaces = re.compile(r'\s+')  # regular expression used to collapse excess whitespace
        self.rowCount = 0   # CSV output line counter.

    def handle_starttag(self, tag, attrs):
        if tag == 'tr': self.start_tr()
        elif tag == 'td': self.start_td()

    def handle_endtag(self, tag):
        if tag == 'tr': self.end_tr()
        elif tag == 'td': self.end_td()

    def start_tr(self):
        if self.inTR: self.end_tr()  # <TR> implies </TR>
        self.inTR = 1

    def end_tr(self):
        if self.inTD: self.end_td()  # </TR> implies </TD>
        self.inTR = 0
        if len(self.CSVrow) > 0:
            self.CSV += self.CSVrow[:-1]
            self.CSVrow = ''
        self.CSV += '\n'
        self.rowCount += 1

    def start_td(self):
        if not self.inTR: self.start_tr()  # <TD> implies <TR>
        self.CSVrow += '"'
        self.inTD = 1

    def end_td(self):
        if self.inTD:
            self.CSVrow += '",'
            self.inTD = 0

    def handle_data(self, data):
        if self.inTD:
            self.CSVrow += self.re_multiplespaces.sub(' ', data.replace('\t',' ').replace('\n','').replace('\r','').replace('"','""'))

    def getCSV(self, purge=False):
        ''' Get output CSV.
            If purge is true, getCSV() will return all remaining data,
            even if <td> or <tr> are not properly closed.
            (You would typically call getCSV with purge=True when you have no
            more HTML to feed and you suspect dirty HTML (unclosed tags).) '''
        if purge and self.inTR: self.end_tr()  # This will also end_td and append the last CSV row to the output CSV.
        dataout = self.CSV[:]
        self.CSV = ''
        return dataout

if __name__ == "__main__":
    try:  # Put getopt in place for future usage.
        opts, args = getopt.getopt(sys.argv[1:], '')
    except getopt.GetoptError:
        print usage(sys.argv[0])  # print help information and exit
        sys.exit(2)
    if len(args) == 0:
        print usage(sys.argv[0])  # print help information and exit
        sys.exit(2)
    print programname
    html_files = glob.glob(args[0])
    for htmlfilename in html_files:
        outputfilename = os.path.splitext(htmlfilename)[0] + '.csv'
        parser = html2csv()
        print 'Reading %s, writing %s...' % (htmlfilename, outputfilename)
        try:
            htmlfile = open(htmlfilename, 'rb')
            csvfile = open(outputfilename, 'w+b')
            data = htmlfile.read(8192)
            while data:
                parser.feed(data)
                csvfile.write(parser.getCSV())
                sys.stdout.write('%d CSV rows written.\r' % parser.rowCount)
                data = htmlfile.read(8192)
            csvfile.write(parser.getCSV(True))
            csvfile.close()
            htmlfile.close()
        except:
            print 'Error converting %s' % htmlfilename
            try: htmlfile.close()
            except: pass
            try: csvfile.close()
            except: pass
    print 'All done.'
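The script above targets Python 2: it relies on print statements, the top-level HTMLParser module (renamed to html.parser in Python 3) and the long-discontinued psyco JIT. As a rough guide to how the same technique carries over, here is a minimal Python 3 sketch. It is not part of the original article: the class name Html2Csv, the row/cell bookkeeping and the demo table are illustrative assumptions, and CSV quoting is delegated to the standard csv module rather than built by hand as in the original.

# Minimal Python 3 sketch of the same approach (assumption: Python 3.x,
# standard library only; names like Html2Csv are illustrative).
import csv
import io
import re
from html.parser import HTMLParser

class Html2Csv(HTMLParser):
    """Collect the text of <td>/<th> cells per <tr> and emit CSV rows."""
    def __init__(self):
        super().__init__()
        self.rows = []            # completed table rows
        self.current_row = None   # cells of the row being built, or None outside <tr>
        self.current_cell = None  # text fragments of the cell being built, or None outside <td>
        self.spaces = re.compile(r'\s+')

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self.current_row = []
        elif tag in ('td', 'th'):
            self.current_cell = []

    def handle_endtag(self, tag):
        if tag in ('td', 'th') and self.current_cell is not None:
            text = self.spaces.sub(' ', ''.join(self.current_cell)).strip()
            if self.current_row is None:   # tolerate a cell outside <tr>, like the original
                self.current_row = []
            self.current_row.append(text)
            self.current_cell = None
        elif tag == 'tr' and self.current_row is not None:
            self.rows.append(self.current_row)
            self.current_row = None

    def handle_data(self, data):
        if self.current_cell is not None:
            self.current_cell.append(data)

    def get_csv(self):
        buf = io.StringIO()
        csv.writer(buf).writerows(self.rows)   # the csv module handles quoting/escaping
        return buf.getvalue()

if __name__ == '__main__':
    parser = Html2Csv()
    parser.feed('<table><tr><td>a</td><td>b "c"</td></tr></table>')
    print(parser.get_csv())   # prints: a,"b ""c"""

Unlike the original, this sketch keeps all rows in memory and writes them out at the end; for very large pages you would flush finished rows incrementally, which is exactly what the chunked feed()/getCSV() loop in the original script is for.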
I hope this article proves helpful for your Python programming.