{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Importing External Data with pandas\n",
    "\n",
    "pandas makes it quick and convenient to read external files in many formats into a DataFrame, and to write a DataFrame back out to files in various formats.\n",
    "The table below, taken from the official pandas documentation, lists the reader and writer functions for each supported file type.\n",
    "\n",
    "| Format Type | Data Description      | Reader         | Writer          |\n",
    "| :-----------: | :---------------------: | :--------------: | :---------------: |\n",
    "| text        | CSV                   | read_csv       | to_csv          |\n",
    "| text        | Fixed-Width Text File | read_fwf       |                 |\n",
    "| text        | JSON                  | read_json      | to_json         |\n",
    "| text        | HTML                  | read_html      | to_html         |\n",
    "| text        | LaTeX                 |                | Styler.to_latex |\n",
    "| text        | XML                   | read_xml       | to_xml          |\n",
    "| text        | Local clipboard       | read_clipboard | to_clipboard    |\n",
    "| binary      | MS Excel              | read_excel     | to_excel        |\n",
    "| binary      | OpenDocument          | read_excel     |                 |\n",
    "| binary      | HDF5 Format           | read_hdf       | to_hdf          |\n",
    "| binary      | Feather Format        | read_feather   | to_feather      |\n",
    "| binary      | Parquet Format        | read_parquet   | to_parquet      |\n",
    "| binary      | ORC Format            | read_orc       |                 |\n",
    "| binary      | Stata                 | read_stata     | to_stata        |\n",
    "| binary      | SAS                   | read_sas       |                 |\n",
    "| binary      | SPSS                  | read_spss      |                 |\n",
    "| binary      | Python Pickle Format  | read_pickle    | to_pickle       |\n",
    "| SQL         | SQL                   | read_sql       | to_sql          |\n",
    "| SQL         | Google BigQuery       | read_gbq       | to_gbq          |\n",
    "\n",
    "This section covers only a few of the most commonly used formats; for the others, consult the [pandas IO documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html).\n",
    "\n",
    "## Reading txt and csv files\n",
    "```python\n",
    "pandas.read_csv(filepath_or_buffer, sep=NoDefault.no_default, delimiter=None, header='infer', names=NoDefault.no_default, index_col=None, usecols=None, squeeze=None, prefix=NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=None, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='\"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options=None)\n",
    "```\n",
    "This function has a large number of parameters; the most commonly used ones are described below. See the documentation for the rest.\n",
    "* filepath_or_buffer: path to the file to read, or a URL (http, ftp, s3, etc.)\n",
    "* sep: field separator; defaults to a comma.\n",
    "* delimiter: alternative name for the separator (if given, sep is ignored).\n",
    "* delim_whitespace: boolean; whether to use whitespace as the separator. If True, the delimiter parameter is ignored.\n",
    "* header: row number to use as the column names; by default the first row is used. If the file has no header row, set header=None.\n",
    "* names: list of column names to use for the result. If the file has no header row, also pass header=None.\n",
    "* index_col: column to use as the row index.\n",
    "* usecols: read only the specified subset of columns.\n",
    "* dtype: data type to apply to each column.\n",
    "* skiprows: number of lines to skip at the start of the file, or a list of line numbers to skip (0-indexed).\n",
    "* skipfooter: number of lines to skip at the end of the file.\n",
    "* nrows: number of rows to read, counted from the top of the file.\n",
    "* skip_blank_lines: if True, skip blank lines; otherwise record them as NaN.\n",
    "\n",
    "This section demonstrates only the simplest usage, based on the csv file shown below; try the other parameters against the data file yourself.\n",
    "<img src=\"images/ch8/11.png\" style=\"zoom:100%;\">"
   ]
  },
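  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of the parameters listed above (assuming the wine point.csv file from this section is available), the following cell reads only three columns and the first five data rows, using the order column as the row index:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# usecols limits the columns read, nrows limits the rows,\n",
    "# and index_col promotes one column to the row index\n",
    "subset = pd.read_csv(\"images/ch8/wine point.csv\",\n",
    "                     usecols=['order', 'country', 'points'],\n",
    "                     index_col='order', nrows=5)\n",
    "print(subset)"
   ]
  },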
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Data downloads:  \n",
    "\n",
    "<a href=\"images/ch8/2020各手机参数对比.xls\" target=\"_blank\">2020各手机参数对比.xls</a>  \n",
    "\n",
    "<a href=\"images/ch8/wine point.csv\" target=\"_blank\">wine point.csv</a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(11, 14)\n",
      "    order   country description  designation  points  price     province  \\\n",
      "0       0     Italy     Aromas        Bianco    87.0    NaN       Sicily   \n",
      "1       1  Portugal       This.     Avidagos    87.0   15.0        Douro   \n",
      "2       2        US        Tart          NaN     NaN   14.0       Oregon   \n",
      "3       3        US   Pineapple      Reserve    87.0    NaN     Michigan   \n",
      "4       4       NaN        Much      Vintner    87.0   65.0       Oregon   \n",
      "5       5     Spain  Blackberry          Ars    87.0   15.0     Northern   \n",
      "6       6     Italy        Here      Belsito    87.0   16.0       Sicily   \n",
      "7       7    France        This          NaN    87.0   24.0       Alsace   \n",
      "8       8   Germany      Savory        Shine    87.0   12.0  Rheinhessen   \n",
      "9       9    France        This  Les Natures    87.0   27.0       Alsace   \n",
      "10     10        US        Soft     Mountain    87.0   19.0   California   \n",
      "\n",
      "      region_1    region_2 taster_name taster_twitter_handle           title  \\\n",
      "0         Etna         NaN       Kerin          @kerinokeefe    Nicosia 2013   \n",
      "1          NaN         NaN       Roger            @vossroger     Quinta 2011   \n",
      "2   Willamette  Willamette        Paul            @paulgwine  Rainstorm 2013   \n",
      "3         Lake         NaN   Alexander                   NaN     Julian 2013   \n",
      "4   Willamette  Willamette        Paul            @paulgwine      Sweet 2012   \n",
      "5      Navarra         NaN     Michael           @wineschach     Tandem 2011   \n",
      "6     Vittoria         NaN       Kerin          @kerinokeefe      Terre 2013   \n",
      "7       Alsace         NaN       Roger            @vossroger   Trimbach 2012   \n",
      "8          NaN         NaN        Anna                   NaN      Heinz 2013   \n",
      "9       Alsace         NaN       Roger            @vossroger       Jean 2012   \n",
      "10        Napa        Napa    Virginie               @vboone   Kirkland 2011   \n",
      "\n",
      "           variety         winery  \n",
      "0            White        Nicosia  \n",
      "1       Portuguese         Quinta  \n",
      "2            Pinot      Rainstorm  \n",
      "3         Riesling     St. Julian  \n",
      "4            Pinot          Sweet  \n",
      "5      Tempranillo         Tandem  \n",
      "6         Frappato          Terre  \n",
      "7   Gewürztraminer       Trimbach  \n",
      "8   Gewürztraminer          Heinz  \n",
      "9            Pinot  Jean-Baptiste  \n",
      "10        Cabernet       Kirkland  \n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "\n",
    "wine_reviews = pd.read_csv(\"images/ch8/wine point.csv\")\n",
    "print(wine_reviews.shape)\n",
    "pd.set_option('display.max_columns', None)   # show all columns\n",
    "pd.set_option('display.max_rows', None)      # show all rows\n",
    "pd.set_option('display.width', None)         # no display width limit\n",
    "print(wine_reviews)                          # wine_reviews is a DataFrame"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Explicitly use the first column of the file as the row index\n",
    "wine_reviews = pd.read_csv(\"images/ch8/wine point.csv\", index_col=0)\n",
    "print(wine_reviews)                          # wine_reviews is a DataFrame"
   ]
  },
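  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The table at the top of this section also lists to_csv as the writer counterpart of read_csv. A minimal round-trip sketch (the output filename here is arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'country': ['Italy', 'Spain'], 'points': [87, 90]})\n",
    "df.to_csv('wine_subset.csv', index=False)    # index=False omits the row index\n",
    "print(pd.read_csv('wine_subset.csv'))        # read it back to verify"
   ]
  },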
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Reading Excel files\n",
    "\n",
    "```python\n",
    "pandas.read_excel(io, sheet_name=0, header=0, names=None, index_col=None, usecols=None, squeeze=None, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, parse_dates=False, date_parser=None, thousands=None, decimal='.', comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, storage_options=None)\n",
    "```\n",
    "\n",
    "Most of its parameters are the same as those of read_csv(). sheet_name selects which sheet to read; by default the first sheet in the workbook is read.\n",
    "Note that reading .xlsx files requires the openpyxl package, while reading .xls files requires xlrd.\n",
    "\n",
    "This section demonstrates only the simplest usage, based on the file \"2020各手机参数对比.xls\"; try the other parameters against the data file yourself."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "phone_infos = pd.read_excel(\"images/ch8/2020各手机参数对比.xls\", skiprows=1)  # skip the title row at the top of the sheet\n",
    "print(phone_infos.shape)\n",
    "pd.set_option('display.max_columns', None)   # show all columns\n",
    "pd.set_option('display.max_rows', None)      # show all rows\n",
    "pd.set_option('display.width', None)         # no display width limit\n",
    "pd.set_option('display.unicode.east_asian_width', True)  # align East Asian characters in the output\n",
    "print(phone_infos)                          # phone_infos is a DataFrame"
   ]
  },
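  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "sheet_name also accepts a sheet name, a position, a list of sheets, or None. With sheet_name=None, read_excel returns a dict mapping sheet names to DataFrames. A sketch, again using the file above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# sheet_name=None reads every sheet; the result is a dict of DataFrames\n",
    "sheets = pd.read_excel(\"images/ch8/2020各手机参数对比.xls\", sheet_name=None, skiprows=1)\n",
    "for name, df in sheets.items():\n",
    "    print(name, df.shape)"
   ]
  },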
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Reading HTML pages\n",
    "```python\n",
    "pandas.read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, attrs=None, parse_dates=False, thousands=',', encoding=None, decimal='.', converters=None, na_values=None, keep_default_na=True, displayed_only=True)\n",
    "```\n",
    "This function is a quick and convenient way to turn HTML tables into DataFrames, with no need to write a scraper to fetch the page. The match parameter selects tables by regular expression; flavor selects the parser (by default lxml is tried first, falling back to bs4 + html5lib).\n",
    "\n",
    "This section demonstrates the basic usage with the [Chinese Super League standings table on ESPN](https://www.espn.com/soccer/team/_/id/21506/wuhan-three-towns)."
   ]
  },
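  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before fetching a live page, the behaviour can be seen offline with an HTML string. This sketch assumes an HTML parser such as lxml is installed; recent pandas versions expect literal HTML to be wrapped in StringIO rather than passed as a plain string:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from io import StringIO\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "html = ('<table><tr><th>team</th><th>points</th></tr>'\n",
    "        '<tr><td>Wuhan Three Towns</td><td>78</td></tr></table>')\n",
    "tables = pd.read_html(StringIO(html))   # one DataFrame per <table> element\n",
    "print(len(tables))\n",
    "print(tables[0])"
   ]
  },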
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "pd.set_option('display.max_columns', None)   # show all columns\n",
    "pd.set_option('display.max_rows', None)      # show all rows\n",
    "pd.set_option('display.width', None)         # no display width limit\n",
    "pd.set_option('display.unicode.east_asian_width', True)  # align East Asian characters in the output\n",
    "# read_html converts each <table> on the page into a DataFrame and returns a list of DataFrames\n",
    "CSL_2022 = pd.read_html('https://www.espn.com/soccer/team/_/id/21506/wuhan-three-towns')  # this page has only one table\n",
    "print(type(CSL_2022), type(CSL_2022[0]))\n",
    "print(CSL_2022[0])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}