Accessing and Managing Financial Data

Note

You are reading Tidy Finance with Python. You can find the equivalent chapter for the sibling Tidy Finance with R here.

In this chapter, we suggest a way to organize your financial data. Everybody who has worked with data is familiar with storing it in formats like CSV, XLS, XLSX, or other delimited-value files. Reading and saving data can become very cumbersome when different data formats are used within and across projects. Moreover, storing data in delimited files often leads to problems with column type consistency. For instance, date-type columns frequently lead to inconsistencies across data formats and programming languages.

This chapter shows how to import different open-source datasets. Specifically, our data comes from the application programming interface (API) of Yahoo Finance, a downloaded standard CSV file, an XLSX file stored in a public Google Drive repository, and other macroeconomic time series. We store all the data in a single database, which serves as the only source of data in subsequent chapters. We conclude the chapter with some tips on managing databases.

First, we load the Python packages that we use throughout this chapter. Later on, we load more packages in the sections where we need them.

import pandas as pd
import numpy as np

Moreover, we initially define the date range for which we fetch and store the financial data, making future data updates tractable. In case you need a different time frame, you can adjust the dates below. Our data starts in 1960 since most asset pricing studies use data from 1962 on.

start_date = "1960-01-01"
end_date = "2023-12-31"

Fama-French Data

We start by downloading some famous Fama-French factors (e.g., Fama and French 1993) and portfolio returns commonly used in empirical asset pricing. Fortunately, the pandas-datareader package provides a simple interface to read data from Kenneth French’s Data Library.

import pandas_datareader as pdr

We can use the pdr.DataReader() function of the package to download monthly Fama-French factors. The set Fama/French 3 Factors contains the return time series of the market (mkt_excess), size (smb), and value (hml) factors alongside the risk-free rates (rf). Note that we have to do some manual work to parse all the columns correctly and scale them appropriately, as the raw Fama-French data comes in a unique data format. For precise descriptions of the variables, we suggest consulting Prof. Kenneth French’s finance data library directly. If you are on the website, check the raw data files to appreciate the time you can save thanks to pandas_datareader.

# The famafrench source returns a dictionary of tables; [0] is the monthly data
factors_ff3_monthly_raw = pdr.DataReader(
  name="F-F_Research_Data_Factors",
  data_source="famafrench", 
  start=start_date, 
  end=end_date)[0]

factors_ff3_monthly = (factors_ff3_monthly_raw
  .divide(100)  # scale from percent to decimal returns
  .reset_index(names="date")  # turn the period index into a column
  .assign(date=lambda x: pd.to_datetime(x["date"].astype(str)))
  .rename(str.lower, axis="columns")
  .rename(columns={"mkt-rf": "mkt_excess"})
)

We also download the set 5 Factors (2x3), which additionally includes the return time series of the profitability (rmw) and investment (cma) factors. We demonstrate how the monthly factors are constructed in Replicating Fama and French Factors.

factors_ff5_monthly_raw = pdr.DataReader(
  name="F-F_Research_Data_5_Factors_2x3",
  data_source="famafrench", 
  start=start_date, 
  end=end_date)[0]

factors_ff5_monthly = (factors_ff5_monthly_raw
  .divide(100)
  .reset_index(names="date")
  .assign(date=lambda x: pd.to_datetime(x["date"].astype(str)))
  .rename(str.lower, axis="columns")
  .rename(columns={"mkt-rf": "mkt_excess"})
)

It is straightforward to download the corresponding daily Fama-French factors with the same function.

factors_ff3_daily_raw = pdr.DataReader(
  name="F-F_Research_Data_Factors_daily",
  data_source="famafrench", 
  start=start_date, 
  end=end_date)[0]

factors_ff3_daily = (factors_ff3_daily_raw
  .divide(100)
  .reset_index(names="date")
  .rename(str.lower, axis="columns")
  .rename(columns={"mkt-rf": "mkt_excess"})
)

In a subsequent chapter, we also use the monthly returns from ten industry portfolios, so let us fetch that data, too.

industries_ff_monthly_raw = pdr.DataReader(
  name="10_Industry_Portfolios",
  data_source="famafrench", 
  start=start_date, 
  end=end_date)[0]

industries_ff_monthly = (industries_ff_monthly_raw
  .divide(100)
  .reset_index(names="date")
  .assign(date=lambda x: pd.to_datetime(x["date"].astype(str)))
  .rename(str.lower, axis="columns")
)

It is worth taking a look at all available portfolio return time series on Kenneth French’s homepage. You should check out the other sets by calling pdr.famafrench.get_available_datasets().
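For instance, the following quick sketch lists the names of all currently available datasets (the exact set and count change as the data library is updated):

datasets = pdr.famafrench.get_available_datasets()
print(f"Number of available datasets: {len(datasets)}")
datasets[:5]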

q-Factors

In recent years, the academic discourse experienced the rise of alternative factor models, e.g., in the form of the Hou, Xue, and Zhang (2015) q-factor model. We refer to the extended background information provided by the original authors for more details. The q-factors can be downloaded directly from the authors’ homepage via pd.read_csv().

We also need to adjust this data. First, we discard information we will not use in the remainder of the book. Then, we remove the “R_” prefix from the column names and write all column names in lowercase. We then query the data to select observations between the start and end dates. Finally, we use the double asterisk (**) notation in the assign() function to apply the same transformation of dividing by 100 to all four factors by iterating through them. You should always try to stick to a consistent style for naming objects, which we try to illustrate here (the emphasis is on try). You can check out style guides available online, e.g., Hadley Wickham’s tidyverse style guide. Note that we temporarily adjust the SSL certificate handling behavior in Python’s ssl module when retrieving the q-factors directly from the web, as demonstrated in Introduction to Tidy Finance. This method should be used with caution, which is why we restore the default settings immediately after successfully downloading the data.

import ssl
ssl._create_default_https_context = ssl._create_unverified_context

factors_q_monthly_link = (
  "https://global-q.org/uploads/1/2/2/6/122679606/"
  "q5_factors_monthly_2023.csv"
)

factors_q_monthly = (pd.read_csv(factors_q_monthly_link)
  .assign(
    date=lambda x: (
      pd.to_datetime(x["year"].astype(str) + "-" +
        x["month"].astype(str) + "-01"))
  )
  .drop(columns=["R_F", "R_MKT", "year"])
  .rename(columns=lambda x: x.replace("R_", "").lower())
  .query(f"date >= '{start_date}' and date <= '{end_date}'")
  .assign(
    # Bind col as a default argument; otherwise Python's late-binding
    # closures would make every lambda divide the same (last) column
    **{col: lambda x, col=col: x[col]/100 
       for col in ["me", "ia", "roe", "eg"]}
  )
)

ssl._create_default_https_context = ssl.create_default_context

Macroeconomic Predictors

Our next data source is a set of macroeconomic variables often used as predictors for the equity premium. Welch and Goyal (2008) comprehensively reexamine the performance of variables suggested by the academic literature to be good predictors of the equity premium. The authors host the data on Amit Goyal’s website. The data is an XLSX file stored in a public Google Drive location, which we access through Google’s CSV export endpoint so that we can read it directly from our Python session. Usually, you need to authenticate to interact with Google Drive directly in Python; since the data is shared via a public link, we can proceed without any authentication.

sheet_id = "1bM7vCWd3WOt95Sf9qjLPZjoiafgF_8EG"
sheet_name = "macro_predictors.xlsx"
macro_predictors_link = (
  f"https://docs.google.com/spreadsheets/d/{sheet_id}" 
  f"/gviz/tq?tqx=out:csv&sheet={sheet_name}"
)

Next, we read in the new data and transform the columns into the variables that we later use:

  1. The dividend price ratio (dp), the difference between the log of dividends and the log of prices, where dividends are 12-month moving sums of dividends paid on the S&P 500 index, and prices are monthly averages of daily closing prices (Campbell and Shiller 1988; Campbell and Yogo 2006).
  2. Dividend yield (dy), the difference between the log of dividends and the log of lagged prices (Ball 1978).
  3. Earnings price ratio (ep), the difference between the log of earnings and the log of prices, where earnings are 12-month moving sums of earnings on the S&P 500 index (Campbell and Shiller 1988).
  4. Dividend payout ratio (de), the difference between the log of dividends and the log of earnings (Lamont 1998).
  5. Stock variance (svar), the sum of squared daily returns on the S&P 500 index (Guo 2006).
  6. Book-to-market ratio (bm), the ratio of book value to market value for the Dow Jones Industrial Average (Kothari and Shanken 1997).
  7. Net equity expansion (ntis), the ratio of 12-month moving sums of net issues by NYSE-listed stocks to the total end-of-year market capitalization of NYSE stocks (Campbell, Hilscher, and Szilagyi 2008).
  8. Treasury bills (tbl), the 3-Month Treasury Bill: Secondary Market Rate from the economic research database of the Federal Reserve Bank of St. Louis (Campbell 1987).
  9. Long-term yield (lty), the long-term government bond yield from Ibbotson’s Stocks, Bonds, Bills, and Inflation Yearbook (Welch and Goyal 2008).
  10. Long-term rate of returns (ltr), the long-term government bond returns from Ibbotson’s Stocks, Bonds, Bills, and Inflation Yearbook (Welch and Goyal 2008).
  11. Term spread (tms), the difference between the long-term yield on government bonds and the Treasury bill (Campbell 1987).
  12. Default yield spread (dfy), the difference between BAA and AAA-rated corporate bond yields (Fama and French 1989).
  13. Inflation (infl), the Consumer Price Index (All Urban Consumers) from the Bureau of Labor Statistics (Campbell and Vuolteenaho 2004).

For variable definitions and the required data transformations, you can consult the material on Amit Goyal’s website.

ssl._create_default_https_context = ssl._create_unverified_context

macro_predictors = (
  pd.read_csv(macro_predictors_link, thousands=",")
  .assign(
    date=lambda x: pd.to_datetime(x["yyyymm"], format="%Y%m"),
    dp=lambda x: np.log(x["D12"])-np.log(x["Index"]),
    dy=lambda x: np.log(x["D12"])-np.log(x["Index"].shift(1)),
    ep=lambda x: np.log(x["E12"])-np.log(x["Index"]),
    de=lambda x: np.log(x["D12"])-np.log(x["E12"]),
    tms=lambda x: x["lty"]-x["tbl"],
    dfy=lambda x: x["BAA"]-x["AAA"]
  )
  .rename(columns={"b/m": "bm"})
  .get(["date", "dp", "dy", "ep", "de", "svar", "bm", 
        "ntis", "tbl", "lty", "ltr", "tms", "dfy", "infl"])
  .query("date >= @start_date and date <= @end_date")
  .dropna()
)

ssl._create_default_https_context = ssl.create_default_context

Other Macroeconomic Data

The Federal Reserve Bank of St. Louis provides the Federal Reserve Economic Data (FRED), an extensive database for macroeconomic data. In total, there are 817,000 US and international time series from 108 different sources. As an illustration, we use the already familiar pandas-datareader package to fetch consumer price index (CPI) data, which can be found under the CPIAUCNS key.

cpi_monthly = (pdr.DataReader(
    name="CPIAUCNS", 
    data_source="fred", 
    start=start_date, 
    end=end_date
  )
  .reset_index(names="date")
  .rename(columns={"CPIAUCNS": "cpi"})
  .assign(cpi=lambda x: x["cpi"]/x["cpi"].iloc[-1])
)

Note that we use assign() in the last line to set the current (latest) price level as the reference inflation level. To download other time series, we just have to look them up on the FRED website and extract the corresponding key from the address. For instance, the producer price index for gold ores can be found under the PCU2122212122210 key.
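Following the same pattern as the CPI download above, a sketch of such a query might look as follows (ppi_gold is a name we introduce purely for illustration):

# Fetch the producer price index for gold ores from FRED
ppi_gold = (pdr.DataReader(
    name="PCU2122212122210", 
    data_source="fred", 
    start=start_date, 
    end=end_date
  )
  .reset_index(names="date")
  .rename(columns={"PCU2122212122210": "ppi_gold"})
)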

Setting Up a Database

Now that we have downloaded some (freely available) data from the web into the memory of our Python session, let us set up a database to store that information for future use. We will use the data stored in this database throughout the following chapters, but you could alternatively implement a different strategy and replace the respective code.

There are many ways to set up and organize a database, depending on the use case. For our purpose, the most efficient way is to use SQLite, a C-language library that implements a small, fast, self-contained, high-reliability, full-featured SQL database engine. Note that SQL (Structured Query Language) is a standard language for accessing and manipulating databases.

import sqlite3

An SQLite database is easily created; the code below is really all there is, and you do not need any external software. We will use the file tidy_finance_python.sqlite, located in the data subfolder, to retrieve data for all subsequent chapters. The initial part of the code ensures that the directory is created if it does not already exist.

import os

if not os.path.exists("data"):
  os.makedirs("data")
    
tidy_finance = sqlite3.connect(database="data/tidy_finance_python.sqlite")

Next, we create a database table with the monthly Fama-French factor data. We do so with the pandas function to_sql(), which copies the data to our SQLite database.

(factors_ff3_monthly
  .to_sql(name="factors_ff3_monthly", 
          con=tidy_finance, 
          if_exists="replace",
          index=False)
)

Now, if we want to have the whole table in memory, we need to call pd.read_sql_query() with the corresponding query. The parse_dates argument converts the date column back to a datetime column, since SQLite itself has no native date type. You will see that we regularly load data into memory in the next chapters.

pd.read_sql_query(
  sql="SELECT date, rf FROM factors_ff3_monthly",
  con=tidy_finance,
  parse_dates={"date"}
)
     date        rf
0    1960-01-01  0.0033
1    1960-02-01  0.0029
2    1960-03-01  0.0035
3    1960-04-01  0.0019
4    1960-05-01  0.0027
...         ...     ...
763  2023-08-01  0.0045
764  2023-09-01  0.0043
765  2023-10-01  0.0047
766  2023-11-01  0.0044
767  2023-12-01  0.0043

768 rows × 2 columns
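Because SQLite stores the datetime column as plain text, omitting parse_dates would return date as a column of strings. A quick way to confirm this (a sketch for illustration, not part of the workflow):

pd.read_sql_query(
  sql="SELECT date, rf FROM factors_ff3_monthly",
  con=tidy_finance
).dtypes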

The last couple of code chunks are really all there is to organizing a simple database! You can also share the SQLite database across devices and programming languages.

Before we move on to the next data source, let us also store the other six tables in our new SQLite database.

data_dict = {
  "factors_ff5_monthly": factors_ff5_monthly,
  "factors_ff3_daily": factors_ff3_daily,
  "industries_ff_monthly": industries_ff_monthly, 
  "factors_q_monthly": factors_q_monthly,
  "macro_predictors": macro_predictors,
  "cpi_monthly": cpi_monthly
}

for key, value in data_dict.items():
  value.to_sql(name=key,
               con=tidy_finance, 
               if_exists="replace",
               index=False)
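To verify that all tables arrived in the database, you can list them via SQLite’s built-in sqlite_master catalog (a quick sanity check we add here for illustration):

pd.read_sql_query(
  sql="SELECT name FROM sqlite_master WHERE type = 'table'",
  con=tidy_finance
)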

From now on, all you need to do to access data that is stored in the database is to follow two steps: (i) establish the connection to the SQLite database and (ii) execute the query to fetch the data. For your convenience, the following steps show all you need in a compact fashion.

import pandas as pd
import sqlite3

tidy_finance = sqlite3.connect(database="data/tidy_finance_python.sqlite")

factors_q_monthly = pd.read_sql_query(
  sql="SELECT * FROM factors_q_monthly",
  con=tidy_finance,
  parse_dates={"date"}
)

Managing SQLite Databases

Finally, at the end of our data chapter, we revisit the SQLite database itself. When you drop database objects, such as tables, or delete data from tables, the database file size remains unchanged because SQLite merely marks the deleted objects as free and reserves their space for future use. As a result, the database file never shrinks on its own and only grows over time.

To optimize the database file, you can run the VACUUM command in the database, which rebuilds the database and frees up unused space. You can execute the command in the database using the execute() function.

tidy_finance.execute("VACUUM")

The VACUUM command actually performs a couple of additional cleaning steps, which you can read about in this tutorial.
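If you want to check how much space a VACUUM call frees up, you can compare the database file size before and after running it, e.g., with os.path.getsize(). The following is a minimal sketch, assuming no other process writes to the file in the meantime:

import os

# Compare file sizes around the VACUUM call
size_before = os.path.getsize("data/tidy_finance_python.sqlite")
tidy_finance.execute("VACUUM")
size_after = os.path.getsize("data/tidy_finance_python.sqlite")
print(f"Freed {size_before - size_after:,} bytes")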

Exercises

  1. Download the monthly Fama-French factors manually from Kenneth French’s data library and read them in via pd.read_csv(). Validate that you get the same data as via the pandas-datareader package.
  2. Download the daily Fama-French 5 factors using the pdr.DataReader() function. After the successful download and conversion to the column format that we used above, compare the rf, mkt_excess, smb, and hml columns of factors_ff3_daily to factors_ff5_daily. Discuss any differences you might find.

References

Ball, Ray. 1978. “Anomalies in relationships between securities’ yields and yield-surrogates.” Journal of Financial Economics 6 (2–3): 103–26. https://doi.org/10.1016/0304-405X(78)90026-0.
Campbell, John Y. 1987. “Stock returns and the term structure.” Journal of Financial Economics 18 (2): 373–99. https://doi.org/10.1016/0304-405X(87)90045-6.
Campbell, John Y., Jens Hilscher, and Jan Szilagyi. 2008. “In search of distress risk.” The Journal of Finance 63 (6): 2899–939. https://doi.org/10.1111/j.1540-6261.2008.01416.x.
Campbell, John Y., and Robert J. Shiller. 1988. “Stock prices, earnings, and expected dividends.” The Journal of Finance 43 (3): 661–76. https://doi.org/10.1111/j.1540-6261.1988.tb04598.x.
Campbell, John Y., and Tuomo Vuolteenaho. 2004. “Inflation illusion and stock prices.” American Economic Review 94 (2): 19–23. https://www.aeaweb.org/articles?id=10.1257/0002828041301533.
Campbell, John Y., and Motohiro Yogo. 2006. “Efficient tests of stock return predictability.” Journal of Financial Economics 81 (1): 27–60. https://doi.org/10.1016/j.jfineco.2005.05.008.
Fama, Eugene F., and Kenneth R. French. 1989. “Business conditions and expected returns on stocks and bonds.” Journal of Financial Economics 25 (1): 23–49. https://doi.org/10.1016/0304-405X(89)90095-0.
———. 1993. “Common risk factors in the returns on stocks and bonds.” Journal of Financial Economics 33 (1): 3–56. https://doi.org/10.1016/0304-405X(93)90023-5.
Guo, Hui. 2006. “On the out-of-sample predictability of stock market returns.” The Journal of Business 79 (2): 645–70. https://doi.org/10.1086/499134.
Hou, Kewei, Chen Xue, and Lu Zhang. 2015. “Digesting anomalies: An investment approach.” Review of Financial Studies 28 (3): 650–705. https://doi.org/10.1093/rfs/hhu068.
Kothari, S. P., and Jay A. Shanken. 1997. “Book-to-market, dividend yield, and expected market returns: A time-series analysis.” Journal of Financial Economics 44 (2): 169–203. https://doi.org/10.1016/S0304-405X(97)00002-0.
Lamont, Owen. 1998. “Earnings and expected returns.” The Journal of Finance 53 (5): 1563–87. https://doi.org/10.1111/0022-1082.00065.
Welch, Ivo, and Amit Goyal. 2008. “A comprehensive look at the empirical performance of equity premium prediction.” Review of Financial Studies 21 (4): 1455–1508. https://doi.org/10.1093/rfs/hhm014.