WRDS Dummy Data

In this appendix chapter, we relax the constraint that readers need access to WRDS to run the code that we provide. We show how to create a dummy database that contains the WRDS tables and corresponding columns such that all code chunks in this book can be executed with this dummy database. We do not create dummy data for tables of macroeconomic variables because they can be freely downloaded from the original sources; check out Accessing and Managing Financial Data.

We deliberately use the dummy label because the data is not meaningful and does not allow readers to replicate the results of the book. For legal reasons, the data does not contain any samples of the original data. We merely generate random numbers for all columns of the tables that we use throughout the book.

To generate the dummy database, we use the following packages:

import pandas as pd
import numpy as np
import sqlite3
import string

Let us initialize a SQLite database (tidy_finance_python.sqlite) or connect to your existing one. Be careful: if you have already downloaded the data from WRDS, the code in this chapter will overwrite your data!
tidy_finance = sqlite3.connect(database="data/tidy_finance_python.sqlite")
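If you want to guard against accidentally replacing previously downloaded WRDS data, you can check for the database file before running the rest of this chapter. This is a minimal sketch and not part of the book's code; the path simply mirrors the connection string above.

import os

database_path = "data/tidy_finance_python.sqlite"

# Warn before the dummy tables replace an existing database
if os.path.exists(database_path):
  print(f"Note: {database_path} already exists; its tables will be replaced.")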
Since we draw random numbers for most of the columns, we also define a seed to ensure that the generated numbers are replicable. We also initialize vectors of dates of different frequencies over twenty years that we then use to create yearly, monthly, and daily data, respectively.
np.random.seed(1234)

start_date = pd.Timestamp("2003-01-01")
end_date = pd.Timestamp("2022-12-31")

dummy_years = np.arange(start_date.year, end_date.year+1, 1)
dummy_months = pd.date_range(start_date, end_date, freq="MS")
dummy_days = pd.date_range(start_date, end_date, freq="D")
Create Stock Dummy Data
Let us start with the core data used throughout the book: stock and firm characteristics. We first generate a table with a cross-section of stock identifiers with unique permno and gvkey values, as well as associated exchcd, exchange, industry, and siccd values. The generated data is based on the characteristics of stocks in the crsp_monthly table of the original database, ensuring that the generated stocks roughly reflect the distribution of industries and exchanges in the original data, but the identifiers and corresponding exchanges or industries do not reflect actual firms. Similarly, the permno-gvkey combinations are purely nonsensical and should not be used together with actual CRSP or Compustat data.
number_of_stocks = 100

industries = pd.DataFrame({
  "industry": ["Agriculture", "Construction", "Finance",
               "Manufacturing", "Mining", "Public", "Retail",
               "Services", "Transportation", "Utilities", "Wholesale"],
  "n": [81, 287, 4682, 8584, 1287, 1974, 1571, 4277, 1249, 457, 904],
  "prob": [0.00319, 0.0113, 0.185, 0.339, 0.0508, 0.0779,
           0.0620, 0.169, 0.0493, 0.0180, 0.03451]
})

exchanges = pd.DataFrame({
  "exchange": ["AMEX", "NASDAQ", "NYSE"],
  "n": [2893, 17236, 5553],
  "prob": [0.113, 0.671, 0.216]
})

stock_identifiers_list = []
for x in range(1, number_of_stocks+1):
  exchange = np.random.choice(exchanges["exchange"], p=exchanges["prob"])
  industry = np.random.choice(industries["industry"], p=industries["prob"])

  exchcd_mapping = {
    "NYSE": np.random.choice([1, 31]),
    "AMEX": np.random.choice([2, 32]),
    "NASDAQ": np.random.choice([3, 33])
  }

  siccd_mapping = {
    "Agriculture": np.random.randint(1, 1000),
    "Mining": np.random.randint(1000, 1500),
    "Construction": np.random.randint(1500, 1800),
    "Manufacturing": np.random.randint(1800, 4000),
    "Transportation": np.random.randint(4000, 4900),
    "Utilities": np.random.randint(4900, 5000),
    "Wholesale": np.random.randint(5000, 5200),
    "Retail": np.random.randint(5200, 6000),
    "Finance": np.random.randint(6000, 6800),
    "Services": np.random.randint(7000, 9000),
    "Public": np.random.randint(9000, 10000)
  }

  stock_identifiers_list.append({
    "permno": x,
    "gvkey": str(x+10000),
    "exchange": exchange,
    "industry": industry,
    "exchcd": exchcd_mapping[exchange],
    "siccd": siccd_mapping[industry]
  })

stock_identifiers = pd.DataFrame(stock_identifiers_list)
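To see whether the simulated cross-section roughly matches the intended distribution, you can tabulate the drawn exchanges and industries. This check is only an illustration and not part of the book's code:

# Compare the simulated shares to the probabilities defined above
print(stock_identifiers["exchange"].value_counts(normalize=True))
print(stock_identifiers["industry"].value_counts(normalize=True))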
Next, we construct three panels of stock data with varying frequencies: yearly, monthly, and daily. We begin by creating the stock_panel_yearly panel. To achieve this, we combine the stock_identifiers table with a new table containing the variable year from dummy_years. Using np.tile() and np.repeat(), we create all possible combinations of the two tables. We keep only the gvkey and year columns for our final yearly panel.

Next, we construct the stock_panel_monthly panel. Similar to the yearly panel, we combine stock_identifiers with a new table that has the date variable from dummy_months. The resulting panel contains the columns permno, gvkey, date, siccd, industry, exchcd, and exchange.

Lastly, we create the stock_panel_daily panel. We combine stock_identifiers with a table containing the date variable from dummy_days and retain only the permno and date columns.
stock_panel_yearly = pd.DataFrame({
  "gvkey": np.tile(stock_identifiers["gvkey"], len(dummy_years)),
  "year": np.repeat(dummy_years, len(stock_identifiers))
})

stock_panel_monthly = pd.DataFrame({
  "permno": np.tile(stock_identifiers["permno"], len(dummy_months)),
  "gvkey": np.tile(stock_identifiers["gvkey"], len(dummy_months)),
  "date": np.repeat(dummy_months, len(stock_identifiers)),
  "siccd": np.tile(stock_identifiers["siccd"], len(dummy_months)),
  "industry": np.tile(stock_identifiers["industry"], len(dummy_months)),
  "exchcd": np.tile(stock_identifiers["exchcd"], len(dummy_months)),
  "exchange": np.tile(stock_identifiers["exchange"], len(dummy_months))
})

stock_panel_daily = pd.DataFrame({
  "permno": np.tile(stock_identifiers["permno"], len(dummy_days)),
  "date": np.repeat(dummy_days, len(stock_identifiers))
})
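The combination of np.tile() and np.repeat() produces the Cartesian product of identifiers and dates. If you prefer an explicit cross join, an equivalent construction of the yearly panel looks as follows; this is only an illustrative alternative (the variable name stock_panel_yearly_alt is hypothetical), not the book's code:

# Equivalent construction of the yearly panel via an explicit cross join
stock_panel_yearly_alt = (stock_identifiers[["gvkey"]]
  .merge(pd.DataFrame({"year": dummy_years}), how="cross")
)

Note that the row order differs from the np.tile() version, but the set of gvkey-year combinations is identical.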
Dummy beta table
We then proceed to create dummy beta values for our stock_panel_monthly table. We generate monthly beta values beta_monthly using np.random.normal() with a mean and standard deviation of one. For daily beta values beta_daily, we take the dummy monthly beta and add a small random noise to it. This noise is again drawn from a normal distribution, but with a standard deviation of 0.01 to ensure that the daily values are only small deviations from the monthly beta.
beta_dummy = (stock_panel_monthly
  .assign(
    beta_monthly=np.random.normal(
      loc=1, scale=1, size=len(stock_panel_monthly)
    ),
    beta_daily=lambda x: (
      x["beta_monthly"]+np.random.normal(scale=0.01, size=len(x))
    )
  )
)

(beta_dummy
  .to_sql(name="beta",
          con=tidy_finance,
          if_exists="replace",
          index=False)
)
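To verify that a dummy table arrived in the SQLite database, you can read a few rows back. This is an illustrative check rather than part of the book's code:

# Peek at the stored beta table to confirm the write succeeded
print(pd.read_sql_query("SELECT * FROM beta LIMIT 5", con=tidy_finance))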
Dummy compustat table
To create dummy firm characteristics, we take all columns from the compustat table and create random numbers between 0 and 1. For simplicity, we set the datadate for each firm-year observation to the last day of the year, although this is empirically not the case.
relevant_columns = [
  "seq", "ceq", "at", "lt", "txditc", "txdb", "itcb",
  "pstkrv", "pstkl", "pstk", "capx", "oancf", "sale",
  "cogs", "xint", "xsga", "be", "op", "at_lag", "inv"
]

commands = {
  col: np.random.rand(len(stock_panel_yearly)) for col in relevant_columns
}

compustat_dummy = (
  stock_panel_yearly
  .assign(
    datadate=lambda x: pd.to_datetime(x["year"].astype(str)+"-12-31")
  )
  .assign(**commands)
)

(compustat_dummy
  .to_sql(name="compustat",
          con=tidy_finance,
          if_exists="replace",
          index=False)
)
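The .assign(**commands) step unpacks the dictionary so that each key becomes a separate column of uniform random numbers. A small standalone example of the same pattern (purely illustrative, with made-up column names):

# Dict unpacking in assign(): each key becomes a new column
example = pd.DataFrame({"id": [1, 2, 3]}).assign(
  **{col: np.random.rand(3) for col in ["at", "sale"]}
)
print(example)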
Dummy crsp_monthly table
Compared to stock_panel_monthly, the crsp_monthly table only requires a few additional columns: the returns ret drawn from a normal distribution, the excess returns ret_excess with small deviations from the returns, the shares outstanding shrout and the last price per month altprc both drawn from uniform distributions, and the market capitalization mktcap as the product of shrout and altprc. We also add the lagged market capitalization mktcap_lag by shifting mktcap within each permno.
crsp_monthly_dummy = (stock_panel_monthly
  .assign(
    ret=lambda x: np.fmax(np.random.normal(size=len(x)), -1),
    ret_excess=lambda x: (
      np.fmax(x["ret"]-np.random.uniform(0, 0.0025, len(x)), -1)
    ),
    shrout=1000*np.random.uniform(1, 50, len(stock_panel_monthly)),
    altprc=np.random.uniform(0, 1000, len(stock_panel_monthly)))
  .assign(mktcap=lambda x: x["shrout"]*x["altprc"])
  .sort_values(by=["permno", "date"])
  .assign(
    mktcap_lag=lambda x: (x.groupby("permno")["mktcap"].shift(1))
  )
  .reset_index(drop=True)
)

(crsp_monthly_dummy
  .to_sql(name="crsp_monthly",
          con=tidy_finance,
          if_exists="replace",
          index=False)
)
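Because mktcap_lag is created with a grouped shift, the first month of each permno is missing by construction. You can confirm this with a quick, purely illustrative check:

# Each permno's first month has a missing lagged market cap,
# so the count of missing values should equal the number of stocks
print(crsp_monthly_dummy["mktcap_lag"].isna().sum())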
Dummy crsp_daily table
The crsp_daily table adds only the daily excess returns ret_excess to the permno and date columns of stock_panel_daily.
crsp_daily_dummy = (stock_panel_daily
  .assign(
    ret_excess=lambda x: np.fmax(np.random.normal(size=len(x)), -1)
  )
  .reset_index(drop=True)
)

(crsp_daily_dummy
  .to_sql(name="crsp_daily",
          con=tidy_finance,
          if_exists="replace",
          index=False)
)
Create Bond Dummy Data
Lastly, we move to the bond data that we use in our books.
Dummy fisd data
To create dummy data with the structure of Mergent FISD, we calculate the empirical probabilities of actual bonds for several variables: maturity, offering_amt, interest_frequency, coupon, and sic_code. We use these probabilities to sample a small cross-section of bonds with completely made-up complete_cusip, issue_id, and issuer_id values.
number_of_bonds = 100

def generate_cusip():
  """Generate cusip."""
  characters = list(string.ascii_uppercase+string.digits)  # Convert to list
  cusip = ("".join(np.random.choice(characters, size=12))).upper()
  return cusip

fisd_dummy = (pd.DataFrame({
    "complete_cusip": [generate_cusip() for _ in range(number_of_bonds)]
  })
  .assign(
    maturity=lambda x: np.random.choice(dummy_days, len(x), replace=True),
    offering_amt=lambda x: np.random.choice(
      np.arange(1, 101)*100000, len(x), replace=True
    )
  )
  .assign(
    offering_date=lambda x: (
      x["maturity"]-pd.to_timedelta(
        np.random.choice(np.arange(1, 26)*365, len(x), replace=True),
        unit="D"
      )
    )
  )
  .assign(
    dated_date=lambda x: (
      x["offering_date"]-pd.to_timedelta(
        np.random.choice(np.arange(-10, 11), len(x), replace=True),
        unit="D"
      )
    ),
    interest_frequency=lambda x: np.random.choice(
      [0, 1, 2, 4, 12], len(x), replace=True
    ),
    coupon=lambda x: np.random.choice(
      np.arange(0, 2.1, 0.1), len(x), replace=True
    )
  )
  .assign(
    last_interest_date=lambda x: (
      x[["maturity", "offering_date", "dated_date"]].max(axis=1)
    ),
    issue_id=lambda x: x.index+1,
    issuer_id=lambda x: np.random.choice(
      np.arange(1, 251), len(x), replace=True
    ),
    sic_code=lambda x: (np.random.choice(
      np.arange(1, 10)*1000, len(x), replace=True)
    ).astype(str)
  )
)

(fisd_dummy
  .to_sql(name="fisd",
          con=tidy_finance,
          if_exists="replace",
          index=False)
)
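By construction, each dummy bond's offering date lies between 1 and 25 years before its maturity. A quick assertion makes this explicit; it is only an illustrative check and not part of the book's code:

# Offerings always precede maturities in the dummy data
assert (fisd_dummy["offering_date"] < fisd_dummy["maturity"]).all()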
Dummy trace_enhanced data
Finally, we create dummy bond transaction data for the fictional CUSIPs of the dummy fisd data. We take the date range that we also analyze in the book and ensure that we have at least five transactions per day to fulfill a filtering step in the book.
number_of_bonds = 100
start_date = pd.Timestamp("2014-01-01")
end_date = pd.Timestamp("2016-11-30")

bonds_panel = pd.DataFrame({
  "cusip_id": np.tile(
    fisd_dummy["complete_cusip"],
    (end_date-start_date).days+1
  ),
  "trd_exctn_dt": np.repeat(
    pd.date_range(start_date, end_date), len(fisd_dummy)
  )
})

trace_enhanced_dummy = (pd.concat([bonds_panel]*5)
  .assign(
    trd_exctn_tm=lambda x: pd.to_datetime(
      x["trd_exctn_dt"].astype(str)+" " +
      np.random.randint(0, 24, size=len(x)).astype(str)+":" +
      np.random.randint(0, 60, size=len(x)).astype(str)+":" +
      np.random.randint(0, 60, size=len(x)).astype(str)
    ),
    rptd_pr=np.random.uniform(10, 200, len(bonds_panel)*5),
    entrd_vol_qt=1000*np.random.choice(
      range(1, 21), len(bonds_panel)*5, replace=True
    ),
    yld_pt=np.random.uniform(-10, 10, len(bonds_panel)*5),
    rpt_side_cd=np.random.choice(
      ["B", "S"], len(bonds_panel)*5, replace=True
    ),
    cntra_mp_id=np.random.choice(
      ["C", "D"], len(bonds_panel)*5, replace=True
    )
  )
  .reset_index(drop=True)
)

(trace_enhanced_dummy
  .to_sql(name="trace_enhanced",
          con=tidy_finance,
          if_exists="replace",
          index=False)
)
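After creating all tables, you can list the contents of the dummy database to confirm that every table required by the book is present. This final check is again only an illustration:

# List all tables in the dummy database
print(pd.read_sql_query(
  "SELECT name FROM sqlite_master WHERE type='table'", con=tidy_finance
))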
As stated in the introduction, the data does not contain any samples of the original data. We merely generate random numbers for all columns of the tables that we use throughout this book.