Wordclouds

As part of my adventures in natural language processing and learning Python, I wanted to learn how to make word clouds. We see these things all the time in PowerPoint presentations.

They look cool, and the technology used to create them is fairly straightforward. The computer counts the number of times each word appears in some text. Words that appear more frequently are drawn bigger (ignoring common words like “the,” “of,” “a” and such), while words that appear less frequently are still shown but take up less space.

This was a bit before the 2020 election, and I wanted to see whether different news sources were covering different topics, and to be able to visualize these differences easily.

I wrote some code (some repurposed from https://www.datacamp.com/community/tutorials/wordcloud-python ) to scrape the RSS feeds of various news sources and generate word clouds. These were the clouds I got from CNN and BBC.

[Image: CNN word cloud, Oct 3rd 2020]
[Image: BBC word cloud, Oct 3rd 2020]

And this is the code I used to generate them. It’s currently set up for the CNN URLs, but you can put in whatever RSS feed URLs you like and it should work (you may need to fix the indentation after pasting; the indents don’t always paste properly, unfortunately).


#import library
import requests
from bs4 import BeautifulSoup
#import pandas to create dataframe and CSV
import pandas as pd
import time
from wordcloud import WordCloud, STOPWORDS 
import matplotlib.pyplot as plt 


#enter URL
cnnurls = ["http://rss.cnn.com/rss/cnn_topstories.rss",
        "http://rss.cnn.com/rss/cnn_world.rss",
        "http://rss.cnn.com/rss/cnn_us.rss",
        "http://rss.cnn.com/rss/money_latest.rss",
        "http://rss.cnn.com/rss/cnn_allpolitics.rss",
        "http://rss.cnn.com/rss/cnn_tech.rss",
     #   "http://rss.cnn.com/rss/cnn_health.rss",
     #   "http://rss.cnn.com/rss/cnn_showbiz.rss",
     #   "http://rss.cnn.com/rss/cnn_travel.rss",
        "http://rss.cnn.com/rss/money_news_companies.rss",
        "http://rss.cnn.com/rss/money_news_international.rss",
        "http://rss.cnn.com/rss/money_news_economy.rss"
       ]
bbcurls = ["http://feeds.bbci.co.uk/news/rss.xml",
           "http://feeds.bbci.co.uk/news/world/rss.xml",
           "http://feeds.bbci.co.uk/news/uk/rss.xml",
           "http://feeds.bbci.co.uk/news/business/rss.xml",
           "http://feeds.bbci.co.uk/news/politics/rss.xml",
          # "http://feeds.bbci.co.uk/news/health/rss.xml",
           "http://feeds.bbci.co.uk/news/education/rss.xml",
           "http://feeds.bbci.co.uk/news/technology/rss.xml",
           "http://feeds.bbci.co.uk/news/entertainment_and_arts/rss.xml"
          ]
news_items = []
for url in cnnurls:
    resp = requests.get(url)

    soup = BeautifulSoup(resp.content, features="xml")

    items = soup.findAll('item')

    #print(len(items))
    
    #scrape item fields such as title, description, link and publication date
    for item in items:
        news_item = {}
        news_item['title'] = item.title.text
        news_item['description'] = item.description.text
       # news_item['link'] = item.link.text
       # news_item['pubDate'] = item.pubDate.text

        news_items.append(news_item)
    time.sleep(1)

df = pd.DataFrame(news_items,columns=['title','description'])
df.to_csv('CNNdata1.csv',index=False, encoding = 'utf-8')

df = pd.read_csv('CNNdata1.csv',encoding = 'utf-8') 
  
comment_words = '' 
stopwords = set(STOPWORDS) 
  
# iterate through the csv file 
for val in df.title: 
      
    # typecast each val to string
    val = str(val) 
  
    # split the value 
    tokens = val.split() 
      
    # Converts each token into lowercase 
    for i in range(len(tokens)): 
        tokens[i] = tokens[i].lower() 
      
    comment_words += " ".join(tokens)+" "
  
wordcloud = WordCloud(width = 800, height = 800, 
                background_color ='white', 
                stopwords = stopwords, 
                min_font_size = 10).generate(comment_words) 
  
# plot the WordCloud image                        
plt.figure(figsize = (8, 8), facecolor = None) 
plt.imshow(wordcloud) 
plt.axis("off") 
plt.tight_layout(pad = 0) 
  
plt.show()
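
If you also want to save the image to a file (to drop into a post or a presentation), the wordcloud package can write a PNG directly. This is a small optional addition; the file name is just an example:

# save the word cloud image to a file as well as displaying it
wordcloud.to_file('cnn_wordcloud.png')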

Natural Language Processing

I recently started looking into some natural language processing (NLP) techniques, largely as a consumer of such research, rather than as a producer of such research. With the large amount of textual data available (10-K MD&A sections, Mutual Fund form N-CSR’s management discussion sections, analyst reports, news articles, earnings calls, etc. etc.) this seems to be fertile ground for new research.

My sense is the earlier work in this area largely revolved around word counts: treating text as a “bag of words” and then counting how many times certain types of words appeared in these bags. For example, for sentiment analysis, a common technique would be to count the number of positive words (where positive words were given by some dictionary, e.g. this one), count the number of negative words, and then take a ratio of positive to negative words to determine the overall sentiment of a piece of text. Some work extended this by creating custom dictionaries to address the unique vocabulary in finance and accounting.

Newer work seems more teched-up and generally considers the relationship between words (for example, the word “board” means very different things in “being on board” and “board of directors”). This type of work uses constructs that are harder to capture with dictionaries, and generally uses some type of machine learning to link blocks of text to a measurable variable. For example, a researcher might train a computer by providing a few thousand sentences, along with the researcher’s classification of these sentences into positive, negative, and neutral. After this, the computer can generally classify sentences quite accurately out-of-sample.
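
To make that training step concrete, here is a minimal sketch of this kind of supervised sentence classification using scikit-learn. The labeled sentences are made-up placeholders; a real application would use a few thousand researcher-classified sentences and a more careful model.

# a minimal sketch of training a sentence-level sentiment classifier
# (the labeled sentences here are made-up placeholders)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = ["Earnings beat expectations this quarter.",
                   "The company missed guidance and cut its dividend.",
                   "The meeting is scheduled for Tuesday."]
train_labels = ["positive", "negative", "neutral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_sentences, train_labels)

# classify new, out-of-sample sentences
print(model.predict(["Revenue declined sharply."]))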

I toyed around with the simplest version of this (bag of words, positive vs. negative counts, etc.) and wrote some code that takes a news article and gives the number of positive words, negative words, and total words. The code is below.


# imports: requests to fetch the article, html2text to strip the HTML, punctuation to clean the text
import requests
import html2text
from string import punctuation

# This bit gets positive and negative words from your dictionaries
pos_sent = open("positive.txt").read()
positive_words=pos_sent.split('\n')
neg_sent = open("negative.txt").read()
negative_words=neg_sent.split('\n')

#this defines a function that takes a requests response as input, along with three running counters, and returns the updated counters
def parsenews(response,positive_counter,negative_counter,total_words):
    # this next bit formats the response txt as needed -
    txt = response.text
    simpletxt = html2text.html2text(txt)
    #print(simpletxt)
    #print(txt)      
    simpletxt_processed=simpletxt.lower() 
    # this removes punctuation
    for p in list(punctuation):
        simpletxt_processed=simpletxt_processed.replace(p,'')
    # split into words after all punctuation has been removed
    words=simpletxt_processed.split(' ')
    for word in words:
        if word in positive_words and len(word) > 2:
            #print(word)
            positive_counter=positive_counter+1
        if word in negative_words and len(word) > 2:
            #print(word)
            negative_counter=negative_counter+1
    total_words = total_words + len(words)
    return positive_counter,negative_counter,total_words
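
As a usage sketch, the function can be called on a fetched article like this (the URL is just a placeholder for whatever article you want to score), and the positive-to-negative ratio computed from its outputs:

# example usage: fetch an article and run it through the counters
# (the URL is a placeholder - substitute any news article)
positive_counter, negative_counter, total_words = 0, 0, 0
response = requests.get("https://www.example.com/some-news-article")
positive_counter, negative_counter, total_words = parsenews(response, positive_counter, negative_counter, total_words)
print(positive_counter, negative_counter, total_words)
if negative_counter > 0:
    print("positive to negative ratio:", positive_counter / negative_counter)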

It seemed relatively straightforward to do the “bag of words” positive vs negative sentiment counts. At some point, I might try the more complicated stuff, but for now, I just look forward to seeing more cool studies using these techniques.

Looping and scraping

In the previous posts, I covered how to scrape some data (like a stock price) from a website. To get a workable dataset, we can write some code to continually loop, and collect that same data at a fixed interval.

The code below does this. A few points. (1) Python uses indentation as part of the syntax. After starting a loop (the while 1==1: statement below) or a conditional (the if statement below), everything you want looped or conditionally done has to be indented. (2) The while 1==1 line simply says to keep doing this forever, since 1 will always be equal to 1. (3) The if statement checks whether the current minute is divisible by 5 and runs the scraping code if it is. You can change the interval by changing 5 to another number, or by using the now.second or now.hour numbers instead.

from selenium import webdriver
import datetime
import time
from multiprocessing import Pool,TimeoutError
import urllib.request
import re
from urllib.error import URLError, HTTPError

while 1==1:
    now = datetime.datetime.now()
    if now.minute/5 == int(now.minute/5):
        driverspy = webdriver.Chrome()
        driverspy.get('https://finance.yahoo.com/quote/SPY?p=SPY')
        sourcespy = driverspy.page_source
        now = datetime.datetime.now()
        found = re.search('"52">(\d+\.\d+)</span>', sourcespy).group(1)
        print("Time:"+str(now.hour)+":"+str(now.minute)+":"+str(now.second)+" Price:"+str(found))
        time.sleep(75)
        driverspy.quit()

While the code runs, you’ll get output that looks like the following. You can then either copy-paste this into a CSV file or use Python code to export it (see the sketch after the sample output) in order to start building a dataset.

Time:12:15:20 Price:302.10
Time:12:20:8 Price:302.08
Time:12:25:19 Price:302.05
Time:12:30:20 Price:302.07
Time:12:35:9 Price:302.17
Time:12:40:9 Price:302.09
Time:12:45:28 Price:302.22
Time:12:50:28 Price:302.24
Time:12:55:16 Price:302.26
Time:13:0:8 Price:302.18
Time:13:5:9 Price:302.01
Time:13:10:8 Price:301.96
Time:13:15:28 Price:302.01
Time:13:20:29 Price:302.04
Time:13:25:8 Price:301.96
Time:13:30:20 Price:301.96
Time:13:35:19 Price:302.10
Time:13:40:28 Price:302.27
Time:13:45:20 Price:302.24
Time:13:50:8 Price:302.21
Time:13:55:8 Price:302.19
Time:14:0:8 Price:302.16
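
To have Python write the CSV itself, a minimal sketch would be to add something like the following inside the if block, right after the print statement (indented to match; prices.csv is just an example file name):

# sketch: append each observation to a CSV file instead of only printing it
import csv

with open('prices.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    writer.writerow([str(now.hour)+":"+str(now.minute)+":"+str(now.second), found])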

Webscraping with Python 2

After an interesting class spent helping students install Jupyter Notebook and get some basic web automation up and running with selenium and chromedriver, I realized there are some common pitfalls with easy (or not so easy) fixes.

When you run code in Python, you will sometimes (in my case, often) get an error. Since Python is a package-based language, the error message will sometimes be long and complicated. The most important thing to look for is right at the end, which refers to the line of code that generated the error.

So, for example, if you try to copy and run the code in the first Webscraping tutorial, the first error you will receive is:

[Image: error1]

This is a result of the quotation marks on this website being much fancier than those Python can handle. Essentially, all quotation marks should be non-directional, so ' and " instead of the directional (curly) ‘, ’, “ and ”. Replace directional quotation marks with non-directional ones.

The next error you will likely receive is:

[Image: error2]

This simply says you need a package (or module) called selenium installed. On a Windows machine, this is done by opening the Anaconda Prompt (Start->Anaconda3->Anaconda Prompt) and typing in the following: pip install selenium <enter>

This should be followed by the installation taking place and some text indicating success, something that looks like this:

[Image: pip install output]

If you use a Mac, you can do the same thing by opening up a terminal window and typing in the same thing.

The next error you might receive is one involving chromedriver. It might say chromedriver is not in PATH, or perhaps that chromedriver is not compatible with your version of Chrome. On a PC, the first error is fixed by putting a copy of chromedriver.exe (not the zip file, and not a shortcut) in the same folder as your Python notebook. If you don’t know where your Python notebooks are in your directory structure, you can search for ipynb files on your computer. Jupyter notebook files have the extension *.ipynb, so they should be quite easy to find.

On a Mac, the first error is fixed by adding the folder with chromedriver to the system PATH (see instructions here and follow the 3rd set of instructions, adding a directory to PATH for all users, forever). For more information on what PATH is, check out the delightful wiki on the subject.

Finally, the last error you will likely get will be:

[Image: error3]

This is a cryptic error and simply means that it could not find the snippet of text the re.search command was looking for. That’s because Yahoo often changes the source code and the tag number changes from 35 to something else. As of the time of writing, it is 52. With that final fix, the code should be able to run.

[Image: noerror]

Notice the last line I added: print(found) – without this line, the code would run, but would not do anything. The final line generates feedback to indicate success! The price of SPY at the time of running was $299.35.

So… what can we do with this? Well, we can write a small loop to get the price of SPY every few minutes. More on that in a bit….

Quantitative Investing Beyond Equities

I recently received a reference request for an alumnus of my class who was seeking employment at a Financial Advisory Firm. It was a very pleasant and productive encounter – my former student advised me via email that I was listed as a reference and I might get a call; I received an email from a pretty high level person at my student’s prospective employer to schedule a call; we had a very productive call.

During the call, I told the employer about some of the quantitative investing stuff we do in my class. The employer said it would be useful – their firm did similar stuff for a fixed income product. This was my second run-in with a firm that does quant stuff with fixed income. It appears quantitative investing is growing in fixed income, but there may also be issues. (see https://www.barrons.com/articles/is-fixed-income-ready-for-factors-1530897141 )

Blackrock has a delightful webpage on the space ( https://www.ishares.com/us/strategies/fixed-income-factors ) where they highlight the main factors in fixed income (FI) as value, quality, momentum, carry, and low vol. Very similar to Equities. There’s also academic work in this regard ( https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2516322  for example).

On the other hand, high transactions costs, large minimum investment amounts, minute differences between bonds that broad factors may not pick up (but that may end up making a huge difference), and buy-and-hold-to-maturity investors may prove to be headwinds in the space.

More specifically, there may be additional signals, besides the usual corporate finance and market price signals, that may be informative. The employer I spoke to was in the muni bond space, and was using geographic data ( I imagine micro level data from the various municipalities whose bonds they were considering )  to try and predict future credit moves.

I’d imagine with the wealth of data out there, and the variety of financial instruments traded, there may be some very interesting predictive relationships to be uncovered outside the equity markets.

Webscraping with Python

This is some code I wrote to scrape stock prices with Python. I wrote it on Jupyter notebook.

First off, you’ll need chromedriver (Google “download chromedriver” and get the file from the first link). Put it in the folder with your Jupyter notebook.

Next, you’ll need a bunch of libraries, some of which will need to be pip installed.

from selenium import webdriver
import datetime
import time
from multiprocessing import Pool,TimeoutError
import urllib.request
import re
from urllib.error import URLError, HTTPError

In the code below, you won’t need all of this, but I’m just copying the entire import section of my code.

Next, we’ll fire up a browser.

driverspy = webdriver.Chrome()
driverspy.get('https://finance.yahoo.com/quote/SPY?p=SPY')

This should open a Python-controlled browser that surfs its way to Yahoo Finance and loads up the page for SPY (a popular S&P 500 ETF).

Finally, we’ll grab the page source and scrape the price off the page.

sourcespy = driverspy.page_source
found = re.search('"35">(\d+\.\d+)</span>', sourcespy).group(1)

If you look at the html code of the page_source of the Yahoo page with the SPY data, you’ll see it has, buried in it, something that looks like this:

<span class="Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)" data-reactid="35">283.82</span><span class="Trsdu(0.3s) Fw(500) Pstart(10px) Fz(24px) C($dataGreen)" data-reactid="36">+1.72 (+0.61%)</span><div

We rely on the markup around the price always being the same ("35"> … </span>) and enclosing the price (283.82), which is what lets us extract it. The \d+\.\d+ tells Python to look for one or more digits, a period, and more digits.
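
To see the regular expression in isolation, here is a toy example using a made-up snippet of page source:

import re

# toy example: pull the price out of a made-up snippet of page source
snippet = '<span class="Trsdu(0.3s)" data-reactid="35">283.82</span>'
price = re.search(r'"35">(\d+\.\d+)</span>', snippet).group(1)
print(price)  # 283.82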

Now, we have a basic scraper to get prices from Yahoo finance. If we set up a loop, we can get prices every few minutes and generate a time series dataset.

Python Basics

I recently learned and started using Python for some of my projects. Python is a high level programming language with a number of pre-programmed packages for a variety of useful tasks. Tasks I’ve used Python for include scraping the web for data (excellent!), machine learning (meh … but that’s more my fault than Python’s), OCR (super meh), and algorithmic name classification, such as gender determination (again, excellent!).

While I will not provide direct code to perform predictive analysis using Python, I will use this post to link to a variety of resources that I have used, along with how I use them.

First, how to get started with Python. I use Jupyter Notebook, along with Anaconda. Both of these are installed when you download and install the latest version of Anaconda – Google “download jupyter notebook” and go to the first link. The actual download will be from the Anaconda website. As of posting, the latest version is Python 3.6. Click “Download,” run the file, choose all the default options, and install Python and Jupyter Notebook.

Jupyter Notebook runs inside your browser. Open up Jupyter Notebook, create a folder for coding, and then create a new Notebook. Each Notebook has distinct cells for distinct blocks of code that can be run separately. Once you run the code in a cell, the output is produced right below. Here is an example:

[Image: Capture2 - example Jupyter Notebook cells and their output]

As you can see, when you run each cell, it simply generates the output right below. One thing I wanted to point out is that variables and variable types are generated dynamically: the code first defines a as an integer (a=1) and then sets it to two. Printing (and other functions) can be applied to integers (e.g. print(a)) or strings (e.g. print('hello world')), but not to a mix of the two (see the error in the second cell).
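
In case the screenshot doesn’t render, the cells contained roughly the following (a reconstruction for illustration, not the exact screenshot):

# first cell: a is defined as an integer and then set to two
a = 1
a = 2
print(a)              # prints 2
print('hello world')  # printing a string also works

# second cell: mixing a string and an integer raises a TypeError
print('hello world' + a)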

The second thing (and I love this) is that indentation is part of the language.

if 3>2:
    print('hi')
    print('there')

will return

hi
there

if 3<2:
    print('hi')
    print('there')

will return nothing

but

if 3<2:
    print('hi')

print('there')

will return

there

The indentation controls what is run in the “if” statement. This forces discipline in generating readable (and workable) code.

Once you’ve gotten Python up and running, you’ll need additional packages to do other things.

For webscraping, I’d recommend selenium and chromedriver.

For OCR, I’d recommend Tesseract (Google’s OCR).

For machine learning, I use (but don’t know enough to recommend) tensorflow.

Limited attention and …

One part of my academic research agenda deals with the effects of limited attention on professional investing. This paper uses marital events as a shock to attention and shows how managers behave differently when they’re getting married or divorced. We find that managers in general become less active in their trading/investing, suffer more from behavioral biases, and perform worse.

This past semester, I moved from the Univ. of Florida to the Univ. of Alabama and prepped a couple of new classes. While not as stressful as marriage or divorce, I did devote a bit less time to my portfolio this semester … so what happened in my case?

The biggest changes were: (1) I rebalanced less… I probably ran my screen once or twice the entire semester to look for equities to deploy assets to. (2) I did not, even once, look for improvements to my screens or run any of my secondary screens.

The effects were not immediately felt on performance, but I suspect if I continued on this “autopilot” path, so to speak, the end result would be a less than manicured portfolio and eventually, a stale and less than robust investment screen. All in all, I feel my personal experience is consistent with what we found in the paper above.

Change … in general

With my recent move to the University of Alabama from the University of Florida, I started thinking about the topic of change. In the context of quantitative investing, that means changes in basic rules that we take as given when investing quantitatively, and what we can do about them.

Here’s an example – academics have long taught the CAPM, a model that predicts that companies with higher systematic risk (risk stemming from overall market conditions) should outperform companies with lower systematic risk. This makes intuitive sense… riskier companies, especially companies with higher risk exposure to overall economic conditions *should* give higher returns, on average. The empirical evidence in the 70s and 80s (in hindsight) was mixed, but in general we accepted this wisdom.

However, starting in the 1990s, we began to question this basic assumption. Fama and French (1992), Table II, documented this:

[Image: betaret - average monthly returns by pre-ranking beta decile, from Fama and French (1992), Table II]

Companies are sorted by “pre-ranking betas” (simply the beta estimated using data from before the period over which we measure the returns), and average monthly returns for the next year are presented by pre-ranking beta decile. 1A and 1B are the 0-5% and 5-10% of companies by pre-ranking beta, 2-9 are the 2nd through 9th deciles by pre-ranking beta, and 10A and 10B are the 90-95% and 95-100% of firms by pre-ranking beta.
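
For readers who want the mechanics, here is a rough sketch of the sort using made-up data (the actual Fama-French procedure has several more steps; this just illustrates estimating a pre-ranking beta for each stock and sorting stocks into beta deciles):

import numpy as np
import pandas as pd

# made-up data: 60 months of market returns and 100 hypothetical stocks
rng = np.random.default_rng(0)
market = pd.Series(rng.normal(0.01, 0.04, 60))
returns = pd.DataFrame(rng.normal(0.01, 0.08, (60, 100)))

def estimate_beta(stock, mkt):
    # beta = cov(stock, market) / var(market)
    return np.cov(stock, mkt)[0, 1] / np.var(mkt, ddof=1)

# pre-ranking beta for each stock, then sort stocks into beta deciles
betas = returns.apply(lambda col: estimate_beta(col, market))
deciles = pd.qcut(betas, 10, labels=False)   # 0 = lowest-beta decile, 9 = highest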

Fama and French wrote, “the beta-sorted portfolios do not support the [CAPM]. … There is no obvious relation between beta and average returns.” [disclaimer: to my eyes, if I squint hard enough, I can sort of see a slight increase in average returns with beta, but the magnitude and monotonicity of the effect are both questionable.]

And so the decline of empirical belief in the CAPM started, and today there is little faith that stocks with high market betas will outperform the market (although the CAPM is still widely taught). (See, for example, “Is Beta Dead” from the appendix of a popular finance textbook.)

In fact, the academic literature has made something of a 180 on this topic. The new hot anomaly is “low vol,” or “low beta.” The literature around this anomaly shows that low volatility/low beta stocks actually outperform high volatility/high beta stocks, and proposes several stories as to why this might be the case. If something so firmly grounded in theory can experience so complete a change, I think it’s a cautionary tale for *all* quantitative strategies … all things (including both the CAPM beta and my time at Florida) run their course eventually.

Momentum Across Anomalies

In a new academic piece, we examine whether anomalies themselves exhibit momentum. Momentum in the context of investing refers to the idea that stocks that have done well recently continue to do well and stocks that have done poorly recently continue to do poorly. The momentum anomaly in stocks was widely publicized by Jegadeesh and Titman in their 1993 paper titled, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.”

We find the same idea holds for anomalies themselves. Examining 13 anomalies, we find that anomalies that have performed well recently (in the last month) continue to do well next month. Anomalies that have been performing poorly recently continue to experience poor performance going forward. A chart makes this clear:

[Image: anomom - growth of $10,000 in the three anomaly-momentum strategies]

The chart documents the evolution of $10,000 invested in one of three strategies. The top line is a strategy that, each month, invests in the top half of the 13 anomalies being analyzed (7, since investing in 6½ anomalies is hard), based on the anomalies’ performance in the previous month. So, for example, if the value, momentum, size, profitability, accruals, investment level, and O-score anomalies did better than the other 6 anomalies we analyzed last month, the strategy would invest equally in these 7 anomalies. The bottom line does the opposite, investing in the bottom 6 anomalies, and the middle line invests equally in all 13 anomalies across the entire period.
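
For concreteness, here is a rough sketch of the top-half rule with simulated numbers standing in for the 13 anomaly return series (the paper’s actual construction has more detail than this):

import numpy as np
import pandas as pd

# simulated monthly returns for 13 hypothetical anomaly portfolios
rng = np.random.default_rng(1)
months = pd.period_range('2000-01', periods=240, freq='M')
anomaly_returns = pd.DataFrame(rng.normal(0.005, 0.02, (240, 13)), index=months)

# each month, equally weight the 7 anomalies that did best in the previous month
last_month = anomaly_returns.shift(1)
top_half = last_month.rank(axis=1, ascending=False) <= 7
winners = anomaly_returns[top_half].mean(axis=1).iloc[1:]   # skip the first month (no prior data)
all_13 = anomaly_returns.mean(axis=1).iloc[1:]              # equal weight in all 13 anomalies

# growth of $10,000 in each strategy
print(10000 * (1 + winners).cumprod().iloc[-1])
print(10000 * (1 + all_13).cumprod().iloc[-1])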

From the chart, it is clear that anomalies themselves exhibit momentum and the result is robust to the usual battery of academic tests. From a practitioner’s perspective, the implication is clear: if you’re interested in smart beta type investing, pick a strategy (or strategies) that has been doing well recently. From an academic’s perspective, the more interesting question is why. If you’re interested in our take on reasons for this, you can read our paper for some reasons we think we observe this momentum across anomalies.