This three-part workshop will prepare participants to move forward with research that uses text analysis, with a special focus on social science applications. We explore fundamental approaches to applying computational methods to text in Python. We cover some of the major packages used in natural language processing, including scikit-learn, NLTK, spaCy, and Gensim.
- Part 1: Preprocessing. How do we standardize and clean text documents? Text data is noisy, and we often need a pipeline that standardizes it for computational modeling. You will learn common and task-specific preprocessing operations, becoming familiar with widely used NLP packages and their capabilities. You will also learn about tokenizers and how they have changed since the advent of Large Language Models.
- Part 2: Bag-of-words. Before we can analyze text computationally, we need to convert it into a numeric representation. You will learn how to convert text data into a document-term frequency matrix, and how TF-IDF weighting complements the bag-of-words representation. You will also learn about the parameter settings of a vectorizer and apply sentiment classification to vectorized text data.
- Part 3: Word Embeddings. Word embeddings underpin nearly all modern language models. In this part, you will learn the differences between a bag-of-words representation and word embeddings, be introduced to calculating cosine similarity between words, and learn how word embeddings can encode biases.
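As a taste of Part 1, a minimal preprocessing pipeline might look like the following pure-Python sketch. The function name and cleaning rules here are illustrative, not the workshop's own code; in practice, packages such as NLTK and spaCy provide much richer tokenization and normalization tooling.

```python
import re

def preprocess(text):
    """Illustrative cleaning pipeline: lowercase, strip punctuation, tokenize."""
    text = text.lower()                    # standardize case
    text = re.sub(r"[^\w\s]", " ", text)   # replace punctuation with spaces
    return text.split()                    # whitespace tokenization

print(preprocess("Hello, World! NLP is fun."))
```

Real pipelines are task-specific: stopword removal, stemming, or lemmatization may be added or omitted depending on the downstream model.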
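For Part 2, scikit-learn's vectorizers turn a list of documents into the numeric matrices described above. The toy corpus below is illustrative; the vectorizer classes are scikit-learn's actual API.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]

# Bag-of-words: each row is a document, each column a vocabulary term's count
count_vec = CountVectorizer()
counts = count_vec.fit_transform(docs)
print(count_vec.get_feature_names_out())  # vocabulary learned from the corpus
print(counts.toarray())

# TF-IDF: down-weights terms that appear in many documents (e.g. "the")
tfidf_vec = TfidfVectorizer()
weights = tfidf_vec.fit_transform(docs)
print(weights.toarray().round(2))
```

Vectorizer parameters such as `stop_words`, `ngram_range`, and `min_df` control which terms enter the vocabulary, which is part of what Part 2 explores.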
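For Part 3, cosine similarity measures how close two embedding vectors point in the same direction. The three-dimensional vectors below are hand-made stand-ins, not outputs of a trained model (real embeddings from, e.g., Gensim typically have hundreds of dimensions); the formula is the standard one.

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (||a|| * ||b||)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy "embeddings" for illustration only
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # higher: vectors point similarly
print(cosine_similarity(king, apple))  # lower: vectors diverge
```

The same computation over real trained embeddings is also how embedding bias is probed, by comparing similarities between identity terms and stereotyped attributes.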
The materials for this workshop series are designed to build on each other. Part 2 assumes familiarity with the content from Part 1, and Part 3 similarly requires understanding of both preceding parts.
Prerequisites: We recommend attending Python Fundamentals, Python Data Wrangling, and Python Machine Learning Fundamentals prior to this workshop.