This workshop addresses various topics in Natural Language Processing, primarily through the use of NLTK. We’ll work with a corpus of documents and learn how to identify different types of linguistic structure in the text, which can help in classifying the documents or extracting useful information from them. We’ll cover tokenization, part-of-speech (POS) tagging, chunking of phrases, named entity recognition (NER), and dependency parsing.
Prior knowledge: Attendees should have a working knowledge of Python; completion of D-Lab’s Python FUN!damentals series is sufficient.
Technology requirements: Please install Python 3 and the following packages before the workshop.
- NLTK (In Bash: $ pip install nltk)
- NLTK corpora (In Python: >>> nltk.download('book'))
- Stanford Parser: Download the Stanford Parser 3.6.0 and unzip it to a location that’s easy for you to find (e.g. a folder called SourceCode in your Documents folder)
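Once the parser is unzipped, NLTK can drive it for the dependency-parsing portion of the workshop. The sketch below is an assumption-laden example: the directory path and jar names reflect the standard 3.6.0 download, but you should adjust them to wherever you actually unzipped it, and running the parser also requires Java to be installed.

```python
import os
from nltk.parse.stanford import StanfordDependencyParser

# Assumed unzip location -- adjust to match your own setup.
parser_dir = os.path.expanduser(
    "~/Documents/SourceCode/stanford-parser-full-2015-12-09")
jar = os.path.join(parser_dir, "stanford-parser.jar")
models_jar = os.path.join(parser_dir, "stanford-parser-3.6.0-models.jar")

# Instantiating the parser requires Java and the two jars above,
# so only attempt a parse when they are actually present.
if os.path.exists(jar) and os.path.exists(models_jar):
    dep_parser = StanfordDependencyParser(path_to_jar=jar,
                                          path_to_models_jar=models_jar)
    parse, = dep_parser.raw_parse("The fox jumps over the dog.")
    # Each triple is (governor, relation, dependent).
    for governor, relation, dependent in parse.triples():
        print(governor, relation, dependent)
else:
    print("Stanford Parser jars not found at", parser_dir)
```

Running this with the jars in place prints dependency triples such as the subject and object relations of the sentence; without them, it simply reports the missing path so you can check your setup before the workshop.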