Natural language processing (NLP) is a growing subfield of AI with a multitude of applications. A common and seemingly straightforward application is document similarity, which is typically implemented with various NLP algorithms. Along with the versatility of these techniques come drawbacks: each algorithm tends to focus on one or a few factors of similarity, so it can excel at one type of similarity assessment while struggling with another. This thesis investigates three NLP techniques and their ability to automate similarity assessments between course contents, for use in determining course eligibility or crediting courses. Today, this comparison is done manually. To determine which factors are important when crediting courses, three algorithms were implemented and evaluated on a set of course comparison tests. The algorithms and factors chosen were TF-IDF for weighted term overlap, N-grams for contextual matching, and named entity recognition (NER) with keyword extraction for topic detection. Judged on overall accuracy alone, NER with keyword extraction seemed the best choice, until it became apparent that it was consistently and confidently giving wrong answers: it assigned high similarity scores to courses that shared some traits, such as being offered by the same university, but were not similar enough to be credited as equivalent. Using N-grams to determine similarity was the most reliable approach for both similar and dissimilar courses. TF-IDF did not perform adequately with its current vocabulary. In summary, context-based similarity with N-grams proved to be a reliable and useful factor for the automatic crediting of courses, but further work is needed before it can be used in practice.
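
As a minimal illustrative sketch (not the implementation used in the thesis), N-gram-based contextual matching between two course descriptions can be approximated by comparing their sets of word n-grams, for example with a Jaccard similarity. The example texts, the choice of n = 2, and the helper names below are assumptions made purely for illustration.

```python
# Minimal sketch of n-gram based similarity between two course descriptions.
# Illustrative assumption only; not the implementation evaluated in the thesis.

def word_ngrams(text: str, n: int = 2) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a lower-cased, whitespace-tokenized text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def ngram_similarity(text_a: str, text_b: str, n: int = 2) -> float:
    """Jaccard similarity between the word n-gram sets of two texts (0.0 to 1.0)."""
    grams_a, grams_b = word_ngrams(text_a, n), word_ngrams(text_b, n)
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)


if __name__ == "__main__":
    # Hypothetical course descriptions, used only to demonstrate the call.
    course_a = "Introduction to linear algebra covering vectors matrices and eigenvalues"
    course_b = "Linear algebra basics covering vectors matrices eigenvalues and linear maps"
    print(f"bigram similarity: {ngram_similarity(course_a, course_b):.2f}")
```

A set-based Jaccard score is only one way to turn shared n-grams into a similarity value; weighting the n-grams or varying n would change how strongly contextual matches count.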