Many modern natural language processing systems, such as the OpenAI GPT models (including ChatGPT), rely on tokenization of text. A new algorithm makes it possible to perform this tokenization with a deterministic finite automaton (DFA). To reduce the size of these DFAs and examine how this affects performance, this report proposes a new algorithm that builds a compressed automaton incrementally. The proposed algorithm compresses the automaton by introducing default transitions between states, which allows many ordinary transitions to be removed. This compression reduces the DFA to less than 2% of its original size for both realistic and randomly generated vocabularies. The total time needed to build the DFA with the incremental algorithm is less than 15% of the time required by the original construction. This is an encouraging result that enables further exploration of how compression techniques can be used in DFA-based tokenization.
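To illustrate the general idea of default transitions (not the report's specific construction), the following minimal Python sketch shows a DFA in which a state stores only the transitions that differ from a chosen default state; all other symbols follow the default link and the lookup is retried there. The state layout and the tiny alphabet are invented for illustration only.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class State:
    # Explicit transitions kept in this state (symbol -> target state id).
    transitions: Dict[str, int] = field(default_factory=dict)
    # Optional default transition: where to continue the lookup when the
    # symbol has no explicit transition in this state.
    default: Optional[int] = None

def step(states: Dict[int, State], state_id: int, symbol: str) -> Optional[int]:
    """Follow explicit transitions first, falling back along default links."""
    current: Optional[int] = state_id
    while current is not None:
        state = states[current]
        if symbol in state.transitions:
            return state.transitions[symbol]
        current = state.default  # retry the lookup in the default state
    return None  # no transition for this symbol from the starting state

# Example: state 2 stores only the transition that differs from state 1,
# so lookups for other symbols fall back to state 1 via the default link.
states = {
    0: State(transitions={"a": 1, "b": 2}),
    1: State(transitions={"a": 1, "b": 0, "c": 0}),
    2: State(transitions={"c": 1}, default=1),  # inherits "a" and "b" from state 1
}

assert step(states, 2, "c") == 1  # explicit transition in state 2
assert step(states, 2, "a") == 1  # falls back to state 1's transition

In this sketch, states whose transition tables largely duplicate those of another state only need to store the differences, which is the intuition behind removing many ordinary transitions while preserving the automaton's behaviour.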