Splits the `text` column into word tokens, stored in a `tokens` variable, and then tabulates them within each document in a `counts` variable.
`jl_count_words(x, ...)`
| Argument | Description |
|---|---|
| `x` | a tibble |
| `...` | extra arguments passed on to `tokenizers::tokenize_*` |
Returns a tibble of per-document word counts.
This is a shortcut for running `jl_tokenize_words()` followed by `jl_count_tokens()`, and should be used in preference to calling those two functions separately.
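A minimal usage sketch follows. It assumes the package providing `jl_count_words()` is loaded and that the input tibble has a `text` column (as implied by the description above); the `doc_id` column and the `lowercase` argument forwarded to `tokenizers::tokenize_words()` are illustrative assumptions, not part of this page.

```r
library(tibble)

# Example documents; the `text` column holds the raw text to be tokenized
# (doc_id is an assumed identifier column for illustration only).
docs <- tibble(
  doc_id = c(1, 2),
  text   = c("the cat sat on the mat", "the dog barked")
)

# Split `text` into word tokens and tabulate them within each document.
word_counts <- jl_count_words(docs)

# Extra arguments are forwarded to tokenizers::tokenize_*,
# e.g. lowercase = FALSE to keep the original casing (assumed forwarding).
word_counts_cased <- jl_count_words(docs, lowercase = FALSE)
```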