Splits the 'text' column into words, stored in a 'tokens' variable, then tabulates their frequency within each document in a 'counts' variable

jl_count_words(x, ...)

Arguments

x

a tibble

...

extra arguments passed on to tokenizers::tokenize_*

Value

a tibble

Details

This is a shortcut for running jl_tokenize_words followed by jl_count_tokens, and should be used in preference to calling those two functions separately.
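
A minimal usage sketch, assuming the input tibble has a document identifier and a 'text' column (the column name doc_id and the argument lowercase shown here are illustrative, not confirmed by this page):

```r
library(tibble)

# Hypothetical input: one row per document, with a 'text' column
docs <- tibble(
  doc_id = c(1L, 2L),
  text = c("the cat sat on the mat", "the dog barked")
)

# Tokenize and count in one step; extra arguments (e.g. lowercase = FALSE,
# if supported by the underlying tokenizer) are forwarded to tokenizers::tokenize_*
counts <- jl_count_words(docs)

# Equivalent, but less preferred, two-step pipeline:
# counts <- jl_count_tokens(jl_tokenize_words(docs))
```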