# nlp

Natural Language Processing in Golang.

- GitHub: https://github.com/rotationalio/nlp
- Go Docs: https://go.rtnl.ai/nlp
## The Vision
nlp will house high-performance natural language processing tools and quantitative metrics, especially ones that can be computed over text: think statistical or structural properties of a string or document.
The first use case is text analysis inside Endeavor.
nlp will enable NLP for AI engineers: numeric metrics that help us reason about what a model did, what the humans expected, and how to compare the two.
We want this package to be:
- Performant (written in Go, reused across tools)
- Composable (individual functions that do one thing well)
- Extensible (easy to add new metrics as we learn more)
## Design Goals
- Each metric should be self-contained and independently callable
- Avoid hard dependencies on LLMs or external services
- Define a common interface (e.g., all metrics take a `string` and return a `float64` or `map[string]float64`)
- Organize by category (similarity, counts, readability, lexical, etc.)
- Stub out room for future metrics, even weird ones
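The common-interface goal could be sketched roughly as follows. This is illustrative only: the `Metric` type and the sample metrics here are assumptions made for the sketch, not the library's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// Metric is a sketch of the common shape described above:
// take a string, return a float64.
type Metric func(text string) float64

// wordCount is a toy metric: the number of whitespace-separated words.
func wordCount(text string) float64 {
	return float64(len(strings.Fields(text)))
}

// meanWordLength is a toy metric: the average word length in runes.
func meanWordLength(text string) float64 {
	words := strings.Fields(text)
	if len(words) == 0 {
		return 0
	}
	var total int
	for _, w := range words {
		total += len([]rune(w))
	}
	return float64(total) / float64(len(words))
}

func main() {
	// Metrics organized by name; each one is independently callable.
	metrics := map[string]Metric{
		"word_count":       wordCount,
		"mean_word_length": meanWordLength,
	}
	for name, m := range metrics {
		fmt.Printf("%s: %.2f\n", name, m("the quick brown fox"))
	}
}
```

Because every metric shares one signature, new metrics can be added without changing any calling code.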
## End-User API Usage
There are two ways to use this library:

- Use the unified `text.Text` interface (see the example below) to perform all of the possible NLP operations through a single object, configured with the specific tools you wish to use when it is created via `text.New(chunk string) *text.Text`. This also includes the `token.Token` and `tokenlist.TokenList` types, which have their own useful features.
- Use the various tools in the lower-level packages, such as the `stem` or `tokenize` packages, on an as-needed basis. These tools generally use basic Go types such as strings, ints, floats, and slices of the same.
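As a flavor of that lower-level style, here is a minimal whitespace tokenizer operating purely on basic Go types. This is an illustrative stand-in, not the actual `tokenize` package API.

```go
package main

import (
	"fmt"
	"strings"
)

// whitespaceTokenize splits a chunk into words on any run of Unicode
// whitespace, mirroring the "basic Go types in, basic Go types out"
// style of the lower-level packages.
func whitespaceTokenize(chunk string) []string {
	return strings.Fields(chunk)
}

func main() {
	tokens := whitespaceTokenize("apple  aardvarks\tzebra")
	fmt.Println(tokens) // [apple aardvarks zebra]
}
```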
### Usage Example

The example below demonstrates most of the features of the NLP library using the `text.Text` interface, which is the simplest and easiest way to use the library.
```go
package main

import (
	"fmt"

	// Import paths assume the module layout at go.rtnl.ai/nlp.
	"go.rtnl.ai/nlp/text"
	"go.rtnl.ai/nlp/vectorize"
)

func checkErr(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Create a Text object from a chunk of text.
	myText, err := text.New("apple aardvarks zebra banana aardvark")
	checkErr(err)

	// Tokenize and stem the text; each token has a corresponding stem.
	myTokens, err := myText.Tokens()
	checkErr(err)
	myStems, err := myText.Stems()
	checkErr(err)
	if len(myTokens) != len(myStems) {
		panic("this should never occur")
	}

	// Count the types (unique tokens) in the text.
	myCount, err := myText.TypeCount()
	checkErr(err)
	_ = myCount

	// A TokenList behaves like a slice of Tokens.
	stringTokens := myTokens.Strings()
	length := len(myTokens)
	myTokens = append(myTokens, myTokens[0])
	myTokens[0] = myTokens[1]
	_, _ = stringTokens, length

	// A Token can be converted into a string, runes, or bytes.
	firstToken := myTokens[0]
	stringToken := firstToken.String()
	runeToken := firstToken.Runes()
	byteToken := firstToken.Bytes()
	_, _, _ = stringToken, runeToken, byteToken

	// Create two Texts that share a vocabulary so they can be compared.
	myText, err = text.New(
		"cars have engines like motorcycles have engines",
		text.WithVocabulary([]string{"car", "engine", "brakes", "transmission"}),
	)
	checkErr(err)
	otherText, err := text.New(
		"engines are attached to transmissions",
		text.WithVocabulary([]string{"car", "engine", "brakes", "transmission"}),
	)
	checkErr(err)

	// Compute the cosine similarity between the two texts.
	similarity, err := myText.CosineSimilarity(otherText)
	checkErr(err)
	_ = similarity

	// Vectorize the text with one-hot and frequency (count) encodings.
	myOneHotVector, err := myText.VectorizeOneHot()
	checkErr(err)
	myFrequencyVector, err := myText.VectorizeFrequency()
	checkErr(err)
	_, _ = myOneHotVector, myFrequencyVector

	// Readability scores and counting functions.
	ease := myText.FleschKincaidReadingEase()
	grade := myText.FleschKincaidGradeLevel()
	_, _ = ease, grade
	count := myText.WordsCount()
	count = myText.SentencesCount()
	count = myText.SyllablesCount()
	_ = count

	// Create embeddings with the VoyageAI embedding API client.
	voyage, err := vectorize.NewVoyageAIEmbedder(
		vectorize.VoyageAIEmbedderWithAPIKey("your_voyageai_api_key_here"),
		vectorize.VoyageAIEmbedderWithEndpoint("https://api.voyageai.com/v1/embeddings"),
		vectorize.VoyageAIEmbedderWithModel("voyage-3.5-lite"),
	)
	checkErr(err)
	chunk1 := "A simple test."
	embedding, err := voyage.Vectorize(chunk1)
	checkErr(err)
	_ = embedding
	chunks := []string{chunk1, "A slightly more complex test, but only slightly.", "Number three!"}
	embeddings, err := voyage.VectorizeAll(chunks)
	checkErr(err)
	_ = embeddings
	fmt.Printf("used %d tokens\n", voyage.TotalTokensUsed())
}
```
See the [Go docs](https://go.rtnl.ai/nlp) for this library for more details.
## Features, metrics, and tools
- Tokenization
  - Regex tokenization with custom expression support
  - Whitespace-only word tokenization
  - Sonority Sequencing syllable tokenization
- Counting
  - Type counts (map of type -> instance count)
  - Counting functions for sentences, words, syllables, etc.
- Stemming
  - Porter2/Snowball stemming algorithm
- Similarity metrics
  - Cosine similarity
- Vectors & vectorization
  - One-hot encoding
  - Frequency (count) encoding
  - VoyageAI embedding vectorizer API client
- Readability Scoring
  - Flesch-Kincaid Reading Ease and grade level scores
Note: a `stats` package for descriptive statistics, with support for Go generics, is available in Rotational's Go `x` library at https://github.com/rotationalio/x/tree/main/stats.
You can add it to your Go project using `go get go.rtnl.ai/x/stats`.
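As a flavor of what a generics-based descriptive statistics helper looks like, here is a hypothetical sketch (written out from scratch; this is not the `x/stats` API):

```go
package main

import "fmt"

// Number covers the numeric types a generic mean can accept.
type Number interface {
	~int | ~int64 | ~float32 | ~float64
}

// Mean returns the arithmetic mean of values, or 0 for an empty slice.
func Mean[T Number](values []T) float64 {
	if len(values) == 0 {
		return 0
	}
	var sum float64
	for _, v := range values {
		sum += float64(v)
	}
	return sum / float64(len(values))
}

func main() {
	fmt.Println(Mean([]int{1, 2, 3, 4}))   // 2.5
	fmt.Println(Mean([]float64{0.5, 1.5})) // 1
}
```

The generic constraint lets one function serve int, int64, float32, and float64 slices alike.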
## Planned
- Additional Readability Scoring (Soon)
- Part-of-Speech Distributions (Future)
- Named Entities & Keyphrase Counts (Future)
- Custom Classifiers (Distant Future)
## Developing in nlp
Different feature categories are separated into different packages; for example, similarity metrics might live in `similarity/` and text classifiers in `classifiers/`.
If you want to add a new feature, please place it in a package that fits its category, or create a new package if none exists yet.
Tests should be located next to each feature; for example, `similarity_test.go` would hold the tests for `similarity.go` (the Go toolchain only treats files ending in `_test.go` as test files).
Test data should go into the `testdata/` folder within the package where the test is located.
Documentation should go into each function's and package's docstrings so that it is accessible to the user in their local IDE and via Go's documentation tools.
Documentation can also be included in separate Markdown files as needed, either in the `docs/` folder or in this README, such as the `text.Text` API examples.
Any documentation or research that isn't immediately relevant to the user in the code context should go into the `docs/` folder in the root.
## Sources and References
To ensure the algorithms in this package are accurate, we pulled information from several references, which are recorded in `docs/sources.md` and in the documentation and comments for the individual functions in this library.
## Examples
There are several examples showing how to use the NLP library in the `docs/examples` folder. Run them using `go run docs/examples/FILENAME.go`.
## Research Notes

Research on different topics will go into the `docs/research/` folder.

- Go NLP: notes on different NLP packages/libraries for Go
## License
See: LICENSE