Namespace: lunr

lunr

A convenience function for configuring and constructing a new lunr Index.

A lunr.Builder instance is created and its pipeline is set up with a trimmer, a stop word filter and a stemmer.

This builder object is yielded to the configuration function that is passed as a parameter, allowing the list of fields and other builder parameters to be customised.

All documents must be added within the passed config function.


Example

var idx = lunr(function () {
  this.field('title')
  this.field('body')
  this.ref('id')

  // `documents` is assumed to be an array of plain objects with `id`,
  // `title` and `body` properties matching the fields defined above
  documents.forEach(function (doc) {
    this.add(doc)
  }, this)
})

Classes

Builder
Index
MatchData
Pipeline
Query
Set
Token
TokenSet
Vector

Interfaces

PipelineFunction

Namespaces

utils

Methods

(static) generateStopWordFilter(stopWords) → {lunr.PipelineFunction}

lunr.generateStopWordFilter builds a stopWordFilter function from the provided list of stop words.

The built-in lunr.stopWordFilter is produced by this generator, which can also be used to generate custom stopWordFilters for applications or for non-English languages.

Parameters:
Name       Type   Description
stopWords  Array  The list of stop words to filter out of tokens

Returns: lunr.PipelineFunction
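
Example

A minimal sketch of using the generator to build a custom stop word filter and add it to an index pipeline; the word list and the 'customStopWords' label below are illustrative only.

var customStopWordFilter = lunr.generateStopWordFilter(['acme', 'inc'])

// register the generated function so the pipeline can be serialised
lunr.Pipeline.registerFunction(customStopWordFilter, 'customStopWords')

var idx = lunr(function () {
  this.ref('id')
  this.field('title')

  // run the custom filter after the built-in English stop word filter
  this.pipeline.after(lunr.stopWordFilter, customStopWordFilter)

  this.add({ id: '1', title: 'Acme Inc releases a new widget' })
})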

(static) stemmer(token) → {lunr.Token}

lunr.stemmer is an English language stemmer; it is a JavaScript implementation of the Porter stemmer taken from http://tartarus.org/~martin

Parameters:
Name   Type        Description
token  lunr.Token  The token to stem

Implements: lunr.PipelineFunction
Returns: lunr.Token
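
Example

A small sketch of applying the stemmer to a single token directly; in normal use it runs as part of the index pipeline.

var token = new lunr.Token('fishing')

lunr.stemmer(token).toString() // => 'fish'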

(static) stopWordFilter(token) → {lunr.Token}

lunr.stopWordFilter is an English language stop word list filter; any words contained in the list will not be passed through the filter.

This is intended to be used in the Pipeline. If the token does not pass the filter then undefined will be returned.

Implements: lunr.PipelineFunction
Returns: lunr.Token
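
Example

A brief sketch of the filter's behaviour when called directly: tokens on the stop word list are dropped by returning undefined, all other tokens pass through unchanged.

lunr.stopWordFilter(new lunr.Token('the'))    // => undefined
lunr.stopWordFilter(new lunr.Token('search')) // => the token, unchanged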

(static) tokenizer(obj, metadata) → {Array.<lunr.Token>}

A function for splitting a string into tokens ready to be inserted into the search index. It uses lunr.tokenizer.separator to split strings; change the value of this property to change how strings are split into tokens.

This tokenizer will convert its parameter to a string by calling toString and then split this string on the characters matched by lunr.tokenizer.separator. Arrays will have their elements converted to strings and wrapped in a lunr.Token.

Optional metadata can be passed to the tokenizer; this metadata will be cloned and added as metadata to every token that is created from the object being tokenized.

Parameters:
Name      Type                               Attributes  Description
obj       string | object | Array.<object>  <nullable>  The object to convert into tokens
metadata  object                             <nullable>  Optional metadata to associate with every token

Returns: Array.<lunr.Token>
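
Example

A short sketch of calling the tokenizer directly; the metadata object here is purely illustrative. Note that the input is lower cased before splitting.

var tokens = lunr.tokenizer('Green plants growing', { lang: 'en' })

tokens.map(function (t) { return t.toString() }) // => ['green', 'plants', 'growing']

tokens[0].metadata.lang // => 'en', cloned onto every token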

(static) trimmer(token) → {lunr.Token}

lunr.trimmer is a pipeline function for trimming non-word characters from the beginning and end of tokens before they enter the index.

This implementation may not work correctly for non-Latin characters and should either be removed or adapted for use with languages that use non-Latin characters.

Parameters:
Name   Type        Description
token  lunr.Token  The token to pass through the filter

Implements: lunr.PipelineFunction
Returns: lunr.Token
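
Example

A minimal sketch of calling the trimmer on a single token: leading and trailing non-word characters are stripped, while characters inside the token are left alone.

lunr.trimmer(new lunr.Token('"hello!"')).toString() // => 'hello'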