This uses the vector search engine [Qdrant](https://qdrant.tech) to search the posts in a vector space. It needs a way to generate embeddings and uses the [OpenAI API](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) for this. This API is implemented by several projects besides OpenAI itself, including the Python-based fastembed server found in `supplemental/search/fastembed-api`.
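For illustration, here is a minimal sketch of an embeddings request in the OpenAI format, assuming the fastembed server is reachable at a local address (the URL, port, and example text are placeholders, not the project's actual defaults):

```python
# Sketch of an OpenAI-style embeddings request against a local fastembed server.
# The host and port are assumptions; adjust them to wherever the server runs.
import requests

FASTEMBED_URL = "http://localhost:11345/v1/embeddings"  # hypothetical address

response = requests.post(
    FASTEMBED_URL,
    json={
        "model": "snowflake-arctic-embed-xs",   # default model, see below
        "input": "Example post text to embed",
    },
)
response.raise_for_status()

# The OpenAI embeddings format returns vectors under data[i].embedding.
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))  # vector dimension; must match the Qdrant index config
```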
The default indexing options work for the default model (`snowflake-arctic-embed-xs`). To optimize for a low memory footprint, adjust the index configuration as described in the [Qdrant docs](https://qdrant.tech/documentation/guides/optimize/). See also [this blog post](https://qdrant.tech/articles/memory-consumption/), which goes into more detail.
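As a rough illustration of the kind of low-memory settings the linked guide describes, the following `qdrant-client` sketch creates a collection with on-disk vectors, an on-disk HNSW index, and scalar quantization. The collection name, vector size, and exact parameters are assumptions, not the project's actual configuration:

```python
# Sketch of a low-memory Qdrant collection setup, loosely following the
# Qdrant optimization guide; names and values here are illustrative only.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="posts",  # hypothetical collection name
    vectors_config=models.VectorParams(
        size=384,                       # must match the embedding model's dimension
        distance=models.Distance.COSINE,
        on_disk=True,                   # keep original vectors on disk, not in RAM
    ),
    hnsw_config=models.HnswConfigDiff(on_disk=True),  # store the HNSW index on disk
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,  # keep compact 8-bit vectors in RAM
            always_ram=True,
        )
    ),
)
```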
Different embedding models will need different vector size settings. You can see a list of the models supported by the fastembed server [here](https://qdrant.github.io/fastembed/examples/Supported_Models), including their vector dimensions. These vector dimensions need to be set in the `qdrant_index_configuration`.
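If you switch models, one quick way to confirm the right dimension is to embed a test string and compare the length of the returned vector against what the index configuration expects. The URL and configured size in this sketch are placeholders:

```python
# Sanity-check sketch: does the embedding server's output dimension match
# the size set in qdrant_index_configuration? URL and size are assumptions.
import requests

FASTEMBED_URL = "http://localhost:11345/v1/embeddings"  # hypothetical address
CONFIGURED_VECTOR_SIZE = 384  # whatever size is set in qdrant_index_configuration

resp = requests.post(
    FASTEMBED_URL,
    json={"model": "snowflake-arctic-embed-xs", "input": "dimension probe"},
)
resp.raise_for_status()
actual_size = len(resp.json()["data"][0]["embedding"])

if actual_size != CONFIGURED_VECTOR_SIZE:
    raise SystemExit(
        f"Index is configured for {CONFIGURED_VECTOR_SIZE}-dimensional vectors, "
        f"but the model returns {actual_size}; update qdrant_index_configuration."
    )
```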