
Embeddings

Hooks for text embedding and semantic search.

Embedding Hooks

See it in action

Try Semantic Search and Product Search for working demos of these hooks.

useEmbed

Embed a single text value.

import { useEmbed } from '@localmode/react';
import { transformers } from '@localmode/transformers';

const model = transformers.embedding('Xenova/all-MiniLM-L6-v2');

function Demo() {
  const { data, isLoading, error, execute } = useEmbed({ model });

  return (
    <div>
      <button onClick={() => execute('Hello world')}>Embed</button>
      {data && <p>Vector dimensions: {data.embedding.length}</p>}
    </div>
  );
}

Returns { embedding: Float32Array, usage: { tokens }, response: { modelId, timestamp } }.
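Since data.embedding is a plain Float32Array, two results can be compared directly. A minimal cosine-similarity helper, purely as a sketch (this function is not part of the library):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// e.g. compare two useEmbed results:
// cosineSimilarity(resultA.embedding, resultB.embedding)
```

Identical vectors score 1 and unrelated vectors score near 0, which is useful for quick relevance checks before reaching for a full vector DB.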

useEmbedMany

Embed multiple text values in a single batch call.

import { useEmbedMany } from '@localmode/react';

const { data, isLoading, execute } = useEmbedMany({ model });

await execute(['Hello', 'World', 'Foo', 'Bar']);
// data.embeddings = [Float32Array, Float32Array, ...]
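A common step before calling execute is splitting long documents into chunks, since embedding models truncate long inputs (all-MiniLM-L6-v2, for example, handles roughly 256 tokens). A rough word-based chunker as a sketch; the maxWords and overlap values are illustrative, not library defaults:

```typescript
// Split text into overlapping word-based chunks suitable for batch embedding.
function chunkText(text: string, maxWords = 100, overlap = 20): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += maxWords - overlap) {
    chunks.push(words.slice(start, start + maxWords).join(" "));
    if (start + maxWords >= words.length) break;
  }
  return chunks;
}

// Then embed every chunk in one batch:
// await execute(chunkText(longDocument));
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side.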

useSemanticSearch

Combines embedding and vector DB search in one hook.

import { useSemanticSearch } from '@localmode/react';
import { transformers } from '@localmode/transformers';
import { createVectorDB } from '@localmode/core';

const model = transformers.embedding('Xenova/all-MiniLM-L6-v2');
const db = await createVectorDB({ name: 'notes', dimensions: 384 });

function SearchDemo() {
  const { results, isSearching, search } = useSemanticSearch({
    model,
    db,
    topK: 10,
  });

  return (
    <div>
      <input onChange={(e) => search(e.target.value)} />
      {results.map((r) => (
        <div key={r.id}>{r.content} (score: {r.score.toFixed(2)})</div>
      ))}
    </div>
  );
}
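The demo above calls search on every keystroke, which re-embeds the query each time. In practice you will likely want to debounce the input. A small framework-agnostic debounce sketch (not part of the library):

```typescript
// Delay invocation until `delayMs` ms have passed without another call.
function debounce<T extends (...args: any[]) => void>(fn: T, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage with the hook (memoize so the timer survives re-renders):
// const debouncedSearch = useMemo(() => debounce(search, 250), [search]);
// <input onChange={(e) => debouncedSearch(e.target.value)} />
```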

For full API reference on embed() and semanticSearch(), see the Core Embeddings guide. For recommended models, see the Transformers Embeddings guide.

useEmbedImage

Embed a single image into the same vector space as text using CLIP/SigLIP models for cross-modal search.

import { useEmbedImage } from '@localmode/react';
import { transformers } from '@localmode/transformers';

const model = transformers.clipEmbedding('Xenova/clip-vit-base-patch32');

const { data, execute } = useEmbedImage({ model });
await execute(imageDataUrl);
// data.embedding = Float32Array(512) — same space as text embeddings
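Because text and image embeddings share one space, a set of image embeddings can be ranked against a text query embedded with the same CLIP model. A sketch that assumes the vectors are L2-normalized (CLIP embeddings typically are), so dot product equals cosine similarity; the helper itself is not part of the library:

```typescript
// Rank image embeddings by dot-product similarity to a query embedding.
// Assumes all vectors are L2-normalized and of equal length.
function rankBySimilarity(
  query: Float32Array,
  images: Float32Array[],
): { index: number; score: number }[] {
  return images
    .map((img, index) => {
      let score = 0;
      for (let i = 0; i < query.length; i++) score += query[i] * img[i];
      return { index, score };
    })
    .sort((a, b) => b.score - a.score);
}

// e.g. rankBySimilarity(textQuery.embedding, imageEmbeddings)
// returns indices of the best-matching images first.
```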

useEmbedManyImages

Batch image embedding for indexing image collections.

import { useEmbedManyImages } from '@localmode/react';

const { data, execute } = useEmbedManyImages({ model });
await execute([imageUrl1, imageUrl2, imageUrl3]);
// data.embeddings = [Float32Array, Float32Array, Float32Array]

useReindex

Re-embed all documents in a VectorDB when switching embedding models. Supports progress tracking, cancellation, and resume.

import { useReindex } from '@localmode/react';

const { progress, isReindexing, execute, cancel } = useReindex({
  db: vectorDB,
  oldModel: transformers.embedding('Xenova/all-MiniLM-L6-v2'),
  newModel: transformers.embedding('Xenova/bge-small-en-v1.5'),
});

// progress = { completed: 500, total: 1000, percentage: 50 }
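The progress object can drive a simple status line. A sketch of a formatter (a hypothetical helper, not exported by the library) that derives a rough ETA from elapsed time and average throughput:

```typescript
interface ReindexProgress {
  completed: number;
  total: number;
  percentage: number;
}

// Format reindex progress with a rough ETA from average docs-per-ms so far.
function formatProgress(p: ReindexProgress, elapsedMs: number): string {
  if (p.completed === 0) return `0/${p.total} (0%)`;
  const msPerDoc = elapsedMs / p.completed;
  const etaSec = Math.round(((p.total - p.completed) * msPerDoc) / 1000);
  return `${p.completed}/${p.total} (${p.percentage}%), ~${etaSec}s left`;
}
```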

For multimodal embedding details, see Multimodal Embeddings. For drift detection, see Embedding Drift Detection.

Showcase Apps

| App | Description | Links |
| --- | --- | --- |
| Semantic Search | Full-text semantic search with useSemanticSearch | Demo · Source |
| Product Search | Product catalog search with useSemanticSearch | Demo · Source |
