
Overview

React hooks for local-first AI — useChat, useEmbed, useClassify, and more.

@localmode/react

React hooks for local-first AI. Embed, chat, classify, transcribe, and more — with built-in loading states, error handling, and cancellation.

Features

  • 39 hooks — One for each AI capability in @localmode/core plus agents, import/export, multimodal, utility, batch, and pipeline hooks
  • Streaming — useChat with real-time message updates and persistence
  • List accumulation — useOperationList for building result lists from repeated operations
  • Batch processing — useBatchOperation for concurrent, useSequentialBatch for sequential with progress
  • Cancellation — Every hook supports AbortSignal-based cancellation
  • SSR-safe — No-op during server rendering for Next.js compatibility
  • Provider-agnostic — Works with any @localmode provider (transformers, webllm, chrome-ai)
  • Zero deps — Only peer dependencies on react and @localmode/core
  • Pipeline — usePipeline for composing multi-step workflows
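The cancellation feature rests on the standard AbortSignal mechanism. As a rough sketch of the plumbing a hook-level cancel() builds on (all names here are illustrative, not @localmode API):

```typescript
// Sketch: racing a unit of work against an AbortSignal. A hook's cancel()
// typically just calls controller.abort(). Note that Promise.race does not
// stop the underlying work; the task must also honor the signal to truly halt.
async function cancellableTask<T>(
  work: (signal: AbortSignal) => Promise<T>,
  signal: AbortSignal,
): Promise<T> {
  if (signal.aborted) throw new Error('AbortError: cancelled before start');
  return Promise.race([
    work(signal),
    new Promise<never>((_, reject) => {
      signal.addEventListener(
        'abort',
        () => reject(new Error('AbortError: cancelled')),
        { once: true },
      );
    }),
  ]);
}
```
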

Installation

pnpm install @localmode/react @localmode/core
npm install @localmode/react @localmode/core
yarn add @localmode/react @localmode/core
bun add @localmode/react @localmode/core

Quick Start

Embed text

import { useEmbed } from '@localmode/react';
import { transformers } from '@localmode/transformers';

const model = transformers.embedding('Xenova/all-MiniLM-L6-v2');

function EmbedDemo() {
  const { data, isLoading, execute } = useEmbed({ model });

  return (
    <div>
      <button onClick={() => execute('Hello world')} disabled={isLoading}>
        {isLoading ? 'Embedding...' : 'Embed'}
      </button>
      {data && <p>Dimensions: {data.embedding.length}</p>}
    </div>
  );
}
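Once you have the embedding vector, a common next step is comparing it against other vectors with cosine similarity. A small plain-TypeScript helper (not part of the library; for search over many vectors, prefer useSemanticSearch) shows the idea:

```typescript
// Cosine similarity between two embedding vectors: dot product divided
// by the product of their magnitudes. Returns a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector length mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```
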

Chat with an LLM

import { useChat } from '@localmode/react';
import { webllm } from '@localmode/webllm';

const model = webllm.languageModel('Llama-3.2-1B-Instruct-q4f16_1-MLC');

function ChatDemo() {
  const { messages, isStreaming, send, cancel } = useChat({ model });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}><b>{m.role}:</b> {m.content}</div>
      ))}
      <button onClick={() => send('What is LocalMode?')}>Send</button>
      {isStreaming && <button onClick={cancel}>Cancel</button>}
    </div>
  );
}
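Conceptually, streaming chat appends each text delta from the model to the in-progress assistant message as it arrives. A plain-TypeScript sketch of that accumulation pattern (illustrative only; useChat's real internals may differ):

```typescript
// Accumulate streamed text deltas into a growing message, notifying the
// caller on every chunk. In a hook, onUpdate would set React state and
// trigger a re-render, which is what makes the message appear to "type".
async function accumulateStream(
  deltas: AsyncIterable<string>,
  onUpdate: (partial: string) => void,
): Promise<string> {
  let content = '';
  for await (const delta of deltas) {
    content += delta;
    onUpdate(content);
  }
  return content;
}
```
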

Classify text

import { useClassify } from '@localmode/react';
import { transformers } from '@localmode/transformers';

const model = transformers.classifier('Xenova/distilbert-base-uncased-finetuned-sst-2-english');

function ClassifyDemo() {
  const { data, isLoading, execute } = useClassify({ model });

  return (
    <div>
      <button onClick={() => execute('I love this product!')}>Classify</button>
      {data && <p>{data.label} ({(data.score * 100).toFixed(1)}%)</p>}
    </div>
  );
}

All Hooks

| Hook | Domain | Wraps |
| --- | --- | --- |
| useEmbed | Embeddings | embed() |
| useEmbedMany | Embeddings | embedMany() |
| useSemanticSearch | Embeddings | semanticSearch() |
| useSemanticChunk | RAG | semanticChunk() — embedding-aware chunking |
| useEmbedImage | Multimodal | embedImage() — CLIP cross-modal |
| useEmbedManyImages | Multimodal | embedManyImages() — batch image embedding |
| useChat | Generation | streamText() with message state |
| useGenerateText | Generation | generateText() |
| useGenerateObject | Generation | generateObject() — typed JSON output |
| useClassify | Classification | classify() |
| useClassifyZeroShot | Classification | classifyZeroShot() |
| useExtractEntities | NER | extractEntities() |
| useTranscribe | Audio | transcribe() |
| useSynthesizeSpeech | Audio | synthesizeSpeech() |
| useCaptionImage | Vision | captionImage() |
| useDetectObjects | Vision | detectObjects() |
| useClassifyImage | Vision | classifyImage() |
| useClassifyImageZeroShot | Vision | classifyImageZeroShot() |
| useSegmentImage | Vision | segmentImage() |
| useExtractImageFeatures | Vision | extractImageFeatures() |
| useImageToImage | Vision | imageToImage() / upscaleImage() |
| useTranslate | Text | translate() |
| useSummarize | Text | summarize() |
| useExtractText | OCR | extractText() |
| useFillMask | NLP | fillMask() |
| useAnswerQuestion | QA | answerQuestion() |
| useAskDocument | Document QA | askDocument() |
| useAgent | Agents | createAgent() + runAgent() — ReAct loop with tools |
| useImportExport | Import/Export | importFrom(), exportToCSV(), exportToJSONL() |
| useModelStatus | Utility | Model readiness |
| useCapabilities | Utility | detectCapabilities() |
| useNetworkStatus | Utility | Online/offline |
| useStorageQuota | Utility | Storage quota |
| useSemanticCache | Utility | Semantic cache lifecycle and stats |
| useReindex | Utility | Embedding drift re-embedding with progress |
| useVoiceRecorder | Utility | MediaRecorder lifecycle |
| useInferenceQueue | Utility | Priority-based task scheduling |
| useModelLoader | Utility | Model cache downloads, prefetch, evict |
| usePipeline | Pipeline | Multi-step workflows |
| useBatchOperation | Batch | Concurrent batch processing |
| useOperationList | Batch | Accumulate results into a list |
| useSequentialBatch | Batch | Sequential processing with progress |
| toAppError | Utility | Convert Error to AppError shape |
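As a rough sketch of the pattern useSequentialBatch wraps — running items one at a time while reporting progress — in plain TypeScript (assuming nothing about the hook's real internals):

```typescript
// Sequential batch processing with a progress callback: each item is
// awaited before the next starts, so progress advances one item at a time.
// Illustrative only; not @localmode API.
async function runSequential<T, R>(
  items: T[],
  task: (item: T) => Promise<R>,
  onProgress?: (completed: number, total: number) => void,
): Promise<R[]> {
  const results: R[] = [];
  for (const item of items) {
    results.push(await task(item));
    onProgress?.(results.length, items.length);
  }
  return results;
}
```
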

Next Steps
