Image Features
Extract feature vectors for image similarity search with SigLIP.
Extract dense feature vectors from images using SigLIP models. These vectors enable image similarity search, duplicate detection, and cross-modal (text-to-image) search.
For the full API reference (`extractImageFeatures()`, options, result types, and custom providers), see the Core Vision guide.
See it in action
Try Duplicate Finder and Smart Gallery for working demos.
Recommended Models
| Model | Size | Dimensions | Use Case |
|---|---|---|---|
| Xenova/siglip-base-patch16-224 | ~400MB | 768 | Image search, text-to-image matching |
| onnx-community/dinov2-base-ONNX | ~350MB | 768 | Self-supervised image features, similarity |
SigLIP models encode both images and text into the same vector space, enabling text-to-image search. Use `transformers.imageFeatures()` for images and `transformers.embedding()` with the same model for text queries.
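As a sketch of how cross-modal matching works once both vectors are extracted: the helper below mirrors what `cosineSimilarity()` from core computes, and the tiny vectors are illustrative stand-ins for real 768-dimensional SigLIP outputs (in an app they would come from `extractImageFeatures()` and `embed()`). Scores close to 1 mean the text query matches the image.

```ts
// Cosine similarity between two feature vectors of equal length.
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Stand-in vectors (illustrative values, not real model output).
const imageVector = new Float32Array([0.9, 0.1, 0.3]);
const textVector = new Float32Array([0.8, 0.2, 0.4]);

// High score: the text vector points in nearly the same direction.
console.log(cosineSimilarity(imageVector, textVector));
```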
Duplicate Detection Example
Based on the Duplicate Finder showcase app:
```ts
import { transformers } from '@localmode/transformers';
import { extractImageFeatures, cosineSimilarity } from '@localmode/core';

const model = transformers.imageFeatures('Xenova/siglip-base-patch16-224');
const SIMILARITY_THRESHOLD = 0.85;

async function findDuplicates(images: string[]) {
  // Controller lets callers cancel long-running extraction
  const controller = new AbortController();

  // Extract features for all images
  const allFeatures: Float32Array[] = [];
  for (const img of images) {
    const { features } = await extractImageFeatures({
      model,
      image: img,
      abortSignal: controller.signal,
    });
    allFeatures.push(features);
  }

  // Compare all pairs; record [indexA, indexB, similarity] for near-duplicates
  const duplicates: [number, number, number][] = [];
  for (let i = 0; i < allFeatures.length; i++) {
    for (let j = i + 1; j < allFeatures.length; j++) {
      const similarity = cosineSimilarity(allFeatures[i], allFeatures[j]);
      if (similarity > SIMILARITY_THRESHOLD) {
        duplicates.push([i, j, similarity]);
      }
    }
  }
  return duplicates;
}
```

Semantic Image Search with VectorDB
Based on the Smart Gallery and Product Search showcase apps:
```ts
import { createVectorDB, embed, extractImageFeatures } from '@localmode/core';
import { transformers } from '@localmode/transformers';

const imageModel = transformers.imageFeatures('Xenova/siglip-base-patch16-224');
const textModel = transformers.embedding('Xenova/siglip-base-patch16-224');

const db = await createVectorDB({
  name: 'gallery',
  dimensions: 768,
  storage: 'memory',
});

// Index images by their features
async function indexImage(id: string, imageDataUrl: string) {
  const { features } = await extractImageFeatures({
    model: imageModel,
    image: imageDataUrl,
  });
  await db.add({ id, vector: features, metadata: { fileName: id } });
}

// Search by text query (cross-modal)
async function searchByText(query: string, topK = 10) {
  const { embedding } = await embed({ model: textModel, value: query });
  return db.search(embedding, { k: topK });
}
```

Best Practices
Image Features Tips
- Use SigLIP for search — SigLIP vectors work across text and images in the same space
- Store in VectorDB — Use `createVectorDB` with `storage: 'memory'` for fast search
- Cosine similarity — Use `cosineSimilarity()` from core to compare feature vectors
- 768 dimensions — SigLIP-Base produces 768-dimensional vectors
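Conceptually, the VectorDB search above amounts to ranking stored vectors by cosine similarity against the query. A minimal brute-force sketch of that ranking follows; the local `cosineSimilarity` helper stands in for the one exported by core, and `searchTopK` is a hypothetical name for illustration, not a library API:

```ts
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface IndexedVector {
  id: string;
  vector: Float32Array;
}

// Hypothetical helper: score every stored vector against the query,
// then return the k highest-scoring entries.
function searchTopK(store: IndexedVector[], query: Float32Array, k: number) {
  return store
    .map((entry) => ({ id: entry.id, score: cosineSimilarity(entry.vector, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

A real vector database adds persistence and indexing on top of this, but for in-memory galleries of a few thousand images, a linear scan like this is already fast.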
Showcase Apps
| App | Description | Links |
|---|---|---|
| Duplicate Finder | Extract image features for near-duplicate detection | Demo · Source |
| Smart Gallery | Image feature extraction for gallery organization | Demo · Source |
| Product Search | Visual feature extraction for product matching | Demo · Source |