Don't believe us? Play around with some of these models in the interactive demo.
To install via NPM, run:

```bash
npm i @xenova/transformers
```

Alternatively, you can use it in a `<script>` tag from a CDN, for example:

```html
<!-- Using jsDelivr -->
<script src="https://cdn.jsdelivr.net/npm/@xenova/transformers/dist/transformers.min.js"></script>

<!-- or UNPKG -->
<script src="https://www.unpkg.com/@xenova/transformers/dist/transformers.min.js"></script>
```
The API mirrors the Python library, so existing code translates almost line for line.

**Python (original):**

```python
from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')

out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]
```

**JavaScript (ours):**

```javascript
import { pipeline } from "@xenova/transformers";

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');

let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]
```
You can use a different model by passing its name as the second argument to `pipeline`:

```javascript
// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
```
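For instance, a minimal sketch of running that multilingual model on a non-English review (the star-rating label and score shown are illustrative, not exact output):

```javascript
import { pipeline } from "@xenova/transformers";

// Sketch: the nlptown model rates text on a 1-5 star scale.
let multilingualPipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
let review = await multilingualPipe('¡Me encanta transformers!');
// Illustrative output, e.g. [{ label: '5 stars', score: 0.72 }]
```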
You can also configure where models and WASM binaries are loaded from via the `env` object:

```javascript
import { env } from "@xenova/transformers";

// Use a different host for models.
// - `remoteURL` defaults to use the HuggingFace Hub
// - `localURL` defaults to '/models/onnx/quantized/'
env.remoteURL = 'https://www.example.com/';
env.localURL = '/path/to/models/';

// Set whether to use remote or local models. Defaults to true.
// - If true, use the path specified by `env.remoteURL`.
// - If false, use the path specified by `env.localURL`.
env.remoteModels = false;

// Set parent path of .wasm files. Defaults to use a CDN.
env.onnx.wasm.wasmPaths = '/path/to/files/';
```
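Putting the pieces together, here's a minimal sketch of loading a model from your own server, assuming you have placed the converted model files under `/path/to/models/` (the directory layout is whatever the conversion script produces):

```javascript
import { pipeline, env } from "@xenova/transformers";

// Resolve models against our own path instead of the HuggingFace Hub.
env.localURL = '/path/to/models/';
env.remoteModels = false;

// This now loads the model files from env.localURL.
let classifier = await pipeline('sentiment-analysis', 'distilbert-base-uncased-finetuned-sst-2-english');
let result = await classifier('Locally hosted models work too!');
```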
We currently support the following tasks and models, which can be used with the `pipeline` function (see the usage sketch after the list):
- **Text classification**, with models such as `distilbert-base-uncased-finetuned-sst-2-english`, `nlptown/bert-base-multilingual-uncased-sentiment`, `distilgpt2`. For more information, check out the Text Classification docs.
- **Question answering**, with models such as `distilbert-base-cased-distilled-squad`, `distilbert-base-uncased-distilled-squad`. For more information, check out the Question Answering docs.
- **Masked language modelling**, with models such as `xlm-roberta-base`, `albert-large-v2`, `albert-base-v2`, `distilroberta-base`, `roberta-base`, `bert-base-cased`, `bert-base-uncased`, `bert-base-multilingual-uncased`, `bert-base-multilingual-cased`, `distilbert-base-cased`, `distilbert-base-uncased`. For more information, check out the Language Modelling docs.
- **Summarization**, with models such as `t5-small`, `t5-base`, `t5-v1_1-small`, `t5-v1_1-base`, `facebook/bart-large-cnn`, `sshleifer/distilbart-cnn-6-6`, `sshleifer/distilbart-cnn-12-6`. For more information, check out the Summarization docs.
- **Translation**, with models such as `t5-small`, `t5-base`, `t5-v1_1-small`, `t5-v1_1-base`. For more information, check out the Translation docs.
- **Text-to-text generation**, with models such as `google/flan-t5-small`, `google/flan-t5-base`, `t5-small`, `t5-base`, `google/t5-v1_1-small`, `google/t5-v1_1-base`, `google/mt5-small`, `facebook/bart-large-cnn`, `sshleifer/distilbart-cnn-6-6`, `sshleifer/distilbart-cnn-12-6`. For more information, check out the Text Generation docs.
- **Text generation**, with models such as `gpt2`, `distilgpt2`, `EleutherAI/gpt-neo-125M`, `Salesforce/codegen-350M-mono`, `Salesforce/codegen-350M-multi`, `Salesforce/codegen-350M-nl`. For more information, check out the Text Generation docs.
- **Automatic speech recognition**, with models such as `openai/whisper-tiny.en`, `openai/whisper-tiny`, `openai/whisper-small.en`, `openai/whisper-small`, `openai/whisper-base.en`, `openai/whisper-base`. For more information, check out the Automatic Speech Recognition docs.
- **Image-to-text**, with models such as `nlpconnect/vit-gpt2-image-captioning`. For more information, check out the Image-to-Text docs.
- **Image classification**, with models such as `google/vit-base-patch16-224`. For more information, check out the Image Classification docs.
- **Zero-shot image classification**, with models such as `openai/clip-vit-base-patch16`, `openai/clip-vit-base-patch32`. For more information, check out the Zero-Shot Image Classification docs.
- **Object detection**, with models such as `facebook/detr-resnet-50`, `facebook/detr-resnet-101`. For more information, check out the Object Detection docs.
- **Embeddings**, with models such as `sentence-transformers/all-MiniLM-L6-v2`, `sentence-transformers/all-MiniLM-L12-v2`, `sentence-transformers/all-distilroberta-v1`, `sentence-transformers/paraphrase-albert-base-v2`, `sentence-transformers/paraphrase-albert-small-v2`. For more information, check out the Embeddings docs.
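As a concrete example of one of these tasks, here's a sketch of question answering with `pipeline` (assuming the call takes the question followed by the context, mirroring the Python pipeline; the output shown is illustrative):

```javascript
import { pipeline } from "@xenova/transformers";

// Allocate a question-answering pipeline with one of the supported models.
let answerer = await pipeline('question-answering', 'distilbert-base-cased-distilled-squad');

let question = 'Who was Jim Henson?';
let context = 'Jim Henson was a nice puppet.';

let out = await answerer(question, context);
// Illustrative output, e.g. { answer: 'a nice puppet', score: 0.9 }
```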
The following model types are supported:
- **BERT** for masked language modelling (`AutoModelForMaskedLM`), question answering (`AutoModelForQuestionAnswering`), and sequence classification (`AutoModelForSequenceClassification`). For more information, check out the BERT docs.
- **ALBERT** for masked language modelling (`AutoModelForMaskedLM`). For more information, check out the ALBERT docs.
- **DistilBERT** for masked language modelling (`AutoModelForMaskedLM`), question answering (`AutoModelForQuestionAnswering`), and sequence classification (`AutoModelForSequenceClassification`). For more information, check out the DistilBERT docs.
- **T5** (`AutoModelForSeq2SeqLM`). For more information, check out the T5 docs.
- **T5v1.1** (`AutoModelForSeq2SeqLM`). For more information, check out the T5v1.1 docs.
- **FLAN-T5** (`AutoModelForSeq2SeqLM`). For more information, check out the FLAN-T5 docs.
- **mT5** (`AutoModelForSeq2SeqLM`). For more information, check out the mT5 docs.
- **GPT2/DistilGPT2** (`AutoModelForCausalLM`). For more information, check out the GPT2 docs or DistilGPT2 docs.
- **GPT Neo** (`AutoModelForCausalLM`). For more information, check out the GPT Neo docs.
- **BART** (`AutoModelForSeq2SeqLM`). For more information, check out the BART docs.
- **CodeGen** (`AutoModelForCausalLM`). For more information, check out the CodeGen docs.
- **Whisper** (`AutoModelForSeq2SeqLM`). For more information, check out the Whisper docs.
- **CLIP** (`AutoModel`). For more information, check out the CLIP docs.
- **Vision Transformer** (`AutoModelForImageClassification`). For more information, check out the Vision Transformer docs.
- **Vision Encoder Decoder models** (`AutoModelForVision2Seq`). For more information, check out the Vision Encoder Decoder Models docs.
- **DETR** (`AutoModelForObjectDetection`). For more information, check out the DETR docs.
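If you need more control than `pipeline` gives you, here's a rough sketch of using the tokenizer and model classes directly. This assumes the Auto classes expose a Python-style `from_pretrained` and that the model can be called on the tokenizer output; check the docs for the exact API.

```javascript
import { AutoTokenizer, AutoModelForSequenceClassification } from "@xenova/transformers";

// Load the tokenizer and model separately (assumed Python-style API).
let tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');

// Tokenize the input and run the model on it.
let inputs = await tokenizer('I love transformers!');
let output = await model(inputs);
// `output.logits` holds the raw class scores.
```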
Don't see your model type or task supported? Raise an issue on GitHub, and if there's enough demand, we will add it!

We use ONNX Runtime to run the models in the browser, so you must first convert your PyTorch model to ONNX (which can be done using our conversion script).