
LiquidAI: LFM2.5-1.2B-Instruct (free)

by Liquid, via OpenRouter

About

LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint, with efficient edge inference and broad runtime support.

Specifications

Context Length: 33K tokens
Max Output Tokens: -
Modality: text->text
Input: text
Output: text
Supported Parameters: frequency_penalty, max_tokens, min_p, presence_penalty, repetition_penalty, seed, stop, temperature, top_k, top_p
Content Moderation: No
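The supported parameters above correspond to fields in an OpenAI-style chat-completions request body. As a sketch, the snippet below assembles such a payload and checks it against the supported list; the model slug and the specific parameter values are illustrative assumptions, not taken from this page:

```python
import json

# Sampling/decoding parameters this model card lists as supported.
SUPPORTED = {
    "frequency_penalty", "max_tokens", "min_p", "presence_penalty",
    "repetition_penalty", "seed", "stop", "temperature", "top_k", "top_p",
}

payload = {
    "model": "liquid/lfm2.5-1.2b-instruct:free",  # assumed slug, verify on OpenRouter
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,   # illustrative values
    "top_p": 0.9,
    "top_k": 40,
    "max_tokens": 256,
    "seed": 42,
}

# Every tuning knob used should appear in the model's supported list.
unsupported = {k for k in payload if k not in SUPPORTED | {"model", "messages"}}
assert not unsupported, f"unsupported parameters: {unsupported}"

print(json.dumps(payload, indent=2))
```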

Error Rate

Based on feedback reported by Free LLM Router users. Error rate reflects the percentage of requests that encountered issues such as rate limiting, unavailability, or errors.


Availability

Available 41 of 41 tracked days. Daily snapshots show whether this model was accessible as a free model on OpenRouter.

[Availability calendar: daily snapshots from Feb 18 to Mar 19; LiquidAI: LFM2.5-1.2B-Instruct (free) shown available on 30 days in this window]


Frequently Asked Questions

Is LiquidAI: LFM2.5-1.2B-Instruct (free) free to use?

Yes, LiquidAI: LFM2.5-1.2B-Instruct (free) is completely free to use through OpenRouter. You can access it via the Free LLM Router API at no cost.
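As a minimal sketch of such a request, the snippet below builds (but does not send) a call to OpenRouter's OpenAI-compatible chat-completions endpoint; the model slug and the `OPENROUTER_API_KEY` environment variable are assumptions to verify against the model page:

```python
import json
import os
import urllib.request

# Assumed model slug; check the OpenRouter model page for the exact identifier.
MODEL = "liquid/lfm2.5-1.2b-instruct:free"

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a chat-completions request; sending is left to the caller."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Say hello in one sentence.")
# To actually send it: urllib.request.urlopen(req)
print(req.full_url)
```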

What is the context window for LiquidAI: LFM2.5-1.2B-Instruct (free)?

LiquidAI: LFM2.5-1.2B-Instruct (free) supports a context window of 32,768 tokens (displayed as 33K).