Qwen2.5-Coder-7B-Instruct is a 7B-parameter, instruction-tuned language model optimized for code-related tasks such as code generation, code reasoning, and bug fixing. Built on the Qwen2.5 architecture, it incorporates RoPE, SwiGLU, RMSNorm, and GQA attention, and supports context lengths of up to 128K tokens via YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding data, giving it robust performance across programming languages and in agentic coding workflows. The model is part of the Qwen2.5-Coder family and works with OpenAI-compatible serving stacks such as vLLM for efficient deployment (see the serving sketch below). It is released under the Apache 2.0 license.
Context Window: 33K tokens
Pricing (Input / Output): $0.00003 / $0.00009 per 1M tokens
Architecture: Transformer
Modality: text->text
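
Because the model ships with open weights, it can also be self-hosted behind the same OpenAI-compatible API. A minimal sketch, assuming a local vLLM installation and enough GPU memory for a 7B model (the Hugging Face model ID and flags below are assumptions from the upstream repo, not part of this listing):

# Serve the model locally with an OpenAI-compatible API on port 8000.
# Assumes `pip install vllm`; --max-model-len caps the context at the
# model's native 32K window.
vllm serve Qwen/Qwen2.5-Coder-7B-Instruct --max-model-len 32768

Once running, the curl request below works unchanged against http://localhost:8000/v1/chat/completions.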
curl -X POST https://api.neuralhub.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEURALHUB_API_KEY" \
  -d '{
    "model": "qwen/qwen2.5-coder-7b-instruct",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "What is the answer to life, the universe, and everything?" }
    ],
    "temperature": 0.7,
    "max_tokens": 500,
    "top_p": 0.9
  }'

The API returns an OpenAI-compatible response. Example:
{
  "id": "chatcmpl-<uuid>",
  "object": "chat.completion",
  "created": 1765590287,
  "model": "qwen/qwen2.5-coder-7b-instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The answer to life, the universe, and everything is famously 42..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 26,
    "completion_tokens": 169,
    "total_tokens": 195
  }
}
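
Since the endpoint follows the OpenAI chat-completions schema, it presumably also accepts the standard "stream" parameter for token-by-token output. A sketch under that assumption (the "stream" flag is inferred from the OpenAI schema, not confirmed by this listing):

# Request a streamed response delivered as server-sent events.
# -N disables curl's output buffering so chunks print as they arrive.
curl -N -X POST https://api.neuralhub.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEURALHUB_API_KEY" \
  -d '{
    "model": "qwen/qwen2.5-coder-7b-instruct",
    "messages": [{ "role": "user", "content": "Write a Python function that reverses a string." }],
    "stream": true
  }'

For non-streamed calls, the assistant text can be extracted from the JSON above by piping through, for example, jq -r '.choices[0].message.content'.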