A text-based Mixture-of-Experts (MoE) model with 21B total parameters, of which 3B are activated per token. Its multimodal understanding and generation capabilities stem from a heterogeneous MoE structure with modality-isolated routing, combined with specialized routing and balancing losses for effective task handling. The model supports a 131K-token context length and achieves efficient inference through multi-expert parallel collaboration and quantization. Post-training with SFT, DPO, and UPO further optimizes performance across diverse applications.
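
To make the routing idea concrete, here is a minimal, illustrative sketch of modality-isolated top-k expert routing in Python. This is not ERNIE's implementation; the expert counts, the top-k value, and the modality_ids convention are assumptions chosen only to show how tokens of each modality can be restricted to their own expert pool.

import numpy as np

def modality_isolated_routing(hidden, modality_ids, router_w, expert_groups, top_k=2):
    """Toy top-k MoE routing where each modality only sees its own experts.

    hidden        : (num_tokens, d_model) token representations
    modality_ids  : (num_tokens,) 0 = text token, 1 = vision token (assumed convention)
    router_w      : (d_model, num_experts) router projection
    expert_groups : dict mapping modality id -> list of expert indices it may use
    """
    logits = hidden @ router_w                      # (num_tokens, num_experts)
    num_tokens, num_experts = logits.shape
    assignments = []
    for t in range(num_tokens):
        allowed = expert_groups[int(modality_ids[t])]
        # Mask out experts that belong to other modalities.
        masked = np.full(num_experts, -np.inf)
        masked[allowed] = logits[t, allowed]
        # Pick the top-k allowed experts and normalize their gate weights (softmax).
        top = np.argsort(masked)[-top_k:][::-1]
        gates = np.exp(masked[top] - masked[top].max())
        gates /= gates.sum()
        assignments.append(list(zip(top.tolist(), gates.tolist())))
    return assignments

# Tiny demo: 4 tokens, 8-dim hidden states, 6 experts (0-3 text, 4-5 vision).
rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 8))
modality_ids = np.array([0, 0, 1, 0])
router_w = rng.normal(size=(8, 6))
expert_groups = {0: [0, 1, 2, 3], 1: [4, 5]}
print(modality_isolated_routing(hidden, modality_ids, router_w, expert_groups))
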
curl -X POST https://api.neuralhub.xyz/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "baidu/ernie-4.5-21b-a3b",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "" }
    ],
    "temperature": 0.7,
    "max_tokens": 500,
    "top_p": 0.9
  }'

{
"id": "chatcmpl-<uuid>",
"object": "chat.completion",
"created": 1768454224,
"model": "baidu/ernie-4.5-21b-a3b",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The answer to life, the universe, and everything is famously 42..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 26,
"completion_tokens": 169,
"total_tokens": 195
}
}
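
For reference, a minimal Python equivalent of the curl request above, using the requests library against the same OpenAI-style chat completions endpoint. The NEURALHUB_API_KEY environment variable name is an assumption, and error handling is kept to a bare minimum.

import os
import requests

# Same request as the curl example above, sent from Python.
API_KEY = os.environ["NEURALHUB_API_KEY"]  # assumed environment variable name

response = requests.post(
    "https://api.neuralhub.xyz/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    json={
        "model": "baidu/ernie-4.5-21b-a3b",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the answer to life, the universe, and everything?"},
        ],
        "temperature": 0.7,
        "max_tokens": 500,
        "top_p": 0.9,
    },
    timeout=60,
)
response.raise_for_status()
data = response.json()
# Print the assistant's reply and the token usage from the JSON shown above.
print(data["choices"][0]["message"]["content"])
print(data["usage"])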