Overview

MODEL

qwen-alibaba/qwen-3

Qwen 3 is the latest large reasoning model developed by Alibaba. It surpasses multiple baselines on coding and math, and it is reported to exceed SOTA model performance on multiple benchmarks. It is said to be released by May 2025.

gemini-google/nano-banana

## Google Gemini Flash Nano Banana

Today, we're excited to introduce Gemini 2.5 Flash Image (aka nano-banana), our state-of-the-art image generation and editing model. This update enables you to blend multiple images into a single image, maintain character consistency for rich storytelling, make targeted transformations using natural language, and use Gemini's world knowledge to generate and edit images.
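A minimal sketch of calling an image model like this through Google's google-genai Python SDK. The model id string below is an assumption based on the preview naming, and the prompt and output filename are placeholders:

```python
# Hedged sketch: generate an image with the google-genai SDK.
# Assumes GEMINI_API_KEY is set; the model id below is an assumption.
from google import genai

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed id for nano-banana
    contents=["A photorealistic banana wearing a tiny lab coat"],
)

# Image bytes come back as inline_data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("banana.png", "wb") as f:
            f.write(part.inline_data.data)
```

Editing works the same way: pass an existing image plus a natural-language instruction in `contents` and the model returns the transformed image.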

qwen-alibaba/qwen3-235b-a22b

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support…

qwen-alibaba/qwen3-coder-480b-a35b-instruct

## Highlights

Today, we're announcing **Qwen3-Coder**, our most agentic code model to date. **Qwen3-Coder** is available in multiple sizes, but we're excited to introduce its most powerful variant first: **Qwen3-Coder-480B-A35B-Instruct**, featuring the following key enhancements:

- **Significant Performance** among open models on **Agentic Coding**…
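Models of this size are typically consumed through an OpenAI-compatible endpoint rather than run locally. A hedged sketch with the openai Python client follows; the base URL, API key, and model string are placeholders for whatever deployment you use, not confirmed values:

```python
# Hedged sketch: call a Qwen3-Coder deployment through an OpenAI-compatible
# endpoint. Base URL and model name are placeholders for your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-endpoint.example.com/v1",  # placeholder
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qwen3-coder-480b-a35b-instruct",  # placeholder model id
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that parses a CSV header."},
    ],
)
print(response.choices[0].message.content)
```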

meituan-longcat/longcat-flash-chat

We introduce LongCat-Flash, a powerful and efficient language model with 560 billion total parameters, featuring an innovative Mixture-of-Experts (MoE) architecture…

claude-anthropic/claude-opus-4

Claude Opus 4 is a hybrid reasoning model that pushes the frontier for coding and AI agents, featuring a 200K context window. Claude Opus 4 is our most intelligent model to date, pushing the frontier in coding, agentic search, and creative writing. We've also made it possible to run Claude Code in the background, enabling developers to assign long-running coding tasks for Opus to handle independently.

claude-anthropic/claude-4

Anthropic launched the next generation of Claude models today, Opus 4 and Sonnet 4, designed for coding, advanced reasoning, and the support of the next generation of capable, autonomous AI agents. Claude 4 hybrid reasoning models let customers choose between near-instant responses and deeper reasoning. Claude 4 models offer improvements in coding, with Opus 4 as the "world's best coding model"…
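As a rough illustration of that hybrid choice, here is a hedged sketch using Anthropic's Python SDK, once without and once with extended thinking; the model id and token budget below are assumptions for illustration, not values from this announcement:

```python
# Hedged sketch: near-instant vs. deeper-reasoning requests with the
# Anthropic Python SDK. The model id is an assumption.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Near-instant response: no thinking budget.
fast = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model id
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize quicksort in two lines."}],
)

# Deeper reasoning: allocate an explicit thinking budget.
deep = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},  # illustrative budget
    messages=[{"role": "user", "content": "Prove quicksort is O(n log n) on average."}],
)
print(fast.content[0].text)
```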

qwen-alibaba/qwen3-0-6b

Qwen3-0.6B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
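Qwen3 models expose a switchable thinking mode through the chat template. A minimal sketch following the usage pattern in Qwen's model cards, where `enable_thinking` is documented; treat the prompt and generation settings as illustrative:

```python
# Hedged sketch: run Qwen3-0.6B with thinking mode toggled via the chat
# template, following the pattern in Qwen's model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

messages = [{"role": "user", "content": "How many primes are below 20?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False for direct, non-reasoning replies
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```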

grok4-xai/grok-4

Grok 4 is the latest model released by xAI. It surpasses prior models on multiple benchmarks and is trained on a corpus drawn from X/Twitter.

qwen-alibaba/qwen3-32b

Qwen3-32B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 32.8B
- Number of Parameters (Non-Embedding): 31.2B
- Number of Layers: 64
- Number of Attention Heads (GQA): 64 for Q and 8 for KV
- Context Length: 32,768 natively and 131,072 tokens with YaRN
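Extending the native 32,768-token window to 131,072 with YaRN goes through the rope_scaling config. The sketch below mirrors the snippet in Qwen's model cards (factor 4.0 = 131,072 / 32,768), though the exact flag varies by serving framework and loading it this way is an assumption about how your stack forwards config overrides:

```python
# Hedged sketch: enable YaRN rope scaling for long-context inference,
# mirroring the config documented in Qwen's model cards.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B",
    torch_dtype="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,  # 131,072 / 32,768
        "original_max_position_embeddings": 32768,
    },
)
```

Qwen's cards note that static YaRN scaling applies even to short inputs, so it is usually enabled only when long contexts are actually needed.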

DeepSeek R2 is the latest large reasoning model developed by DeepSeek. It surpasses multiple baselines on coding and math benchmarks and is claimed to cut both training and inference costs by 95%. It is said to be released by May 2025.

DeepSeek V3 0324 is the latest-generation LLM developed by DeepSeek. It is reported to surpass multiple baselines.

moonshotai/kimi-k2-instruct

Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks…
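The gap between 1T total and 32B activated parameters comes from sparse top-k expert routing: each token passes through only a few experts, so most weights sit idle on any given forward pass. A toy, framework-agnostic sketch of that routing rule; every size here is illustrative, not Kimi K2's real architecture:

```python
# Toy sketch of top-k MoE routing: why "activated" params << total params.
# Shapes are illustrative only, not Kimi K2's actual configuration.
import torch

n_experts, top_k, d_model = 64, 2, 512
experts = torch.nn.ModuleList(
    torch.nn.Linear(d_model, d_model) for _ in range(n_experts)
)
router = torch.nn.Linear(d_model, n_experts)

def moe_layer(x):  # x: (tokens, d_model)
    scores = router(x)                           # (tokens, n_experts)
    weights, idx = scores.topk(top_k, dim=-1)    # pick top-k experts per token
    weights = weights.softmax(dim=-1)
    out = torch.zeros_like(x)
    for slot in range(top_k):                    # only k of n_experts run
        for e in range(n_experts):
            mask = idx[:, slot] == e
            if mask.any():
                out[mask] += weights[mask, slot, None] * experts[e](x[mask])
    return out

y = moe_layer(torch.randn(8, d_model))
print(y.shape)  # (8, 512); per token, 2 of 64 experts were "activated"
```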

deepseek/deepseek-prover-v2-671b

DeepSeek-Prover-V2 is an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem-proving pipeline powered by DeepSeek-V3. The cold-start training procedure begins by prompting DeepSeek-V3 to decompose complex problems into a series of subgoals. The proofs of resolved subgoals are synthesized into a chain-of-thought process…
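To make the subgoal decomposition concrete, here is a small Lean 4 example (using Mathlib) in the style such a pipeline targets: a statement broken into `have` subgoals whose proofs are then combined. The particular lemma is our own illustration, not one taken from the paper:

```lean
-- Illustrative Lean 4 / Mathlib proof decomposed into subgoals with `have`,
-- mirroring the decompose-then-combine style described above.
import Mathlib

example (a b : ℕ) (h : a ≤ b) : a * 2 ≤ b * 2 + 1 := by
  -- Subgoal 1: scale the hypothesis.
  have h1 : a * 2 ≤ b * 2 := by omega
  -- Subgoal 2: the right-hand side only grows.
  have h2 : b * 2 ≤ b * 2 + 1 := by omega
  -- Combine the resolved subgoals.
  exact le_trans h1 h2
```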

claude-anthropic/claude-sonnet-4

A hybrid reasoning model with superior intelligence for high-volume use cases and a 200K context window. Claude Sonnet 4 improves on Claude Sonnet 3.7 across a variety of areas, especially coding. It offers frontier performance that's practical for most AI use cases, including user-facing AI assistants and high-volume tasks. Claude Sonnet 3.7 was the first hybrid reasoning model and our most intelligent model to date…

chatgpt-openai/sora-2-model

## OpenAI just released their flagship video and audio model Sora 2

You can download the app from the App Store: https://apps.apple.com/us/app/sora-by-openai/id6744034028

## Introduction

The original Sora model from February 2024 was in many ways the GPT-1 moment for video: the first time video generation started to seem like it was working, and simple behaviors like object permanence emerged…

claude-anthropic/claude-sonnet-4-5

## Main Features

It's the strongest model for building complex agents. It's the best model at using computers. And it shows substantial gains in reasoning and math. Code is everywhere. It runs every application, spreadsheet, and software tool you use. Being able to use those tools and reason through hard problems is how modern work gets done. Claude Sonnet 4.5 makes this possible…

Meet Qwen3-VL, the most powerful vision-language model in the Qwen series to date. This generation delivers comprehensive upgrades across the board: superior text understanding & generation, deeper visual perception & reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities. Available in Dense and MoE architectures that scale from edge to cloud…
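A hedged sketch of image-plus-text inference with a model in this family via Hugging Face transformers. The checkpoint name, the AutoModelForImageTextToText route, and the image URL are all assumptions here; exact processor classes may differ per release:

```python
# Hedged sketch: vision-language inference via transformers' generic
# image-text-to-text auto classes. The checkpoint name is an assumption.
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Qwen/Qwen3-VL-8B-Instruct"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype="auto")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder
        {"type": "text", "text": "What trend does this chart show?"},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```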

qwen-alibaba/qwen3-14b

Qwen3-14B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 14.8B
- Number of Parameters (Non-Embedding): 13.2B
- Number of Layers: 40
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: 32,768 natively and 131,072 tokens with YaRN

qwen-alibaba/qwen3-8b

Qwen3-8B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and 131,072 tokens with YaRN

REASON

qwen-alibaba/qwen-3

claude-anthropic/claude-opus-4

qwen-alibaba/qwen3-32b

deepseek/deepseek-prover-v2-671b

claude-anthropic/claude-4

qwen-alibaba/qwen3-0-6b

claude-anthropic/claude-sonnet-4

grok4-xai/grok-4

qwen-alibaba/qwen3-14b

qwen-alibaba/qwen3-1-7b

Qwen3-1.7B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 1.7B
- Number of Parameters (Non-Embedding): 1.4B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768

qwen-alibaba/qwen3-4b

Qwen3-4B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and 131,072 tokens with YaRN

qwen-alibaba/qwen3-8b
