Available AI Models


Overview

The platform provides access to cutting-edge AI models through a simple, efficient interface. Each model is carefully selected for high performance.

Current Model Lineup

| ID | Name | Model | Type | Parameters |
| --- | --- | --- | --- | --- |
| 1 | Qwen2.5 | Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4 | Language Model | 32 billion |
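For illustration, the lineup above can be mirrored as a small lookup table in agent code. The registry dict and helper below are hypothetical conveniences, not part of the Agents API; consult the Agents API Reference for the actual interface.

```python
# Hypothetical model registry mirroring the lineup table above.
# Neither MODELS nor model_identifier() is part of the Agents API;
# they only show how the table's ID maps to the full model string.
MODELS = {
    1: {
        "name": "Qwen2.5",
        "model": "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4",
        "type": "Language Model",
        "parameters": 32_000_000_000,
    },
}

def model_identifier(model_id: int) -> str:
    """Return the full model string for a given table ID."""
    return MODELS[model_id]["model"]

print(model_identifier(1))  # Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4
```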

Detailed Model Insights

Qwen2.5-32B-Instruct

The Qwen2.5 model represents the latest advancement in large language model technology. Developed by Alibaba's Qwen team, this model brings several key innovations:

  • High-Performance Instruction Following: Specifically designed to understand and execute complex instructions with remarkable accuracy.

  • Efficient Quantization: Using GPTQ (a post-training quantization method for generative pre-trained transformers) at INT4 precision, the model retains most of its accuracy while substantially reducing memory and compute requirements.

  • Broad Capability Range: Excels in tasks such as:

    • Natural language understanding

    • Text generation

    • Contextual reasoning

    • Multilingual communication

Technical Specifications:

  • Model Size: 32 billion parameters

  • Quantization: INT4

  • Optimization: GPTQ

  • Primary Use: Instruction-based AI interactions
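To see why INT4 quantization matters for a model of this size, here is a back-of-the-envelope calculation of weight memory. This is a sketch of the raw weight storage only; real deployments also need memory for the KV cache, activations, and quantization metadata such as scales and zero-points.

```python
# Rough weight-memory footprint for a 32-billion-parameter model.
# These figures cover weights only and are therefore a lower bound.
params = 32e9

fp16_gb = params * 2 / 1e9    # FP16: 2 bytes per weight -> 64 GB
int4_gb = params * 0.5 / 1e9  # INT4: 4 bits = 0.5 bytes -> 16 GB

print(f"FP16: {fp16_gb:.0f} GB, INT4: {int4_gb:.0f} GB")
```

The roughly 4x reduction in weight storage is what makes serving a 32B model feasible on far more modest hardware.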

More models coming soon...


Last Updated: February 2025
