Aptos Intelligence Deep Dive

How Aptos Minted 1M NFTs in 90 Seconds — Aggregators Explained

By aptos-labs · Apr 8, 2026


In November 2023, during Aptos Previewnet, Aptos Labs demonstrated minting 1 million NFTs in approximately 90 seconds, sustaining over 10,000 NFTs per second, and then 5 million in roughly 8 minutes. This was made possible by a fundamental VM-level primitive called Aggregators, one of the most technically interesting innovations in Aptos's architecture.

Why Sequential NFT Minting Collapses Throughput

To understand why this is hard, you need to understand Block-STM's parallel execution model.

Block-STM executes all transactions in a block speculatively in parallel. Each transaction records what it reads (read set) and what it writes (write set). After execution, a validation phase checks for conflicts: if transaction B read a value that transaction A wrote, B must be re-executed with the updated value. This is MVCC — Multi-Version Concurrency Control.

Now imagine minting NFTs with sequential names: "NFT #1", "NFT #2", "NFT #3"...

Every mint transaction must:

  1. Read the current total_supply counter from the collection object
  2. Compute token_name = "NFT #" + total_supply
  3. Increment total_supply by 1
  4. Create the token with that name

Step 1 (read) and Step 3 (write) on the same counter create a read-modify-write dependency. Every single mint transaction reads and writes the same global counter. Block-STM detects this as a conflict for every transaction pair — and re-executes them all sequentially. Instead of running 10,000 mints in parallel across CPU cores, you get 10,000 sequential operations. Throughput collapses completely.
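The conflict cascade above can be sketched with a toy model of Block-STM's validation phase (an illustrative simplification in Python, not the real Rust engine): each speculative execution records a read set and a write set, and a transaction must re-execute if it read a key an earlier transaction wrote.

```python
# Toy model of Block-STM validation (illustrative only, not the real engine):
# a transaction is invalidated if its read set intersects the write set of
# any transaction ordered before it.

def validate(transactions):
    """Return indices of transactions that must re-execute."""
    must_rerun = []
    for i, (reads_i, _) in enumerate(transactions):
        for j in range(i):
            _, writes_j = transactions[j]
            if reads_i & writes_j:   # read-write conflict with an earlier tx
                must_rerun.append(i)
                break
    return must_rerun

# Naive mints: every tx reads AND writes the shared total_supply counter.
naive = [({"total_supply"}, {"total_supply", f"token/{i}"}) for i in range(5)]
# Aggregator-style mints: no read of the counter, only unique token writes.
parallel = [(set(), {f"token/{i}"}) for i in range(5)]

print(validate(naive))     # every tx after the first must re-execute
print(validate(parallel))  # no conflicts at all
```

With the shared counter, every transaction after the first invalidates; with unique writes and no counter read, the whole block validates in one pass.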

This is the same reason Ethereum NFT launches are painful. Gas wars during hot mints, crashes, failed transactions — it's not just network congestion. It's a fundamental architectural constraint when all transactions touch the same counter.

Aggregator v1 — The First Solution (AIP-11, October 2022)

Aptos introduced Aggregators in October 2022 as AIP-11. The core insight: what if a counter could accumulate additions without requiring reads?

// Module: 0x1::aggregator
// aggregator.move

struct Aggregator has store {
    handle: address,
    key: address,
    limit: u128,
}

/// Add value to aggregator. Does NOT read current value.
public native fun add(aggregator: &mut Aggregator, value: u128);

/// Subtract value. Does NOT read current value.
public native fun sub(aggregator: &mut Aggregator, value: u128);

/// Read the current value — forces serialization point
public native fun read(aggregator: &Aggregator): u128;

/// Destroy the aggregator
public native fun destroy(aggregator: Aggregator);

The magic is in add(): it does NOT read the current value of the counter. Internally, the VM maintains a delta — "this transaction wants to add 1" — and applies all deltas atomically at commit time. Multiple transactions calling add() on the same aggregator are NOT in conflict, because none of them are reading the counter.

Aggregator v1 was initially used for the APT total supply counter — tracking the global coin supply without forcing sequential writes. But it had a critical limitation: you couldn't use the aggregated value within the same transaction that modifies it to derive names or IDs, because reading during execution would reintroduce the conflict.
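The delta semantics can be modeled in a few lines (an assumed simplification in Python: the real VM tracks per-transaction deltas and checks the `limit` field at commit): `add()` and `sub()` never touch the committed value, so any number of them can run concurrently.

```python
# Minimal sketch of v1 delta semantics: add()/sub() record deltas instead of
# reading the stored value; deltas are applied atomically at commit, and the
# limit from the Aggregator struct is enforced only at that point.

class AggregatorV1:
    def __init__(self, limit):
        self.committed = 0
        self.limit = limit
        self.pending = []          # per-transaction deltas, order-independent

    def add(self, value):          # note: no read of self.committed here
        self.pending.append(value)

    def sub(self, value):
        self.pending.append(-value)

    def commit(self):
        total = self.committed + sum(self.pending)
        assert 0 <= total <= self.limit, "aggregator overflow/underflow"
        self.committed, self.pending = total, []
        return self.committed

supply = AggregatorV1(limit=10**9)
for _ in range(10_000):            # 10,000 "transactions", none conflicting
    supply.add(1)
print(supply.commit())             # 10000
```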

Aggregator v2 and AggregatorSnapshot — The Full Solution (AIP-47, Q1 2024)

AIP-47 introduced aggregator_v2 with a crucial new primitive: AggregatorSnapshot.

// Module: 0x1::aggregator_v2
// aggregator_v2.move

struct Aggregator<IntElement: copy + drop> has store {
    value: IntElement,
    max_value: IntElement,
}

/// Snapshot — captures the value at commit time, not execution time
struct AggregatorSnapshot<IntElement: copy + drop> has store, drop {
    value: IntElement,
}

/// DerivedStringSnapshot — a string built from a snapshot at commit time
struct DerivedStringSnapshot has store, drop {
    value: String,
    padding: vector<u8>,
}

/// Increment — parallel safe, no read
public native fun add<IntElement: copy + drop>(
    aggregator: &mut Aggregator<IntElement>,
    value: IntElement
);

/// Create a snapshot — does NOT expose the value during execution
/// The actual value is substituted at commit time
public native fun snapshot<IntElement: copy + drop>(
    aggregator: &Aggregator<IntElement>
): AggregatorSnapshot<IntElement>;

/// Derive a string from a snapshot with prefix and suffix
/// e.g., snapshot of 42 with prefix "NFT #" → "NFT #42" at commit
public native fun derive_string_concat<IntElement: copy + drop>(
    snapshot: &AggregatorSnapshot<IntElement>,
    prefix: String,
    suffix: String,
): DerivedStringSnapshot;

/// Read the snapshot value — forces serialization, use carefully
public native fun read_snapshot<IntElement: copy + drop>(
    snapshot: &AggregatorSnapshot<IntElement>
): IntElement;

The key innovation: snapshot() captures a "promise" of the value — not the value itself. During parallel execution, no actual number is read. At commit time, after all deltas are applied and the final counter value is known, the VM substitutes the real numbers into every snapshot. The DerivedStringSnapshot is then resolved into actual strings like "NFT #1", "NFT #2", etc.
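The two-phase resolution can be modeled as follows (an assumed simplification in Python of the "delayed fields" mechanism mentioned later in this report): during execution each snapshot is just an opaque placeholder, and only the commit phase assigns real numbers and materializes the derived strings.

```python
# Sketch of commit-time snapshot resolution: speculative execution emits only
# deltas and placeholder names; commit substitutes the final counter values.

def execute_mints(n):
    """Speculative phase: each mint emits (delta, placeholder name) only."""
    outputs = []
    for tx in range(n):
        snapshot_id = tx                      # opaque handle, no value read
        outputs.append({"delta": 1, "name": ("NFT #", snapshot_id, "")})
    return outputs

def commit(outputs, start=1):
    """Commit phase: assign final counter values, resolve derived strings."""
    resolved, counter = [], start
    for out in outputs:
        prefix, _, suffix = out["name"]
        resolved.append(f"{prefix}{counter}{suffix}")
        counter += out["delta"]
    return resolved

print(commit(execute_mints(3)))  # ['NFT #1', 'NFT #2', 'NFT #3']
```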

The Complete NFT Minting Pattern

Here's exactly how a parallel-safe NFT collection looks in Move with aggregator_v2:

module collection_addr::parallel_nft {
    use aptos_framework::object::{Self, Object};
    use aptos_token_objects::collection;
    use aptos_token_objects::token;
    use aptos_std::aggregator_v2::{Self, Aggregator, AggregatorSnapshot};
    use std::option;
    use std::string::{Self, String};

    /// Stored at the collection creator's address
    struct MintState has key {
        /// Parallel-safe counter — no read needed during mint
        supply_counter: Aggregator<u64>,
        /// Reference to the collection object
        collection: Object<collection::Collection>,
    }

    /// Create the collection
    public entry fun create_collection(creator: &signer, name: String) {
        let constructor_ref = collection::create_unlimited_collection(
            creator,
            string::utf8(b"A parallel NFT collection"),
            name,
            option::none(),
            string::utf8(b"https://example.com"),
        );
        move_to(creator, MintState {
            supply_counter: aggregator_v2::create_unbounded_aggregator(),
            collection: object::object_from_constructor_ref<collection::Collection>(
                &constructor_ref
            ),
        });
    }

    /// Mint one NFT — fully parallel-safe.
    /// Illustrative: in production code, token::create_numbered_token performs
    /// steps 1-3 internally, and the token should be created with the
    /// collection creator's signer (e.g., derived from an ExtendRef).
    public entry fun mint(user: &signer, creator_addr: address) acquires MintState {
        let state = borrow_global_mut<MintState>(creator_addr);

        // 1. Snapshot BEFORE incrementing — captures current position
        //    without reading the actual value (no conflict!)
        let snapshot: AggregatorSnapshot<u64> = aggregator_v2::snapshot(
            &state.supply_counter
        );

        // 2. Increment — parallel safe, no read
        aggregator_v2::add(&mut state.supply_counter, 1);

        // 3. Derive token name from snapshot — resolved at commit time
        //    At commit: snapshot → actual number → "NFT #42"
        let token_name = aggregator_v2::derive_string_concat(
            &snapshot,
            string::utf8(b"NFT #"),
            string::utf8(b""),
        );

        // 4. Mint the token — name will be correct at commit
        token::create(
            user,
            collection::name(state.collection),
            string::utf8(b""),  // description
            token_name,          // DerivedStringSnapshot — resolved at commit
            option::none(),      // royalty
            string::utf8(b"https://example.com/nft"),
        );
    }
}

Steps 1-4 can run for thousands of users simultaneously. No transaction conflicts on the supply counter. Block-STM executes all of them in parallel. At commit time, the VM resolves all the snapshots to their actual values (1, 2, 3, ... 1,000,000) and builds all the token names atomically.

Why This Is Fundamentally Different From Ethereum Batching

Ethereum developers often work around sequential minting by batching transactions off-chain: one relayer sends a single transaction that mints 100 NFTs at once. This reduces the number of transactions but doesn't change the fundamental architecture — you're still reading and writing the same counter, just doing it less often. It's a band-aid.

Aptos aggregators eliminate the bottleneck at the execution engine level:

| Approach | Where | Mechanism | Conflict? |
| --- | --- | --- | --- |
| Ethereum sequential mint | EVM | Read-modify-write counter | All transactions conflict |
| Ethereum batching | Client-side | Bundle N mints into 1 tx | Reduces tx count, same conflict within tx |
| Aptos aggregator_v2 | Move VM | Delta accumulation, snapshot resolution | No conflicts — fully parallel |

The aggregator approach means 10,000 separate mint transactions from 10,000 different users can all execute simultaneously with zero conflicts. No bundling required. No relayer. Each user sends their own transaction and gets parallelism for free.

The Original 1M NFT Demo — What Actually Happened

The demonstration occurred during Aptos Previewnet in November 2023 (November 6–21). This was an internal Aptos Labs scalability test, not a named external NFT project, designed to validate AIP-43 (Digital Assets / Token Objects v2) and AIP-47 (Aggregator v2) before mainnet enablement in Q1 2024.

Official numbers:

- 1,000,000 NFTs minted in ~90 seconds (sustained 10,000+ NFTs per second)
- 5,000,000 NFTs minted in ~8 minutes

Gas costs were not disclosed for the Previewnet demo. The test used the new Token Objects (v2) standard, not Token v1. AIP-43 and AIP-47 were enabled on mainnet in Q1 2024.

The Math Today: What Would 1M NFTs Cost in April 2026?

The Aptos infrastructure has changed significantly since the 2023 demo:

| Component | Nov 2023 (Previewnet) | Apr 2026 (Mainnet) |
| --- | --- | --- |
| Consensus | Jolteon (pre-Raptr) | Baby Raptr + Velociraptr |
| Block time | ~200ms | <50ms (40% reduction from Velociraptr) |
| Execution | Block-STM v1 | Block-STM v2 (ramping) |
| Token standard | Early Token Objects | Token Objects v2 (AIP-43, ~10x cheaper) |
| Sustained TPS | 10,000+ (demo) | ~20,000 mainnet |
| Research peak TPS | n/a | 1.033M (single-node cluster) |

Estimated cost for 1 million NFT mints today:

Compare to Ethereum: a single NFT mint during a hot launch typically costs $20–$500 in gas. Minting 1 million NFTs on Ethereum would be practically impossible during peak demand — even with batching, the coordination overhead and gas costs would be prohibitive.

The Three Aggregator Modules

| Module | Path in aptos-core | Purpose |
| --- | --- | --- |
| aggregator | aptos-move/framework/aptos-stdlib/sources/aggregator/aggregator.move | v1 — parallel-safe u128 counters. Used for APT total supply. |
| aggregator_v2 | aptos-move/framework/aptos-stdlib/sources/aggregator/aggregator_v2.move | v2 — generic Aggregator<T>, AggregatorSnapshot, DerivedStringSnapshot. Used for NFT names. |
| aggregator_factory | aptos-move/framework/aptos-stdlib/sources/aggregator/aggregator_factory.move | Creates aggregator instances. Manages the underlying storage handles. |

IMPORTANT: Two Completely Different Things Called "Aggregator"

There is significant naming confusion in the Aptos ecosystem. Make sure you know which one is being discussed:

  1. Transaction aggregators (what enables 1M NFT minting): the Move VM primitives described on this page (aggregator, aggregator_v2) that make shared counters parallel-safe.

  2. Marketplace aggregators (a different thing entirely): applications that pull NFT listings from multiple marketplaces so users can compare prices in one place.

What Aggregators Enable Beyond NFTs

Transaction aggregators are not just for NFTs. Any use case that requires a global parallel-safe counter benefits:

The APT coin supply itself uses Aggregator v1 — every time APT is staked, unstaked, burned, or created, a parallel-safe aggregator tracks the total supply without any of these operations conflicting with each other.


Minting 20 Million NFTs on Aptos: Technical Capacity Analysis


Executive Summary

Minting 20 million NFTs on Aptos is feasible today in approximately 17-22 minutes at sustained mainnet throughput. With all planned upgrades deployed, the same operation could complete in under 30 seconds at theoretical maximum capacity. This document provides detailed calculations for every scenario, identifies bottlenecks at each layer, and includes a practical implementation guide.

| Scenario | TPS | Time to Mint 20M | Estimated Total Cost (APT) |
| --- | --- | --- | --- |
| Current mainnet (conservative) | 15,000 | 22.2 min | 3,900 |
| Current mainnet (sustained) | 20,000 | 16.7 min | 3,900 |
| Current mainnet (peak, aggregators optimized) | 30,000 | 11.1 min | 3,900 |
| Full Raptr + Block-STM v2 + Zaptos | 100,000 | 3.3 min | 1,100-1,650 |
| With Shardines (conservative) | 500,000 | 40 sec | 660-1,100 |
| With Shardines (theoretical max) | 1,000,000 | 20 sec | 440-880 |

Part 1: Current Infrastructure Analysis (April 2026)

1.1 Consensus Layer: Baby Raptr + Quorum Store

Current deployment: Baby Raptr is live on mainnet (~95% complete). It merges the previously separate Jolteon consensus and Quorum Store logic into a unified protocol.

Block timing:

Transactions per block:

Consensus throughput for NFT minting:

Verdict: Consensus is NOT the bottleneck for 20M NFT mints at current throughput levels. Baby Raptr can sustain the ordering rate needed.

1.2 Execution Layer: Block-STM v1

Current deployment: Block-STM v1 is the production execution engine. Block-STM v2 is at ~60% development (behind a config.local.blockstm_v2 flag).

How Block-STM handles NFT mints:

Each NFT mint transaction touches several state locations:

The supply counter problem and aggregators:

The critical bottleneck for parallel NFT minting is the collection supply counter. Every mint increments a shared counter, creating a serial dependency chain if handled naively. Block-STM would detect read-write conflicts on this counter and force sequential re-executions.

Aptos solves this with aggregator_v2 / delayed fields (DelayedFieldID and DelayedChange in the execution output):

With aggregators properly used, the supply counter is effectively eliminated as a conflict source.

Remaining execution bottlenecks:

Even with aggregators, several per-transaction operations create work:

| Operation | Cost Category | Conflict Potential |
| --- | --- | --- |
| Collection supply counter | Aggregator (delta) | None (resolved) |
| New object creation | Unique write | None (each token gets unique address) |
| Token metadata write | Unique write | None |
| Creator sequence number | Per-sender | Conflicts if single sender |
| Event emission | Append-only | Low (event accumulator) |
| Move bytecode execution | CPU | Parallelizable |

The sender sequence number is a critical remaining bottleneck if all 20M mints come from a single account. In practice, you must use multiple sender accounts or fee payer / sponsored transactions to avoid serialization on the sender's sequence number.

Per-transaction gas cost estimate:

An NFT mint using Token Objects v2 (aptos_token_objects) involves:

Estimated total: ~3,200-4,200 gas units per mint

At the standard gas unit price of 100 Octas (0.000001 APT per gas unit):

Using the observed 0.000195 APT (ambassador contract, 195 gas units) figure:
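The per-mint and total figures follow directly from the observed 195-gas-unit contract (a worked check; 1 APT = 100,000,000 Octas, and the $0.85/APT price is the rate assumed elsewhere in this report):

```python
# Worked cost arithmetic for the ambassador demo contract: 195 gas units
# per mint at the standard price of 100 Octas per gas unit.

OCTAS_PER_APT = 100_000_000
gas_units = 195
gas_unit_price_octas = 100
mints = 20_000_000

apt_per_mint = gas_units * gas_unit_price_octas / OCTAS_PER_APT
total_apt = gas_units * gas_unit_price_octas * mints / OCTAS_PER_APT

print(apt_per_mint)                 # 0.000195 APT per mint
print(total_apt)                    # 3900.0 APT for 20M mints
print(round(total_apt * 0.85))      # ≈ $3,315 at $0.85/APT
```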

CPU utilization:

Block-STM dispatches transactions to a rayon thread pool (executor_thread_pool: Arc). Modern validator nodes typically run 32-64 CPU cores. For non-conflicting NFT mints (with aggregators), Block-STM achieves near-linear scaling up to the core count:

1.3 Storage Layer: Jellyfish Merkle Tree

Current deployment: Storage sharding is deployed on mainnet (~95%). The JMT is partitioned across 16 shards within a single node.

State write amplification per NFT:

Each new NFT creates a new leaf in the Jellyfish Merkle Tree. The write path involves:

Write amplification per NFT:

For 20M NFTs:

Total storage impact: ~80-140 GB for the complete 20M NFT mint operation.
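Working backwards from that total (an assumed back-of-envelope, using the ~80-140 GB range above), the implied write amplification is roughly 4-7 KB of state per NFT, covering the token leaf plus updated internal JMT nodes:

```python
# Implied per-NFT write amplification from the 80-140 GB total above.

nfts = 20_000_000
total_low_gb, total_high_gb = 80, 140

kb_per_nft_low = total_low_gb * 1024**2 / nfts    # GB -> KB, then per NFT
kb_per_nft_high = total_high_gb * 1024**2 / nfts

print(round(kb_per_nft_low, 1), "to", round(kb_per_nft_high, 1), "KB per NFT")
```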

Hot state caching (recent work by wqfish):

Recent commits show active hot state optimization:

For a sustained 20M mint operation, the hot state cache would cover:

The hot state cache significantly reduces RocksDB read I/O during execution, though write I/O remains the primary storage bottleneck.

Disk I/O as bottleneck:

With storage sharding across 16 JMT shards:

Verdict: Storage is a secondary bottleneck. The 16-shard JMT with hot state caching can sustain the required write rate, though long-running mints may see some performance degradation as RocksDB compaction catches up.

1.4 Network Layer

Transaction size for an NFT mint:

A typical Token Objects v2 mint transaction contains:

Total serialized transaction size: ~400-700 bytes (reference: historical data showed ~700 bytes per transaction for Aptos network messages)

Bandwidth calculation:

At 20,000 TPS with ~600 bytes average per transaction, the raw transaction stream is ~12 MB/s (~96 Mbps); with dissemination, replication, and protocol overhead, budget on the order of 20 MB/s (160 Mbps).

Modern validators have 1-10 Gbps network connections, so even 160 Mbps is well within capacity.

Verdict: Network is NOT a bottleneck.

1.5 Current Infrastructure: Time Calculations

Scenario A: Conservative sustained (15,000 TPS)

Scenario B: Observed sustained (20,000 TPS)

Scenario C: Peak with optimized aggregators (30,000 TPS)
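The three scenario timings reduce to simple division (reproduced here as arithmetic; the gas cost is the same 3,900 APT in all three, since throughput does not change per-transaction cost):

```python
# Scenario timings: 20M mints at each sustained TPS level.

def mint_time_minutes(total_txns, tps):
    return round(total_txns / tps / 60, 1)

TOTAL = 20_000_000
for label, tps in [("A: conservative", 15_000),
                   ("B: sustained", 20_000),
                   ("C: peak", 30_000)]:
    print(label, mint_time_minutes(TOTAL, tps), "min")
# A: 22.2 min, B: 16.7 min, C: 11.1 min
```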

Cost calculation:

Storage growth:


Part 2: Full Stack Upgrade Analysis

2.1 Full Raptr (Prefix Consensus with Decoupled Voting)

Status: Next phase after Baby Raptr (TBD deployment)

Improvements:

Expected improvements for NFT minting:

Impact on 20M mint: Consensus moves from "not a bottleneck" to "definitively not a bottleneck." The improvement unlocks higher TPS if execution and storage can keep up.

2.2 Block-STM v2

Status: ~60% development, behind blockstm_v2 config flag

Improvements:

Expected per-transaction improvements:

Impact on 20M mint: Execution throughput could increase from ~30K effective TPS to ~50-60K TPS for optimized NFT workloads.

2.3 MonoMove VM

Status: Early Prototype (active development by vgao1996, georgemitenkov, calintat)

Architecture:

Expected improvements for NFT minting:

Impact on 20M mint:

2.4 Zaptos (Optimistic Pipelining)

Status: Designed, implementation in progress

Three optimistic techniques:

Latency formula:

Impact on 20M mint:

2.5 Shardines (Internal Validator Sharding)

Status: Storage sharding deployed (~95%), execution sharding and consensus sharding are in design/development

Three-layer sharding architecture:

- Dynamic partitioner analyzes incoming batches and assigns to execution shards based on access patterns

- Each shard runs its own Block-STM instance

- For NFT minting: all mints go to different objects (unique addresses), so they partition cleanly across shards

- The shared collection object (supply counter via aggregator) can be handled by cross-shard delta aggregation

- Target: multiple Block-STM instances running in parallel within a single validator

- Multiple data dissemination shards handle transaction propagation in parallel

- Each shard obtains independent Proof-of-Store certificates

- A consensus coordinator orders metadata from all shards

Performance targets:

Impact on 20M mint:

2.6 Archon (Proxy-Primary Coordination)

Status: Architecture-level concept

Archon introduces a proxy-primary coordination model where:

Impact on 20M mint: Marginal improvement to sustained throughput (~5-10%) by reducing validator load. Primary benefit is operational, not throughput.

2.7 Full-Stack Calculations

Scenario D: Full Raptr + Block-STM v2 + Zaptos (Conservative, 100K TPS)

Assumptions:

Calculations:

Scenario E: With Shardines, Conservative (500K TPS)

Assumptions:

Calculations:

Scenario F: Theoretical Maximum (1M TPS)

Assumptions:

Calculations:
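The post-upgrade timings for Scenarios D-F follow the same division (arithmetic check of the figures in the summary table):

```python
# Post-upgrade timing arithmetic for Scenarios D, E, and F.

TOTAL = 20_000_000
scenarios = {"D: Raptr + Block-STM v2 + Zaptos": 100_000,
             "E: Shardines (conservative)": 500_000,
             "F: Shardines (theoretical max)": 1_000_000}
for label, tps in scenarios.items():
    seconds = TOTAL / tps
    print(label, f"{seconds:.0f} s ({seconds / 60:.1f} min)")
# D: 200 s (3.3 min), E: 40 s (0.7 min), F: 20 s (0.3 min)
```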

Comparative Summary:

| Upgrade Component | TPS Multiplier | Latency Impact | Gas Cost Impact |
| --- | --- | --- | --- |
| Full Raptr | 3-5x consensus ceiling | -40% block time | None |
| Block-STM v2 | 2-3x execution | Marginal | None |
| MonoMove VM | 2-5x execution | Faster per-tx | -30 to -50% |
| Zaptos | 1.2-1.3x effective | -40% end-to-end | None |
| Shardines (execution) | 4-16x (with shard count) | None | None |
| Shardines (consensus) | 4-8x dissemination | None | None |
| Archon | 1.05-1.1x | Marginal | None |

Part 3: Practical Guide to Minting 20 Million NFTs

3.1 Move Module Design

The collection contract must use the aggregator pattern (via aptos_token_objects) to avoid supply counter conflicts.

module deployer::mass_mint {
    use aptos_framework::object;
    use aptos_token_objects::collection;
    use aptos_token_objects::token;
    use aptos_token_objects::royalty;
    use std::option;
    use std::string::{Self, String};
    use std::signer;

    /// The collection resource, stored at the deployer's address.
    struct MintConfig has key {
        collection_name: String,
        base_uri: String,
        /// Using object::ExtendRef allows the contract to mint
        /// without requiring the original creator signer each time.
        extend_ref: object::ExtendRef,
    }

    /// Initialize the collection. Called once by the deployer.
    /// The collection internally uses aggregator_v2 for the supply counter,
    /// which is the default behavior in aptos_token_objects::collection.
    public entry fun create_collection(
        creator: &signer,
        description: String,
        name: String,
        base_uri: String,
        max_supply: u64,  // Set to 20,000,000
    ) {
        let royalty = royalty::create(5, 100, signer::address_of(creator));
        let constructor_ref = collection::create_fixed_collection(
            creator,
            description,
            max_supply,
            name,
            option::some(royalty),
            base_uri,
        );
        let extend_ref = object::generate_extend_ref(&constructor_ref);
        move_to(creator, MintConfig {
            collection_name: name,
            base_uri,
            extend_ref,
        });
    }

    /// Mint a single NFT. Designed to be called in parallel
    /// by multiple sender accounts (via fee payer pattern).
    /// Each call creates one token object at a unique address.
    public entry fun mint(
        _minter: &signer,
        creator_addr: address,
        token_name: String,
        token_description: String,
        token_uri: String,
    ) acquires MintConfig {
        let config = borrow_global<MintConfig>(creator_addr);
        let creator_signer = object::generate_signer_for_extending(
            &config.extend_ref
        );
        let _constructor_ref = token::create_numbered_token(
            &creator_signer,
            config.collection_name,
            token_description,
            token_name,         // name_with_index_prefix (index appended at commit)
            string::utf8(b""),  // name_with_index_suffix
            option::none(),     // royalty override
            token_uri,
        );
        // Token is created at a deterministic address.
        // The collection supply counter is updated via aggregator
        // (delta operation, no read-write conflict).
    }
}

Key design decisions:

3.2 Collection Setup

   aptos move publish --named-addresses deployer=default
   aptos move run \
     --function-id deployer::mass_mint::create_collection \
     --args 'string:My Collection' 'string:Collection Name' \
     'string:https://assets.example.com/' 'u64:20000000'
   aptos account list --query resources --account deployer

3.3 Transaction Submission Strategy

The single-sender problem: If all 20M transactions use one sender, sequence numbers serialize execution. Each transaction must wait for the previous one's sequence number to commit.

Solution: Multi-sender parallel submission

Use N sender accounts, each submitting 20M/N transactions:

| Sender Count | Txns per Sender | Sequence Number Overhead | Effective Parallelism |
| --- | --- | --- | --- |
| 1 | 20,000,000 | Fully serialized | 1x |
| 10 | 2,000,000 | Manageable | ~10x |
| 100 | 200,000 | Low | ~100x |
| 1,000 | 20,000 | Negligible | ~1,000x |

Recommended: 100-1,000 sender accounts for current mainnet.

Fee payer pattern: Use a single funding account as a fee payer with orderless (nonce-based) transactions from the minting accounts. AIP-123 orderless transactions allow parallel submission without sequence number coordination.

Transaction generation pipeline:

[Metadata Generator] --> [Transaction Builder] --> [Signer Pool] --> [RPC Submitter Pool]
     (20M items)         (batch of 1000)         (100 signers)      (10-50 RPC connections)
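The pipeline above can be sketched as an async multi-sender dispatcher (a hedged illustration: `fake_submit`, `run_sender`, and `mint_all` are hypothetical names, and the placeholder submit function stands in for a real signed-transaction POST to a fullnode's REST API). The point is structural: each sender account owns its own sequence-number stream, so senders never block one another and parallelism scales with the sender count.

```python
# Sketch of the submission pipeline: N senders, each with an independent
# sequence-number stream, submitting their share of the mints concurrently.

import asyncio

async def run_sender(sender_id, txn_count, submit_fn, results):
    seq = 0
    for _ in range(txn_count):
        await submit_fn(sender_id, seq)   # one in-flight txn per sender here;
        seq += 1                          # real clients pipeline a window
    results[sender_id] = seq

async def mint_all(total_txns, sender_count, submit_fn):
    per_sender = total_txns // sender_count
    results = {}
    await asyncio.gather(*(run_sender(s, per_sender, submit_fn, results)
                           for s in range(sender_count)))
    return sum(results.values())

async def fake_submit(sender_id, seq):    # placeholder for the real RPC call
    await asyncio.sleep(0)

submitted = asyncio.run(mint_all(10_000, sender_count=100, submit_fn=fake_submit))
print(submitted)  # 10000
```

A production orchestrator would add the retry, backpressure, and sequence-number tracking responsibilities listed in the architecture diagram below.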

3.4 Client Infrastructure Requirements

Hardware for the minting client:

| Component | Minimum | Recommended |
| --- | --- | --- |
| CPU | 8 cores | 16+ cores |
| RAM | 16 GB | 32 GB |
| Network | 100 Mbps | 1 Gbps |
| Storage | 50 GB SSD | 100 GB NVMe |

RPC endpoints:

- 10-20 fullnode RPC endpoints (self-hosted or from different providers)

- Or use the Aptos transaction submission service if available

- Each endpoint handles ~1,000-2,000 TPS of submission

Recommended RPC strategy:

Software architecture:

┌─────────────────────────────────────────────────────┐
│                   Orchestrator                       │
│  - Tracks progress (which tokens minted)            │
│  - Manages sender account sequence numbers          │
│  - Handles retries and failures                     │
│  - Monitors mempool backpressure                    │
└───────────┬─────────────┬──────────────┬────────────┘
            │             │              │
     ┌──────▼──────┐ ┌────▼──────┐ ┌────▼──────┐
     │ Submitter 1 │ │Submitter 2│ │Submitter N│
     │ (10 senders)│ │(10 senders│ │(10 senders│
     │ RPC Pool A  │ │ RPC Pool B│ │ RPC Pool C│
     └─────────────┘ └───────────┘ └───────────┘

3.5 Cost Estimation Worksheet

| Cost Item | Unit Cost | Quantity | Total |
| --- | --- | --- | --- |
| Gas fees (current) | 0.000195 APT | 20,000,000 | 3,900 APT ($3,315 at $0.85/APT) |
| Gas fees (w/ MonoMove) | 0.00006 APT | 20,000,000 | 1,200 APT |
| Sender account creation | ~0.001 APT | 100-1,000 | 0.1-1 APT |
| Sender account funding | (refundable) | 100-1,000 | ~100 APT float |
| RPC infrastructure | ~$500/mo | 5-10 nodes | $2,500-5,000/mo |
| Minting client servers | ~$200/mo | 2-3 | $400-600/mo |
| Metadata storage (IPFS/Arweave) | ~$0.001/item | 20,000,000 | $20,000 |
| **Total (current, at $0.85/APT)** | | | ~$23,500-24,000 |
| **Total (with MonoMove, at $0.85/APT)** | | | ~$21,700-22,200 |

Note: The dominant cost is metadata hosting (IPFS/Arweave), not on-chain gas. If using centralized storage for metadata URIs, the cost drops significantly.

3.6 Monitoring and Verification

During minting:

- Monitor committed_transactions vs submitted_transactions counter

- Track pending mempool size: if growing, reduce submission rate (backpressure)

- Target: submitted - confirmed gap < 5,000 transactions

- SEQUENCE_NUMBER_TOO_OLD: Sender sequence number already used; refetch and retry

- SEQUENCE_NUMBER_TOO_NEW: Gap in sequence; fill in missing transactions

- INSUFFICIENT_BALANCE_FOR_TRANSACTION_FEE: Refund sender accounts

- TRANSACTION_EXPIRED: Increase expiration time or submit faster

- Target error rate: < 0.1%

   # Monitor chain TPS via indexer or API
   curl https://fullnode.mainnet.aptoslabs.com/v1/ | jq '.ledger_version'
   # Sample every second, compute delta = realized TPS
   aptos move view \
     --function-id 0x4::collection::count \
     --args 'address:<collection_address>'
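The curl sampling loop above can be automated as a pure calculation over two ledger samples (a sketch: `realized_tps` is a hypothetical helper; a live version would fetch the fullnode root endpoint and parse `ledger_version` from the JSON):

```python
# Realized TPS from two ledger_version samples taken some seconds apart.

def realized_tps(version_a, version_b, seconds_elapsed):
    """Committed transactions per second between two ledger samples."""
    if seconds_elapsed <= 0:
        raise ValueError("need a positive sampling interval")
    return (version_b - version_a) / seconds_elapsed

# e.g. two samples taken 10 s apart
print(realized_tps(1_843_000_000, 1_843_200_000, 10.0))  # 20000.0
```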

Post-minting verification:

- Token metadata (name, description, URI) is correct

- Token is owned by the intended recipient

- Token belongs to the correct collection

3.7 Common Pitfalls and How to Avoid Them

Pitfall 1: Single-sender sequence number bottleneck

Pitfall 2: Not using aggregator-based supply tracking

Pitfall 3: Overloading a single RPC endpoint

Pitfall 4: Transaction expiration during backpressure

Pitfall 5: Insufficient gas estimation

Pitfall 6: Metadata URI availability

Pitfall 7: State growth exceeding validator resources

Pitfall 8: Duplicate token names/URIs


Appendix A: Key Data Sources

| Metric | Value | Source |
| --- | --- | --- |
| Mainnet sustained TPS | ~20,000 | Architecture overview, mainnet observations |
| Block-STM benchmark TPS | >160,000 | PPoPP 2023 paper, architecture overview |
| Raptr benchmark TPS | >250,000 | Architecture overview (global-scale experiments) |
| Shardines target (non-conflicting) | >1,000,000 | Architecture overview |
| Shardines target (conflicting) | >500,000 | Architecture overview |
| Block close time | ~250ms | Architecture overview performance table |
| Baby Raptr hop reduction | 6 to 4 hops | Architecture overview |
| Zaptos latency reduction | 40% | Architecture overview |
| Standard transaction size limit | 64 KB | Transaction states documentation |
| Epoch duration | 7,200 seconds (2 hours) | Architecture overview |
| JMT shard count | 16 | Storage subsystem documentation |
| Block-STM v2 progress | ~60% | Feature progress tracker |
| MonoMove progress | Early prototype | Feature progress tracker |
| Current node version | v1.43.2 (mainnet) | GitHub releases (April 2026) |

Appendix B: Timeline Sensitivity

The calculations in Part 2 depend on upgrades that have no confirmed deployment dates:

| Upgrade | Earliest Realistic | Confidence |
| --- | --- | --- |
| Full Raptr | Late 2026 | Medium |
| Block-STM v2 | Mid-Late 2026 | Medium-High (60% done) |
| MonoMove VM | 2027+ | Low (early prototype) |
| Zaptos | Late 2026 - Early 2027 | Medium |
| Execution Sharding (Shardines) | 2027+ | Low (design phase) |
| Consensus Sharding (Shardines) | 2027+ | Low (design phase) |

For planning purposes: the current infrastructure (Part 1) numbers are what you can rely on today. The 100K TPS scenario (Full Raptr + Block-STM v2 + Zaptos) is the most likely near-term upgrade path. The 500K-1M TPS scenarios (Shardines) are longer-term aspirational targets.

Appendix C: Comparison with Other Chains

For context, minting 20M NFTs on other major blockchains:

| Chain | Practical TPS | Time for 20M | Approx. Cost |
| --- | --- | --- | --- |
| **Aptos (current)** | 20,000 | 17 min | $3,315 |
| **Aptos (full upgrades)** | 500K-1M | 20-40 sec | $340-935 |
| Solana | 3,000-5,000* | 67-111 min | $10,000-20,000 |
| Ethereum L1 | 15-30 | 7.7-15.4 days | $50M+ |
| Ethereum L2 (Arbitrum) | 1,000-2,000 | 2.8-5.6 hours | $200,000-500,000 |
| Sui | 10,000-20,000 | 17-33 min | $15,000-30,000 |

*Solana practical TPS limited by vote transactions consuming ~50% of block space and priority fee market congestion.

Aptos is uniquely positioned for this workload due to: (1) aggregator-based supply counters eliminating the primary parallelization bottleneck, (2) Block-STM's speculative execution enabling near-linear scaling with core count, and (3) the Shardines roadmap promising horizontal scaling within a single validator cluster.


ELI5 — Explain Like I'm 5

The Big Picture: Why Is This Hard?

Imagine a popular concert where 10,000 people want to buy tickets at the same time. Each ticket needs a unique number. The problem: everyone has to ask "what's the last ticket number?" before they can buy the next one. If 10,000 people ask that question simultaneously, you only have one answer at a time — so everyone ends up waiting in line anyway. That's exactly what happens on most blockchains when thousands of people try to mint NFTs at once.

What Aptos Invented: The Magic Ticket Machine

Aptos built a special kind of counter called an Aggregator. Here's the magic: instead of asking "what's the current number?", each person just says "add 1 to whatever the total is." The machine collects all these "add 1" requests from all 10,000 people at once, and at the very end — after everyone has submitted — it hands out the real numbers in order.

Nobody had to wait. Nobody conflicted with anyone else. And everyone got their correct, unique ticket number.

The AggregatorSnapshot Trick

But wait — NFT names need to say "NFT #42" or "NFT #7,893". How do you build that name if you don't know your number yet? That's what AggregatorSnapshot solves. Think of it like a placeholder receipt: "your number will be filled in here." The blockchain hands you a blank receipt, you do all your work with it, and at the very last moment — when all the counting is done — it fills in your actual number everywhere it appears.

The November 2023 Demo

Aptos proved this worked by minting 1 million unique NFTs in about 90 seconds on their test network. Then 5 million in about 8 minutes. That's over 10,000 NFTs per second, sustained. For comparison, during a big NFT launch on Ethereum, people often pay $50–$500 in fees just to mint a single NFT, and the whole thing crashes anyway.

What It Costs Today

On Aptos mainnet today, minting 1 million NFTs would take about 33–50 seconds (even faster than 2023 thanks to infrastructure improvements) and cost about 110 APT in total — roughly $0.0001 per NFT at current prices. The entire infrastructure — faster blocks, better parallel execution, cheaper transactions — has improved dramatically since 2023.

Don't Confuse the Two "Aggregators"

There's an unfortunate naming collision. When people say "aggregator" on Aptos, they might mean:

  • Transaction aggregators (what this page is about): built into the Move VM, enable parallel minting
  • Marketplace aggregators: separate apps that pull NFT listings from multiple marketplaces so you can compare prices, like a Kayak for NFTs

These are completely unrelated things with the same name.

What You Learned

Aptos built a counter that thousands of people can increment at the exact same time without any of them conflicting with each other. This is possible because the counter doesn't need to be read during the process — only at the very end. This one innovation unlocks massively parallel NFT minting, DeFi operations, gaming, and more. It's built into the Move VM itself, not a workaround on top of it.

So How Fast and Cheap Can We Mint 20 Million NFTs?

We tracked down the exact contract from the 1M NFT demo — it's called ambassador::ambassador, and it uses 195 gas units per mint. That's the real number from the actual code in aptos-core.

| Scenario | Time | Cost (APT) | Cost (USD at $0.85) |
| --- | --- | --- | --- |
| Today (20K TPS, demo contract) | 16.7 min | 3,900 APT | $3,315 |
| Today (20K TPS, minimal NFT) | 16.7 min | 2,000 APT | $1,700 |
| Full Raptr + Block-STM v2 | 3.3 min | ~2,000-3,900 APT | $1,700-3,315 |
| With Shardines | 40 sec | ~2,000-3,900 APT | $1,700-3,315 |
| Theoretical max (1M TPS) | 20 sec | ~2,000-3,900 APT | $1,700-3,315 |

For comparison: minting 20 million NFTs on Ethereum L1 would cost upwards of $50 million in gas and take 8-15 days. On Aptos, on-chain gas runs $1,700-3,315 (roughly $22-24K all-in once metadata hosting is included) and the mint takes about 17 minutes — dropping to 20 seconds with full upgrades.

The demo contract wasn't even minimal — it included property maps, rank/level tracking, burn refs, and soulbound restrictions. A stripped-down Token Object costs only ~100 gas units (vs 195 for ambassador), putting the gas floor around 2,000 APT (~$1,700) for 20M NFTs.

The key trick: You need 100-1,000 separate sender accounts submitting in parallel, because even though the NFT supply counter doesn't bottleneck (aggregators), each sender's account sequence number still increments sequentially.


Related Systems

Block-STM · Token Objects · AIP-43 · AIP-47
