πŸ—ƒ Fuel Indexer

The Fuel indexer is a standalone service that can be used to index various components of the blockchain. These indexable components include blocks, transactions, receipts, and state within the Fuel network, allowing for high-performance read-only access to the blockchain for advanced dApp use-cases.

By using a combination of Fuel-flavored GraphQL schema, a SQL backend, and indexers written in Rust, users of the Fuel indexer can get started creating production-ready backends for their dApps, meant to go fast πŸš—πŸ’¨.

For those wanting to build dApp backends right away, feel free to check out the Quickstart. And for those willing to contribute to the Fuel indexer project, please feel free to read our contributor guidelines as well as the For Contributors chapter of the book.

Architecture

[Fuel indexer architecture diagram]

The Fuel indexer is meant to run alongside a Fuel node and a database. The typical flow of information through the indexer is as follows:

  1. A Sway smart contract emits receipts during its execution on the Fuel node.
  2. Blocks, transactions, and receipts from the node are monitored by the Fuel indexer service and checked for specific user-defined event types.
  3. When a specific event type is found, an indexer executes the corresponding handler from its module.
  4. The handler processes the event and stores the indexed information in the database.
  5. A dApp queries for blockchain data by using the indexer's GraphQL API endpoint, which fetches the desired information from the corresponding index in the database and returns it to the user.
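
As a rough sketch of steps 3 and 4 above (the SomeLoggedType and SomeEntity names are hypothetical placeholders rather than part of the Fuel indexer API), a handler inside an indexer module looks roughly like this:

fn handle_event(event: SomeLoggedType, block: BlockData) {
    // Step 3: the indexer calls this handler whenever a matching event type
    // is found among the monitored blocks, transactions, and receipts.
    // Step 4: build an entity defined in your GraphQL schema and save it.
    let record = SomeEntity {
        id: event.id,
        block_height: block.height,
    };

    record.save();
}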

Dependencies

To run the Fuel indexer, you'll need to install a few dependencies on your system; each is described in the sections below.

If you don't want to install a database directly onto your system, you can use Docker to run it as an isolated container. You can install it by following the install instructions. For reference purposes, we provide a docker compose file that runs a Postgres database and the Fuel indexer service.

Note for Apple Silicon macOS users: Using the Fuel indexer through Docker on Apple Silicon systems is currently not supported. We're working to bring support to these systems.

Also, it's assumed that you have the Rust programming language installed on your system. If that is not the case, please refer to the Rust installation instructions for more information.

fuelup

We strongly recommend that you use the Fuel indexer through forc, the Fuel orchestrator. You can get forc (and other Fuel components) by way of fuelup, the Fuel toolchain manager. Install fuelup by running the following command, which downloads and runs the installation script.

curl --proto '=https' --tlsv1.2 -sSf https://install.fuel.network/fuelup-init.sh | sh

After fuelup has been installed, the forc index command and the fuel-indexer binary will be available on your system.

Database

At this time, the Fuel indexer requires the use of a database. We currently support a single database option: PostgreSQL. PostgreSQL is a full-featured database solution that requires a running database server.

PostgreSQL

Note: The following explanation is for demonstration purposes only. For an even faster setup on some platforms, you can use the forc index postgres command. A production setup should use secure users, permissions, and passwords.

macOS

On macOS systems, you can install PostgreSQL through Homebrew. If Homebrew isn't present on your system, you can install it according to its instructions. Once Homebrew is installed, add PostgreSQL to your system by running brew install postgresql, then start the service with brew services start postgresql. You'll need to create a database for your index data, which you can do by running createdb [DATABASE_NAME]. You may also need to create the postgres role; you can do so by running createuser -s postgres.

Linux

For Linux-based systems, the installation process is similar. First, you should install PostgreSQL according to your distribution's instructions. Once installed, there should be a new postgres user account; you can switch to that account by running sudo -i -u postgres. After you have switched accounts, you may need to create a postgres database role by running createuser --interactive. You will be asked a few questions; the name of the role should be postgres and you should elect for the new role to be a superuser. Finally, you can create a database by running createdb [DATABASE_NAME].

In either case, your PostgreSQL database should now be accessible at postgres://postgres@localhost:5432/[DATABASE_NAME].

WASM

Two additional components will be required to build your indexers: wasm-snip (installed via cargo) and the wasm32-unknown-unknown target (installed via rustup).

As of this writing, there is a small bug in newly built Fuel indexer WASM modules that produces a WASM runtime error due to an errant upstream dependency. For now, you can use wasm-snip to remove the errant symbols from the WASM module. An example can be found in the related script here.

wasm-snip

To install wasm-snip:

cargo install wasm-snip

wasm32 target

To install the wasm32-unknown-unknown target via rustup:

rustup target add wasm32-unknown-unknown

Note for Apple Silicon macOS users: Due to the default architecture-specific libraries that are shipped with macOS, you may have trouble building WASM binaries. Please refer to this section of the Module page for more information.

Quickstart

In this tutorial you will:

  1. Bootstrap your development environment.
  2. Create, build, and deploy an index to an indexer service hooked up to Fuel's beta-3 testnet.
  3. Query the indexer service for indexed data using GraphQL.

1. Setting up your environment

In this Quickstart, we'll use the forc index tooling to bootstrap a PostgreSQL database backend and spin up a Fuel indexer service. We will also use Fuel's toolchain manager fuelup to install the forc-index binary that we'll use to develop our indexer.

1.1 Install fuelup

To install fuelup with the default features/options, use the following command, which downloads the fuelup installation script and runs it interactively.

curl \
  --proto '=https' \
  --tlsv1.2 -sSf https://fuellabs.github.io/fuelup/fuelup-init.sh | sh

If you require a non-default fuelup installation, please read the fuelup installation docs.

1.2 WebAssembly (WASM) Setup

Indexers are typically compiled to WASM and thus you'll need to have the proper WASM compilation target available on your system. You can install it through rustup:

rustup target add wasm32-unknown-unknown

Additionally, you'll need the wasm-snip utility in order to shrink the WASM binary size and cut out errant symbols. You can install it through cargo:

cargo install wasm-snip

2. Using the forc-index plugin

The primary means of interfacing with the Fuel indexer for index development is the forc-index CLI tool. forc-index is a forc plugin specifically created to interface with the Fuel indexer service. Since we already installed fuelup in step 1.1, we can check that the forc-index binary was successfully installed and added to our PATH.

which forc-index
/Users/me/.fuelup/bin/forc-index

IMPORTANT: fuelup will install several binaries from the Fuel ecosystem and add them into your PATH, including the fuel-indexer binary. The fuel-indexer binary is the primary binary that users can use to spin up a Fuel indexer service.

which fuel-indexer
/Users/me/.fuelup/bin/fuel-indexer

2.1 Check for components

Once the forc-index plugin is installed, let's go ahead and see what indexer components we have installed.

Many of these components are required for development work (e.g., fuel-core, psql), while others (e.g., wasm-snip, fuelup) are required even for non-development usage.

forc index check
+--------+------------------------+---------------------------------------------------------+
| Status |       Component        |                         Details                         |
+--------+------------------------+---------------------------------------------------------+
|   ⛔️   | fuel-indexer binary    |  Can't locate fuel-indexer.                             |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | fuel-indexer service   |  Local service found: PID(63967) | Port(29987).         |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | psql                   |  /usr/local/bin/psql                                    |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | fuel-core              |  /Users/me/.cargo/bin/fuel-core                         |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | docker                 |  /usr/local/bin/docker                                  |
+--------+------------------------+---------------------------------------------------------+
|   ⛔️   | fuelup                 |  Can't locate fuelup.                                   |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | wasm-snip              |  /Users/me/.cargo/bin/wasm-snip                         |
+--------+------------------------+---------------------------------------------------------+
|   ⛔️   | forc-postgres          |  Can't locate fuelup.                                   |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | rustc                  |  /Users/me/.cargo/bin/rustc                             |
+--------+------------------------+---------------------------------------------------------+
|   βœ…   | forc-wallet            |  /Users/me/.cargo/bin/forc-wallet                       |
+--------+------------------------+---------------------------------------------------------+

2.2 Set up a Database and Start the Indexer Service

To quickly set up and bootstrap the PostgreSQL database that we'll need, we'll use forc index and its forc index postgres subcommand.

We can quickly create a bootstrapped database and start the Fuel indexer service by running the following command:

IMPORTANT: Ensure that any local PostgreSQL instance that is running on port 5432 is stopped.

forc index start \
    --embedded-database \
    --fuel-node-host node-beta-2.fuel.network \
    --fuel-node-port 80

You should see output indicating the successful creation of a database and start of the indexer service; there may be much more content in your session, but it should generally contain output similar to the following lines:

πŸ“¦ Downloading, unpacking, and bootstrapping database...

β–Ήβ–Ήβ–Έβ–Ήβ–Ή ⏱  Setting up database...

πŸ’‘ Creating database at 'postgres://postgres:postgres@localhost:5432/postgres'

βœ… Successfully created database at 'postgres://postgres:postgres@localhost:5432/postgres'.

βœ… Successfully started database at 'postgres://postgres:postgres@localhost:5432/postgres'.

βœ… Successfully started the indexer service.

You can press Ctrl+C to exit the forc index start process; your indexer service and database will keep running in the background.

2.3 Creating a new indexer

Now that we have our development environment set up, the next step is to create an indexer.

forc index new hello-indexer --namespace my_project && cd hello-indexer

The namespace of your project is a required option. You can think of a namespace as your organization name or company name. Your project might contain one or many indexers all under the same namespace.

forc index new hello-indexer --namespace my_project

β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—         β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—  β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘         β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ•‘         β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—   β•šβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•
β–ˆβ–ˆβ•”β•β•β•  β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•  β–ˆβ–ˆβ•‘         β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•   β–ˆβ–ˆβ•”β–ˆβ–ˆβ•— β–ˆβ–ˆβ•”β•β•β•  β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ•‘     β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—    β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β• β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘
β•šβ•β•      β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β•    β•šβ•β•β•šβ•β•  β•šβ•β•β•β•β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β•β•β•šβ•β•  β•šβ•β•β•šβ•β•β•β•β•β•β•β•šβ•β•  β•šβ•β•

An easy-to-use, flexible indexing service built to go fast. πŸš—πŸ’¨

----

Read the Docs:
- Fuel Indexer: https://github.com/FuelLabs/fuel-indexer
- Fuel Indexer Book: https://fuellabs.github.io/fuel-indexer/latest
- Sway Book: https://fuellabs.github.io/sway/latest
- Rust SDK Book: https://fuellabs.github.io/fuels-rs/latest

Join the Community:
- Follow us @SwayLang: https://twitter.com/fuellabs_
- Ask questions in dev-chat on Discord: https://discord.com/invite/xfpK4Pe

Report Bugs:
- Fuel Indexer Issues: https://github.com/FuelLabs/fuel-indexer/issues/new

Take a quick tour.
`forc index check`
    List indexer components.
`forc index new`
    Create a new indexer.
`forc index init`
    Create a new indexer in an existing directory.
`forc index start`
    Start a local indexer service.
`forc index build`
    Build your indexer.
`forc index deploy`
    Deploy your indexer.
`forc index remove`
    Stop a running indexer.
`forc index revert`
    Revert a deployed indexer.
`forc index auth`
    Authenticate against an indexer service.

IMPORTANT: If you want more details on how this indexer works, check out our block explorer indexer example.

2.4 Deploying our indexer

At this point, we have a brand new indexer that will index some blocks and transactions. With our database and Fuel indexer service up and running, all that's left is to build and deploy the indexer in order to see it in action.

forc index deploy

If all goes well, you should see the following:

β–Ήβ–Ήβ–Ήβ–Ήβ–Ή ⏰ Building...                         Finished dev [unoptimized + debuginfo] target(s) in 0.96s
β–ͺβ–ͺβ–ͺβ–ͺβ–ͺ βœ… Build succeeded.                    Deploying indexer
β–ͺβ–ͺβ–ͺβ–ͺβ–ͺ βœ… Successfully deployed indexer.

3. Querying for data

With our indexer deployed, we should be able to query for newly indexed data after a few seconds.

Below is a simple GraphQL query that returns a few fields from all transactions that we've indexed.

curl -X POST http://localhost:29987/api/graph/my_project/hello_indexer \
   -H 'content-type: application/json' \
   -d '{"query": "query { tx { id hash block }}", "params": "b"}' \
| json_pp
[
   {
      "block" : 7017844286925529648,
      "hash" : "fb93ce9519866676813584eca79afe2d98466b3e2c8b787503b76b0b4718a565",
      "id" : 7292230935510476086
   },
   {
      "block" : 3473793069188998756,
      "hash" : "5ea2577727aaadc331d5ae1ffcbc11ec4c2ba503410f8edfb22fc0a72a1d01eb",
      "id" : 4136050720295695667
   },
   {
      "block" : 7221293542007912803,
      "hash" : "d2f638c26a313c681d75db2edfbc8081dbf5ecced87a41ec4199d221251b0578",
      "id" : 4049687577184449589
   }
]

Finished! πŸ₯³

Congrats, you just created, built, and deployed your first indexer on the world's fastest execution layer. For more detailed info on how the Fuel indexer service works, make sure you read the book.

Starting the Fuel Indexer

Using CLI options

Standalone binary for the fuel indexer service.

USAGE:
    fuel-indexer run [OPTIONS]

OPTIONS:
        --auth-enabled
            Require users to authenticate for some operations.

        --auth-strategy <AUTH_STRATEGY>
            Authentication scheme used.

    -c, --config <FILE>
            Indexer service config file.

        --database <DATABASE>
            Database type. [default: postgres] [possible values: postgres]

        --embedded-database
            Automatically create and start database using provided options or defaults.

        --fuel-node-host <FUEL_NODE_HOST>
            Host of the running Fuel node. [default: localhost]

        --fuel-node-port <FUEL_NODE_PORT>
            Listening port of the running Fuel node. [default: 4000]

        --graphql-api-host <GRAPHQL_API_HOST>
            GraphQL API host. [default: localhost]

        --graphql-api-port <GRAPHQL_API_PORT>
            GraphQL API port. [default: 29987]

    -h, --help
            Print help information

        --jwt-expiry <JWT_EXPIRY>
            Amount of time (seconds) before expiring token (if JWT scheme is specified).

        --jwt-issuer <JWT_ISSUER>
            Issuer of JWT claims (if JWT scheme is specified).

        --jwt-secret <JWT_SECRET>
            Secret used for JWT scheme (if JWT scheme is specified).

        --local-fuel-node
            Start a local Fuel node.

        --log-level <LOG_LEVEL>
            Log level passed to the Fuel Indexer service. [default: info] [possible values: info,
            debug, error, warn]

    -m, --manifest <FILE>
            Indexer config file.

        --max-body-size <MAX_BODY_SIZE>
            Max body size for GraphQL API requests. [default: 5242880]

        --metrics
            Use Prometheus metrics reporting.

        --postgres-database <POSTGRES_DATABASE>
            Postgres database.

        --postgres-host <POSTGRES_HOST>
            Postgres host.

        --postgres-password <POSTGRES_PASSWORD>
            Postgres password.

        --postgres-port <POSTGRES_PORT>
            Postgres port.

        --postgres-user <POSTGRES_USER>
            Postgres username.

        --run-migrations
            Run database migrations before starting service.

        --stop-idle-indexers
            Prevent indexers from running without handling any blocks.

    -v, --verbose
            Enable verbose logging.

    -V, --version
            Print version information

Using a configuration file

# # The following is an example Fuel indexer configuration file.
# #
# # This configuration spec is intended to be used for a single instance
# # of a Fuel indexer node or service.
# #
# # For more info on how the Fuel indexer works, read the book: https://fuellabs.github.io/fuel-indexer/master/
# # or specifically read up on these configuration options: https://fuellabs.github.io/fuel-indexer/master/getting-started/configuration.html

# # Use Prometheus metrics reporting.
# metrics: true

# # Prevent indexers from running without handling any blocks.
# stop_idle_indexers: true

# # Run database migrations before starting service.
# run_migrations: true
#
# # Enable verbose logging.
# verbose: false

# # Start a local Fuel node.
# local_fuel_node: false

# # ***********************
# # Fuel Node configuration
# # ************************

# fuel_node:

#   # Host of the running Fuel node.
#   host: localhost

#   # Listening port of the running Fuel node.
#   port: 4000

# # *************************
# # GraphQL API configuration
# # *************************

# graphql_api:
#   # GraphQL API host.
#   host: localhost

#   # GraphQL API port.
#   port: 29987

#   # Max body size for GraphQL API requests.
#   max_body_size: "5242880"

# # ******************************
# # Database configuration options
# # ******************************

# database:

#   postgres:
#     # Postgres username.
#     user: postgres

#     # Postgres database.
#     database: postgres

#     # Postgres password.
#     password: password

#     # Postgres host.
#     host: localhost

#     # Postgres port.
#     port: 5432

# # ******************************
# # Indexer service authentication
# # ******************************

# authentication:
#   # Require users to authenticate for some operations.
#   enabled: false

#   # Which authentication scheme to use.
#   strategy: jwt

#   # Secret used if JWT authentication is specified.
#   jwt_secret: abcdefghijklmnopqrstuvwxyz1234567890*

#   # JWT issuer if JWT authentication is specified.
#   # jwt_issuer: FuelLabs

#   # Amount of time (seconds) before expiring token if JWT authentication is specified.
#   # jwt_expiry: 2592000

Hello World

Below is a simple "Hello World" Sway contract that we want to index. This contract has a function called new_greeting that logs a Greeting and a Person.

contract;

use std::logging::log;

struct Person {
    name: str[32],
}

struct Greeting {
    id: u64,
    greeting: str[32],
    person: Person,
}

abi Greet {
    fn new_greeting(id: u64, greeting: str[32], person_name: str[32]);
}

impl Greet for Contract {
    fn new_greeting(id: u64, greeting: str[32], person_name: str[32]) {
        log(Greeting{ id, greeting, person: Person{ name: person_name }});
    }
}

We can define our schema like this in the schema file:

schema {
    query: QueryRoot
}

type QueryRoot {
    greeting: Greeting
    salutation: Salutation
}

# Calling this `Greeter` so as to not clash with `Person` in the contract
type Greeter {
    id: ID!
    name: Charfield!
    first_seen: UInt8!
    last_seen: UInt8!
    visits: Blob!
}

# Calling this `Salutation` so as to not clash with `Greeting` in the contract
type Salutation {
    id: ID!
    message_hash: Bytes32!
    message: Charfield!
    greeter: Greeter!
    first_seen: UInt8!
    last_seen: UInt8!
}

Now that our schema is defined, here is how we can implement the WASM module in our lib.rs file:

//! A "Hello World" type of program for the Fuel Indexer service.
//!
//! Build this example's WASM module using the following command. Note that a
//! wasm32-unknown-unknown target will be required.
//!
//! ```bash
//! cargo build -p hello-indexer --release --target wasm32-unknown-unknown
//! ```
//!
//! Start a local test Fuel node
//!
//! ```bash
//! cargo run --bin fuel-node
//! ```
//!
//! With your database backend set up, now start your fuel-indexer binary using the
//! assets from this example:
//!
//! ```bash
//! cargo run --bin fuel-indexer -- run --manifest examples/hello-world/hello-indexer/hello_indexer.manifest.yaml
//! ```
//!
//! Now trigger an event.
//!
//! ```bash
//! cargo run --bin hello-bin
//! ```

extern crate alloc;
use fuel_indexer_macros::indexer;
use fuel_indexer_plugin::prelude::*;

#[indexer(manifest = "examples/hello-world/hello-indexer/hello_indexer.manifest.yaml")]
mod hello_world_indexer {

    fn index_logged_greeting(event: Greeting, block: BlockData) {
        // Since all events require a u64 ID field, let's derive an ID using the
        // name of the person in the Greeting
        let greeter_name = trim_sized_ascii_string(&event.person.name);
        let greeting = trim_sized_ascii_string(&event.greeting);
        let greeter_id = first8_bytes_to_u64(&greeter_name);

        // Here we 'get or create' a Salutation based on the ID of the event
        // emitted in the LogData receipt of our smart contract
        let salutation = match Salutation::load(event.id) {
            Some(mut g) => {
                // If we found an event, let's use block height as a proxy for time
                g.last_seen = block.height;
                g
            }
            None => {
                // If we did not already have this Salutation stored in the database, create it.
                // Here we show how you can use the Charfield type to store strings with length <= 255
                let message = format!("{} πŸ‘‹, my name is {}", &greeting, &greeter_name);

                Salutation {
                    id: event.id,
                    message_hash: first32_bytes_to_bytes32(&message),
                    message,
                    greeter: greeter_id,
                    first_seen: block.height,
                    last_seen: block.height,
                }
            }
        };

        // Here we do the same with Greeter that we did for Salutation -- if we have an event
        // already saved in the database, load it and update it. If we do not have this Greeter
        // in the database then create one
        let greeter = match Greeter::load(greeter_id) {
            Some(mut g) => {
                g.last_seen = block.height;
                g
            }
            None => Greeter {
                id: greeter_id,
                first_seen: block.height,
                name: greeter_name,
                last_seen: block.height,

                // Here we show an example of an arbitrarily sized Blob type. These Blob types
                // support data up to 10485760 bytes in length
                visits: vec![1u8, 2, 3, 4, 5, 6, 7, 8],
            },
        };

        // Both entity saves will occur in the same transaction
        salutation.save();
        greeter.save();
    }
}

Block Explorer

Below is an example of a rudimentary block explorer backend implementation that demonstrates how to leverage basic Fuel indexer abstractions in order to build a cool dApp backend.

//! A rudimentary block explorer implementation demonstrating how blocks, transactions,
//! contracts, and accounts can be persisted into the database.
//!
//! Build this example's WASM module using the following command. Note that a
//! wasm32-unknown-unknown target will be required.
//!
//! ```bash
//! cargo build -p explorer-indexer --release --target wasm32-unknown-unknown
//! ```
//!
//! With your database backend set up, now start your fuel-indexer binary using the
//! assets from this example:
//!
//! ```bash
//! cargo run --bin fuel-indexer -- run --manifest examples/block-explorer/explorer-indexer/explorer_indexer.manifest.yaml
//! ```

extern crate alloc;
use fuel_indexer_macros::indexer;
use fuel_indexer_plugin::prelude::*;
use std::collections::HashSet;

// We'll pass our manifest to our #[indexer] attribute. This manifest contains
// all of the relevant configuration parameters in regard to how our index will
// work. In the fuel-indexer repository, we use relative paths (starting from the
// fuel-indexer root) but if you're building an index outside of the fuel-indexer
// project you'll want to use full/absolute paths.
#[indexer(
    manifest = "examples/block-explorer/explorer-indexer/explorer_indexer.manifest.yaml"
)]
mod explorer_index {
    // When specifying args to your handler functions, you can either use types defined
    // in your ABI JSON file, or you can use native Fuel types. These native Fuel types
    // include various `Receipt`s, as well as more comprehensive data, in the form of
    // blocks `BlockData` and transactions `TransactionData`. A list of native Fuel
    // types can be found at:
    //
    //  https://github.com/FuelLabs/fuel-indexer/blob/master/fuel-indexer-schema/src/types/fuel.rs#L28
    fn index_explorer_data(block_data: BlockData) {
        let mut block_gas_limit = 0;

        // Convert the deserialized block `BlockData` struct that we get from our Fuel node, into
        // a block entity `Block` that we can persist to the database. The `Block` type below is
        // defined in our schema/explorer.graphql and represents the type that we will
        // save to our database.
        let producer = block_data.producer.unwrap_or(Bytes32::zeroed());

        let block = Block {
            id: first8_bytes_to_u64(block_data.id),
            height: block_data.height,
            producer,
            hash: block_data.id,
            timestamp: block_data.time,
            gas_limit: block_gas_limit,
        };

        // Now that we've created the object for the database, let's save it.
        block.save();

        // Keep track of some Receipt data involved in this transaction.
        let mut accounts = HashSet::new();
        let mut contracts = HashSet::new();

        for tx in block_data.transactions.iter() {
            let mut tx_amount = 0;
            let mut tokens_transferred = Vec::new();

            // `Transaction::Script`, `Transaction::Create`, and `Transaction::Mint`
            // are unused but demonstrate properties like gas, inputs,
            // outputs, script_data, and other pieces of metadata. You can access
            // properties that have the corresponding transaction `Field` traits
            // implemented; examples below.
            match &tx.transaction {
                #[allow(unused)]
                Transaction::Script(t) => {
                    Logger::info("Inside a script transaction. (>^β€Ώ^)>");

                    let gas_limit = t.gas_limit();
                    let gas_price = t.gas_price();
                    let maturity = t.maturity();
                    let script = t.script();
                    let script_data = t.script_data();
                    let receipts_root = t.receipts_root();
                    let inputs = t.inputs();
                    let outputs = t.outputs();
                    let witnesses = t.witnesses();

                    let json = &tx.transaction.to_json();
                    block_gas_limit += gas_limit;
                }
                #[allow(unused)]
                Transaction::Create(t) => {
                    Logger::info("Inside a create transaction. <(^.^)>");

                    let gas_limit = t.gas_limit();
                    let gas_price = t.gas_price();
                    let maturity = t.maturity();
                    let salt = t.salt();
                    let bytecode_length = t.bytecode_length();
                    let bytecode_witness_index = t.bytecode_witness_index();
                    let inputs = t.inputs();
                    let outputs = t.outputs();
                    let witnesses = t.witnesses();
                    let storage_slots = t.storage_slots();
                    block_gas_limit += gas_limit;
                }
                #[allow(unused)]
                Transaction::Mint(t) => {
                    Logger::info("Inside a mint transaction. <(^β€Ώ^<)");

                    let tx_pointer = t.tx_pointer();
                    let outputs = t.outputs();
                }
            }

            for receipt in &tx.receipts {
                // You can handle each receipt in a transaction `TransactionData` as you like.
                //
                // Below demonstrates how you can use parts of a receipt `Receipt` in order
                // to persist entities defined in your GraphQL schema, to the database.
                match receipt {
                    #[allow(unused)]
                    Receipt::Call { id, .. } => {
                        contracts.insert(Contract {
                            id: *id,
                            last_seen: 0,
                        });
                    }
                    #[allow(unused)]
                    Receipt::ReturnData { id, .. } => {
                        contracts.insert(Contract {
                            id: *id,
                            last_seen: 0,
                        });
                    }
                    #[allow(unused)]
                    Receipt::Transfer {
                        id,
                        to,
                        asset_id,
                        amount,
                        ..
                    } => {
                        contracts.insert(Contract {
                            id: *id,
                            last_seen: 0,
                        });

                        let transfer = Transfer {
                            id: first8_bytes_to_u64(bytes32_from_inputs(
                                id,
                                [id.to_vec(), to.to_vec(), asset_id.to_vec()].concat(),
                            )),
                            contract_id: *id,
                            receiver: *to,
                            amount: *amount,
                            asset_id: *asset_id,
                        };

                        transfer.save();
                        tokens_transferred.push(asset_id.to_string());
                    }
                    #[allow(unused)]
                    Receipt::TransferOut {
                        id,
                        to,
                        amount,
                        asset_id,
                        ..
                    } => {
                        contracts.insert(Contract {
                            id: *id,
                            last_seen: 0,
                        });

                        accounts.insert(Account {
                            id: *to,
                            last_seen: 0,
                        });

                        tx_amount += amount;
                        let transfer_out = TransferOut {
                            id: first8_bytes_to_u64(bytes32_from_inputs(
                                id,
                                [id.to_vec(), to.to_vec(), asset_id.to_vec()].concat(),
                            )),
                            contract_id: *id,
                            receiver: *to,
                            amount: *amount,
                            asset_id: *asset_id,
                        };

                        transfer_out.save();
                    }
                    #[allow(unused)]
                    Receipt::Log { id, rb, .. } => {
                        contracts.insert(Contract {
                            id: *id,
                            last_seen: 0,
                        });
                        let log = Log {
                            id: first8_bytes_to_u64(bytes32_from_inputs(
                                id,
                                u64::to_le_bytes(*rb).to_vec(),
                            )),
                            contract_id: *id,
                            rb: *rb,
                        };

                        log.save();
                    }
                    #[allow(unused)]
                    Receipt::LogData { id, .. } => {
                        contracts.insert(Contract {
                            id: *id,
                            last_seen: 0,
                        });

                        Logger::info("LogData types are unused in this example. (>'')>");
                    }
                    #[allow(unused)]
                    Receipt::ScriptResult { result, gas_used } => {
                        let result: u64 = match result {
                            ScriptExecutionResult::Success => 1,
                            ScriptExecutionResult::Revert => 2,
                            ScriptExecutionResult::Panic => 3,
                            ScriptExecutionResult::GenericFailure(_) => 4,
                        };
                        let r = ScriptResult {
                            id: first8_bytes_to_u64(bytes32_from_inputs(
                                &[0u8; 32],
                                u64::to_be_bytes(result).to_vec(),
                            )),
                            result,
                            gas_used: *gas_used,
                        };
                        r.save();
                    }
                    #[allow(unused)]
                    Receipt::MessageOut {
                        sender,
                        recipient,
                        amount,
                        ..
                    } => {
                        tx_amount += amount;
                        accounts.insert(Account {
                            id: *sender,
                            last_seen: 0,
                        });
                        accounts.insert(Account {
                            id: *recipient,
                            last_seen: 0,
                        });

                        Logger::info("LogData types are unused in this example. (>'')>");
                    }
                    _ => {
                        Logger::info("This type is not handled yet.");
                    }
                }
            }

            // Persist the transaction to the database via the `Tx` object defined in the GraphQL schema.
            let tx_entity = Tx {
                block: block.id,
                hash: tx.id,
                timestamp: block.timestamp,
                id: first8_bytes_to_u64(tx.id),
                value: tx_amount,
                status: tx.status.clone().into(),
                tokens_transferred: Json(
                    serde_json::to_value(tokens_transferred)
                        .unwrap()
                        .to_string(),
                ),
            };

            tx_entity.save();
        }

        // Save all of our accounts
        for account in accounts.iter() {
            account.save();
        }

        // Save all of our contracts
        for contract in contracts.iter() {
            contract.save();
        }
    }
}

Once blocks have been added to the database by the indexer, you can query for them by using a query similar to the following:

curl -X POST http://localhost:29987/api/graph/fuel_examples/explorer_indexer \
   -H 'content-type: application/json' \
   -d '{"query": "query { block { id height timestamp }}", "params": "b"}' \
| json_pp
[
   {
      "height" : 1,
      "id" : "f169a30cfcbf1eebd97a07b19de98e4b38a4367b03d1819943be41744339d38a",
      "timestamp" : 1668710162
   },
   {
      "height" : 2,
      "id" : "a8c554758f78fe73054405d38099f5ad21a90c05206b5c6137424985c8fd10c7",
      "timestamp" : 1668710163
   },
   {
      "height" : 3,
      "id" : "850ab156ddd9ac9502768f779936710fd3d792e9ea79bc0e4082de96450b5174",
      "timestamp" : 1668710312
   },
   {
      "height" : 4,
      "id" : "19e19807c6988164b916a6877fe049d403d55a07324fa883cb7fa5cdb33438e2",
      "timestamp" : 1668710313
   },
   {
      "height" : 5,
      "id" : "363af43cfd2a6d8af166ee46c15276b24b130fc6a89ce7b3c8737d29d6d0e1bb",
      "timestamp" : 1668710314
   }
]

A Fuel Indexer Project

Use Cases

The Fuel indexer project can currently be used in a number of different ways:

  • as tooling to compile arbitrary indexers
  • as a standalone service
  • as a part of a Fuel project, alongside other components of the Fuel ecosystem (e.g. Sway)

We'll describe these three different implementations below.

As tooling for compiling indexers

The Fuel indexer provides functionality to make it easy to build and compile arbitrary indexers by using forc index. For info on how to use indexer tooling to compile arbitrary indexers, check out our Quickstart; additionally, you can read through our examples for a more in-depth exploration of how to compile indexers.

As a standalone service

You can also start the Fuel indexer as a standalone binary that connects to a Fuel node to monitor the Fuel blockchain for new blocks and transactions. To do so, run the requisite database migrations, adjust the configuration to connect to a Fuel node, and start the service.

As part of a Fuel project

Finally, you can run the Fuel indexer as part of a project that uses other components of the Fuel ecosystem, such as Sway. The convention for a Fuel project layout including an indexer is as follows:

.
β”œβ”€β”€ contracts
β”‚Β Β  └── hello-contract
β”‚Β Β      β”œβ”€β”€ Forc.toml
β”‚Β Β      └── src
β”‚Β Β          └── main.sw
β”œβ”€β”€ frontend
β”‚Β Β  └── index.html
└── indexer
    └── hello-indexer
        β”œβ”€β”€ Cargo.toml
        β”œβ”€β”€ hello_indexer.manifest.yaml
        β”œβ”€β”€ schema
        β”‚Β Β  └── hello_indexer.schema.graphql
        └── src
            └── lib.rs

An Indexer Project at a Glance

Every Fuel indexer project requires three components:

  • a Manifest describing indexer metadata
  • a Schema containing models for the data you want to index
  • a Module which houses the logic for creating and saving the aforementioned data models

Manifest

A manifest serves as the YAML configuration file for a given indexer. A proper manifest has the following structure:

namespace: fuel
identifier: index1
abi: path/to/my/contract-abi.json
contract_id: "0x39150017c9e38e5e280432d546fae345d6ce6d8fe4710162c2e3a95a6faff051"
graphql_schema: path/to/my/schema.graphql
start_block: 1564
module:
  wasm: path/to/my/wasm_module.wasm
report_metrics: true

namespace

The namespace is the topmost organizational level of an indexer. You can think of different namespaces as separate and distinct collections of indexers. A namespace is unique to a given indexer operator -- i.e., indexer operators will not be able to support more than one namespace of the same name.

identifier

The identifier field is used to (quite literally) identify the given indexer. If a namespace describes a collection of indexers, then an identifier describes a unique indexer inside that collection. As an example, if a provided namespace is "fuel" and a provided identifier is "index1", then the full identifier for the given indexer will be fuel.index1.

abi

The abi option is used to provide a link to the Sway JSON application binary interface (ABI) that is generated when you build your Sway project. This generated ABI contains all types, type IDs, logged types, and message types used in your Sway contract.

contract_id

The contract_id specifies the particular contract to which you would like an indexer to subscribe. Setting this field to an empty string will index events from any contract that is currently executing on the network.

Important: Contract IDs are unique to the content of a contract. If you are subscribing to a certain contract and then the contract itself is changed or updated, you will need to change the contract_id field of the manifest to the new ID.

graphql_schema

The graphql_schema field contains the file path pointing to the corresponding GraphQL schema for a given indexer. This schema file holds the structures of the data that will eventually reside in your database. You can read more about the format of the schema file here.

Important: The objects defined in your GraphQL schema are called 'entities'. These entities are what will eventually be stored in the database.

start_block

The start_block field indicates the block height after which you'd like your indexer to start indexing events.

module

The module field contains a file path that points to code that will be run as an executor inside of the indexer. There are two available options for modules/execution: wasm and native. Note that when specifying a wasm module, the provided path must lead to a compiled WASM binary.

Important: At this time, wasm is the preferred method of execution.

report_metrics

The report_metrics field indicates whether to report Prometheus metrics to the Fuel backend.

resumable

The resumable field contains a boolean value and specifies whether the indexer should synchronise with the latest block if it has fallen out of sync.

GraphQL Schema

The GraphQL schema is a required component of the Fuel indexer. When data is indexed into the database, the actual values that are persisted to the database will be values created using the data structures defined in the schema.

In its most basic form, a Fuel indexer GraphQL schema should have a schema definition that contains a defined query root. The rest of the implementation is up to you. Here's an example of a well-formed schema:

schema {
    query: QueryRoot
}

type QueryRoot {
    thing1: FirstThing
    thing2: SecondThing
}

type FirstThing {
    id: ID!
    value: UInt8!
}

type SecondThing {
    id: ID!
    optional_value: UInt8
    timestamp: Timestamp!
}

The types you see above (e.g., ID, UInt8, etc) are Fuel abstractions that were created to more seamlessly integrate with the Fuel VM and are not native to GraphQL. A deeper explanation on these types can be found in the Types section.

Important: It is up to developers to manage their own unique IDs for each type, meaning that a data structure's ID field needs to be manually generated prior to saving it to the database. This generation can be as simple or complex as you want in order to fit your particular situation; the only requirement is that you generate the ID yourself before saving the entity. Examples can be found in the Block Explorer and Hello World sections.
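
For instance, a handler might derive its u64 ID from the first eight bytes of a 32-byte hash before saving. The sketch below is only illustrative: SomeEvent is a hypothetical logged type, FirstThing is the entity from the example schema above, and first8_bytes_to_u64 is the helper used in the examples elsewhere in this book.

fn index_first_thing(event: SomeEvent, block: BlockData) {
    // Derive a u64 ID from a 32-byte hash. Any scheme works, as long as the
    // resulting ID is unique enough for your use case.
    let id = first8_bytes_to_u64(event.hash);

    let thing = FirstThing {
        id,
        value: block.height,
    };

    thing.save();
}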

Required and Optional Fields

Required fields are denoted with a ! following their type; for example, the value field of the FirstThing type is a UInt8 and must be present for the indexer to successfully persist the entity. If a certain piece of information is essential to your use case, you should mark that field as required.

In contrast, optional fields are not required to be present for the indexer to persist the entity in storage. You can denote an optional field by using just the type name; for example, the optional_value field of the SecondThing type is optional, and should be a UInt8 if present. If it's possible that a value might not always exist in the data you wish to index, consider making the corresponding field optional. In your indexer code, you will need to use Rust's Option type when assigning a value to an optional field; values that are present should be wrapped in Some(..), while absent values should be assigned None.
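
As a hedged sketch using the SecondThing type above (the handler signature and event type are hypothetical), present values are wrapped in Some(..) and absent values are assigned None:

fn index_second_thing(event: SomeEvent, block: BlockData) {
    let thing = SecondThing {
        id: event.id,
        // A value that is present must be wrapped in Some(..).
        optional_value: Some(42),
        // Required fields are assigned directly.
        timestamp: block.time,
    };

    // If no value were available, we would write `optional_value: None` instead.
    thing.save();
}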

Important: The ID field is always required. An indexer will return an error if an optional value is used for the ID field.

WASM Modules

WebAssembly (WASM) modules are compiled binaries that are registered into a Fuel indexer at runtime. The WASM bytes are read in by the indexer, and executors are created that make blocking calls into the WASM runtime.

The WASM module is generated based on your manifest, schema, and your lib.rs file.

lib.rs

You can implement the logic for handling events and saving data to the database in your lib.rs file in the src folder.

Here, you can define which functions handle different events based on the function parameters. If you add a function parameter of a certain type, the function will handle all blocks, transactions, or transaction receipts that contain a matching type.

We can look at the function below as an example:

fn index_logged_greeting(greeter: Greeting) {
    // function logic goes here
}

All transactions that have a receipt that contains data with a type of Greeting will be handled by the function.

You can learn more about what data can be indexed in the What Can I Index section.

To save an instance of a schema type in your database, you can call the save method on the instance.

instance.save();

Usage

To compile your indexer code to WASM, you'll first need to install the wasm32-unknown-unknown target platform through rustup, if you haven't done so already.

rustup target add wasm32-unknown-unknown

After that, you can compile your indexer code by navigating to its root folder and building it. An example can be found below:

cd /my/index-lib && cargo build --release --target wasm32-unknown-unknown

Notes on WASM

There are a few points that Fuel indexer users should know when using WASM:

  1. WASM modules are only used if the execution mode specified in your manifest file is wasm.

  2. Developers should be aware of what things may not work off-the-shelf in a module: file I/O, thread spawning, and anything that depends on system libraries. This is due to the technological limitations of WASM as a whole; more information can be found here.

  3. As of this writing, there is a small bug in newly built Fuel indexer WASM modules that produces a WASM runtime error due to an errant upstream dependency. For now, a quick workaround requires the use of wasm-snip to remove the errant symbols from the WASM module. More info can be found in the related script here.

  4. Users on Apple Silicon macOS systems may experience trouble when trying to build WASM modules due to its clang binary not supporting WASM targets. If encountered, you can install a binary with better support from Homebrew (brew install llvm) and instruct rustc to leverage it by setting the following environment variables:

  • AR=/opt/homebrew/opt/llvm/bin/llvm-ar
  • CC=/opt/homebrew/opt/llvm/bin/clang

What Can I Index?

You can index three main types of data from the Fuel network: blocks, transactions, and transaction receipts. You can read more about these data types below:

If you've previously built an indexer for the EVM, you may be used to only being able to index data that is emitted as an event.

However, with Fuel you can index the entire transaction, which means you can use much more than logged data, allowing you to reduce the number of logs you need in your contract.

Blocks and Transactions

You can use the BlockData and TransactionData data structures to index important information about the Fuel network for your dApp.

BlockData

pub struct BlockData {
    pub height: u64,
    pub id: Bytes32,
    pub producer: Option<Bytes32>,
    pub time: i64,
    pub transactions: Vec<TransactionData>,
}

The BlockData struct is how blocks are represented in the Fuel indexer. It contains metadata such as the ID, height, and time, as well as a list of the transactions it contains (represented by TransactionData). It also contains the public key hash of the block producer, if present.
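
As a hedged sketch (the BlockSummary entity is hypothetical and would be defined in your own GraphQL schema), a handler that receives every block might look like this:

fn index_block(block_data: BlockData) {
    let summary = BlockSummary {
        // Derive a u64 primary key from the 32-byte block ID.
        id: first8_bytes_to_u64(block_data.id),
        hash: block_data.id,
        height: block_data.height,
        timestamp: block_data.time,
        tx_count: block_data.transactions.len() as u64,
    };

    summary.save();
}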

TransactionData

pub struct TransactionData {
    pub transaction: Transaction,
    pub status: TransactionStatus,
    pub receipts: Vec<Receipt>,
    pub id: TxId,
}

The TransactionData struct contains important information about a transaction in the Fuel network. The id field is the transaction hash, which is a 32-byte string. The receipts field contains a list of Receipts, which are generated by a Fuel node during the execution of a Sway smart contract; you can find more information in the Receipts section.
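
As a rough sketch (the TxSummary entity is hypothetical), a handler can take TransactionData directly and inspect its hash and receipts:

fn index_transaction(tx: TransactionData) {
    // `tx.id` is the 32-byte transaction hash; `tx.receipts` holds every
    // receipt produced while the transaction executed.
    let summary = TxSummary {
        id: first8_bytes_to_u64(tx.id),
        receipt_count: tx.receipts.len() as u64,
    };

    summary.save();
}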

Transaction

pub enum Transaction {
    Script(Script),
    Create(Create),
    Mint(Mint),
}

Transaction refers to the Fuel transaction entity and can be one of three distinct types: Script, Create, or Mint. Explaining the differences between each of the types is out of scope for the Fuel indexer; however, you can find information about the Transaction type in the Fuel specifications.

enum TransactionType : uint8 {
    Script = 0,
    Create = 1,
    Mint = 2,
}
| name | type                                                             | description       |
|------|------------------------------------------------------------------|-------------------|
| type | TransactionType                                                  | Transaction type. |
| data | One of TransactionScript, TransactionCreate, or TransactionMint | Transaction data. |

Transaction is invalid if:

  • type > TransactionType.Create
  • gasLimit > MAX_GAS_PER_TX
  • blockheight() < maturity
  • inputsCount > MAX_INPUTS
  • outputsCount > MAX_OUTPUTS
  • witnessesCount > MAX_WITNESSES
  • No inputs are of type InputType.Coin or InputType.Message
  • More than one output is of type OutputType.Change for any asset ID in the input set
  • Any output is of type OutputType.Change for any asset ID not in the input set
  • More than one input of type InputType.Coin for any Coin ID in the input set
  • More than one input of type InputType.Contract for any Contract ID in the input set
  • More than one input of type InputType.Message for any Message ID in the input set

When serializing a transaction, fields are serialized as follows (with inner structs serialized recursively):

  1. uint8, uint16, uint32, uint64: big-endian right-aligned to 8 bytes.
  2. byte[32]: as-is.
  3. byte[]: as-is, with padding zeroes aligned to 8 bytes.

When deserializing a transaction, the reverse is done. If there are insufficient bytes or too many bytes, the transaction is invalid.
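
As a rough illustration of rules 1 and 3 (these helper functions are hypothetical and not part of any Fuel crate):

fn serialize_u16(value: u16) -> [u8; 8] {
    // Rule 1: big-endian, right-aligned to 8 bytes.
    let mut out = [0u8; 8];
    out[6..].copy_from_slice(&value.to_be_bytes());
    out
}

fn serialize_bytes(data: &[u8]) -> Vec<u8> {
    // Rule 3: as-is, zero-padded so the total length is a multiple of 8 bytes.
    let mut out = data.to_vec();
    out.resize(data.len() + (8 - data.len() % 8) % 8, 0);
    out
}

fn main() {
    assert_eq!(serialize_u16(1), [0, 0, 0, 0, 0, 0, 0, 1]);
    assert_eq!(serialize_bytes(&[0xAA, 0xBB, 0xCC]).len(), 8);
}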

TransactionScript

enum ReceiptType : uint8 {
    Call = 0,
    Return = 1,
    ReturnData = 2,
    Panic = 3,
    Revert = 4,
    Log = 5,
    LogData = 6,
    Transfer = 7,
    TransferOut = 8,
    ScriptResult = 9,
    MessageOut = 10,
}
| name             | type      | description                              |
|------------------|-----------|------------------------------------------|
| gasPrice         | uint64    | Gas price for transaction.               |
| gasLimit         | uint64    | Gas limit for transaction.               |
| maturity         | uint32    | Block until which tx cannot be included. |
| scriptLength     | uint16    | Script length, in instructions.          |
| scriptDataLength | uint16    | Length of script input data, in bytes.   |
| inputsCount      | uint8     | Number of inputs.                        |
| outputsCount     | uint8     | Number of outputs.                       |
| witnessesCount   | uint8     | Number of witnesses.                     |
| receiptsRoot     | byte[32]  | Merkle root of receipts.                 |
| script           | byte[]    | Script to execute.                       |
| scriptData       | byte[]    | Script input data (parameters).          |
| inputs           | Input[]   | List of inputs.                          |
| outputs          | Output[]  | List of outputs.                         |
| witnesses        | Witness[] | List of witnesses.                       |

Given helper len() that returns the number of bytes of a field.

Transaction is invalid if:

  • Any output is of type OutputType.ContractCreated
  • scriptLength > MAX_SCRIPT_LENGTH
  • scriptDataLength > MAX_SCRIPT_DATA_LENGTH
  • scriptLength * 4 != len(script)
  • scriptDataLength != len(scriptData)

IMPORTANT:

When signing a transaction, receiptsRoot is set to zero.

When verifying a predicate, receiptsRoot is initialized to zero.

When executing a script, receiptsRoot is initialized to zero.

The receipts root receiptsRoot is the root of the binary Merkle tree of receipts. If there are no receipts, its value is set to the root of the empty tree, i.e. 0xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.
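
That constant is the SHA-256 digest of the empty byte string, which you can verify with a short program (this sketch assumes the sha2 crate):

use sha2::{Digest, Sha256};

fn main() {
    // Hashing zero bytes yields the empty-tree root quoted above.
    let digest = Sha256::digest(b"");
    let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    assert_eq!(
        hex,
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    );
}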

TransactionCreate

| name                 | type                   | description                                       |
|----------------------|------------------------|---------------------------------------------------|
| gasPrice             | uint64                 | Gas price for transaction.                        |
| gasLimit             | uint64                 | Gas limit for transaction.                        |
| maturity             | uint32                 | Block until which tx cannot be included.          |
| bytecodeLength       | uint16                 | Contract bytecode length, in instructions.        |
| bytecodeWitnessIndex | uint8                  | Witness index of contract bytecode to create.     |
| storageSlotsCount    | uint16                 | Number of storage slots to initialize.            |
| inputsCount          | uint8                  | Number of inputs.                                 |
| outputsCount         | uint8                  | Number of outputs.                                |
| witnessesCount       | uint8                  | Number of witnesses.                              |
| salt                 | byte[32]               | Salt.                                             |
| storageSlots         | (byte[32], byte[32])[] | List of storage slots to initialize (key, value). |
| inputs               | Input[]                | List of inputs.                                   |
| outputs              | Output[]               | List of outputs.                                  |
| witnesses            | Witness[]              | List of witnesses.                                |

Transaction is invalid if:

  • Any input is of type InputType.Contract
  • Any output is of type OutputType.Contract or OutputType.Variable
  • More than one output is of type OutputType.Change with asset_id of zero
  • Any output is of type OutputType.Change with non-zero asset_id
  • It does not have exactly one output of type OutputType.ContractCreated
  • bytecodeLength * 4 > CONTRACT_MAX_SIZE
  • tx.data.witnesses[bytecodeWitnessIndex].dataLength != bytecodeLength * 4
  • bytecodeWitnessIndex >= tx.witnessesCount
  • The keys of storageSlots are not in ascending lexicographic order
  • The computed contract ID (see below) is not equal to the contractID of the one OutputType.ContractCreated output
  • storageSlotsCount > MAX_STORAGE_SLOTS
  • The Sparse Merkle tree root of storageSlots is not equal to the stateRoot of the one OutputType.ContractCreated output

Creates a contract with contract ID as computed here.

TransactionMint

The transaction is created by the block producer and is not signed. Since it is not usable outside of block creation or execution, all fields must be fully set upon creation without any zeroing.

| name         | type      | description                                        |
|--------------|-----------|----------------------------------------------------|
| txPointer    | TXPointer | The location of the Mint transaction in the block. |
| outputsCount | uint8     | Number of outputs.                                 |
| outputs      | Output[]  | List of outputs.                                   |

Transaction is invalid if:

  • Any output is not of type OutputType.Coin
  • Any two outputs have the same asset_id
  • txPointer is zero or doesn't match the block.

TransactionStatus

pub enum TransactionStatus {
    Failure {
        block_id: String,
        time: DateTime<Utc>,
        reason: String,
    },
    SqueezedOut {
        reason: String,
    },
    Submitted {
        submitted_at: DateTime<Utc>,
    },
    Success {
        block_id: String,
        time: DateTime<Utc>,
    },
}

TransactionStatus refers to the status of a Transaction in the Fuel network.
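
As a rough sketch, a handler can match on the status of each transaction it receives (the Logger calls follow the pattern used in the block explorer example and assume it accepts any string slice; the messages are illustrative):

fn index_tx_status(tx: TransactionData) {
    match tx.status {
        TransactionStatus::Success { block_id, .. } => {
            Logger::info(&format!("transaction succeeded in block {}", block_id));
        }
        TransactionStatus::Failure { reason, .. } => {
            Logger::info(&format!("transaction failed: {}", reason));
        }
        TransactionStatus::Submitted { .. } | TransactionStatus::SqueezedOut { .. } => {
            // Not executed yet (or dropped from the pool); nothing to index here.
        }
    }
}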

Receipts

Every transaction in the Fuel network contains a list of receipts with information about that transaction, including what contract function was called, logged data, data returned from a function, etc.

There are several types of receipts that can be attached to a transaction and indexed. You can learn more about each of these in the sections below.

Call

use fuel_types::{AssetId, ContractId};
pub struct Call {
    pub contract_id: ContractId,
    pub to: ContractId,
    pub amount: u64,
    pub asset_id: AssetId,
    pub gas: u64,
    pub fn_name: String,
}

You can handle functions that produce a Call receipt type by adding a parameter with the type abi::Call.

fn handle_log(call: abi::Call) {
  // handle all functions that produce a Call receipt
}

Log

use fuel_types::ContractId;
pub struct Log {
    pub contract_id: ContractId,
    pub ra: u64,
    pub rb: u64,
}
  • A Log receipt is generated when calling log() on a non-reference type in a Sway contract.
    • Specifically bool, u8, u16, u32, and u64.
  • The ra field contains the value being logged, while rb may contain a non-zero value representing a unique ID for the log instance.
  • Read more about Log in the Fuel protocol ABI spec

You can handle functions that produce a Log receipt type by adding a parameter with the type abi::Log.

fn handle_log(log: abi::Log) {
  // handle all functions that produce a log receipt
}

LogData

use fuel_types::ContractId;
pub struct LogData {
    pub contract_id: ContractId,
    pub data: Vec<u8>,
    pub rb: u64,
    pub len: u64,
    pub ptr: u64,
}
  • A LogData receipt is generated when calling log() on a reference type in a Sway contract; that is, any type other than the non-reference types listed above.
  • The data field will include the logged value as a hexadecimal.
    • The rb field will contain a unique ID that can be used to look up the logged data type.
  • Read more about LogData in the Fuel protocol ABI spec

You can handle functions that produce a LogData receipt type by using the logged type as a function parameter.

Note: the example below will run both when the type MyStruct is logged as well as when MyStruct is returned from a function.

fn handle_log_data(data: MyStruct) {
  // handle the logged data
}

MessageOut

use fuel_types::{MessageId, Bytes32, Address};
pub struct MessageOut {
    pub message_id: MessageId,
    pub sender: Address,
    pub recipient: Address,
    pub amount: u64,
    pub nonce: Bytes32,
    pub len: u64,
    pub digest: Bytes32,
    pub data: Vec<u8>,
}
  • A MessageOut receipt is generated as a result of the send_typed_message() Sway method in which a message is sent to a recipient address along with a certain amount of coins.
  • The data field supports data of an arbitrary type T and will be decoded by the indexer upon receipt.
  • Read more about MessageOut in the Fuel protocol ABI spec

You can handle functions that produce a MessageOut receipt type by adding a parameter with the type abi::MessageOut.

fn handle_message_out(message_out: abi::MessageOut) {
  // handle the message out
}

Panic

use fuel_types::ContractId;
pub struct Panic {
    pub contract_id: ContractId, 
    pub reason: u32, 
}
  • A Panic receipt is produced when a Sway smart contract call fails for a reason that doesn't produce a revert.
  • The reason field records the reason for the panic, which is represented by a number between 0 and 255. You can find the mapping between the values and their meanings here in the FuelVM source code.
  • Read more about Panic in the Fuel Protocol spec

You can handle functions that could produce a Panic receipt by adding a parameter with the type abi::Panic.

fn handle_panic(panic: abi::Panic) {
  // handle the panic 
}

Return

use fuel_types::ContractId;
pub struct Return {
    pub contract_id: ContractId,
    pub val: u64,
    pub pc: u64,
    pub is: u64,
}

You can handle functions that produce a Return receipt type by adding a parameter with the type abi::Return.

fn handle_return(data: abi::Return) {
  // handle the returned data
}

ReturnData

use fuel_types::ContractId;
pub struct ReturnData {
    id: ContractId,
    data: Vec<u8>,
}

You can handle functions that produce a ReturnData receipt type by using the returned type as a function parameter.

Note: the example below will run both when the type MyStruct is logged as well as when MyStruct is returned from a function.

fn handle_return_data(data: MyStruct) {
  // handle the returned data
}

Revert

use fuel_types::ContractId;
pub struct Revert {
    pub contract_id: ContractId,
    pub error_val: u64,
}
  • A Revert receipt is produced when a Sway smart contract function call fails.
  • The table below lists possible reasons for the failure and their values.
  • The error_val field records these values, enabling your indexer to identify the specific cause of the reversion.
Reason | Value
FailedRequire | 0
FailedTransferToAddress | 1
FailedSendMessage | 2
FailedAssertEq | 3
FailedAssert | 4

You can handle functions that could produce a Revert receipt by adding a parameter with the type abi::Revert.

fn handle_revert(revert: abi::Revert) {
  // handle the revert 
}
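
The handler might, for example, map error_val back to a human-readable reason. Below is a minimal sketch; the constants simply mirror the Reason/Value table above, and what you do with the resulting string is up to your indexer.

fn handle_revert(revert: abi::Revert) {
    // Values taken from the Reason/Value table above.
    let reason = match revert.error_val {
        0 => "FailedRequire",
        1 => "FailedTransferToAddress",
        2 => "FailedSendMessage",
        3 => "FailedAssertEq",
        4 => "FailedAssert",
        _ => "Unknown",
    };
    // Persist or log `reason` alongside revert.contract_id as needed (omitted here).
}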

ScriptResult

pub struct ScriptResult {
    pub result: u64,
    pub gas_used: u64,
}

You can handle functions that produce a ScriptResult receipt type by adding a parameter with the type abi::ScriptResult.

fn handle_script_result(script_result: abi::ScriptResult) {
  // handle the script result
}

Transfer

use fuel_types::{ContractId, AssetId};
pub struct Transfer {
    pub contract_id: ContractId,
    pub to: ContractId,
    pub amount: u64,
    pub asset_id: AssetId,
    pub pc: u64,
    pub is: u64,
}
  • A Transfer receipt is generated when coins are transferred to a contract as part of a Sway contract call.
  • The asset_id field contains the asset ID of the transferred coins, as the FuelVM has built-in support for working with multiple assets.
    • The pc and is fields aren't currently used for anything, but are included for completeness.
  • Read more about Transfer in the Fuel protocol ABI spec

You can handle functions that produce a Transfer receipt type by adding a parameter with the type abi::Transfer.

fn handle_transfer(transfer: abi::Transfer) {
  // handle the transfer
}

TransferOut

use fuel_types::{ContractId, AssetId, Address};
pub struct TransferOut {
    pub contract_id: ContractId,
    pub to: Address,
    pub amount: u64,
    pub asset_id: AssetId,
    pub pc: u64,
    pub is: u64,
}

You can handle functions that produce a TransferOut receipt type by adding a parameter with the type abi::TransferOut.

fn handle_transferout(transfer_out: abi::TransferOut) {
  // handle the transfer out
}

Types

Below is a mapping of GraphQL schema types to their Sway and database equivalents.

Sway Type | GraphQL Schema Type | Postgres Type
u64 | ID | bigint primary key
b256 | Address | varchar(64)
str[4] | Bytes4 | varchar(16)
str[8] | Bytes8 | varchar(64)
str[32] | Bytes32 | varchar(64)
str[32] | AssetId | varchar(64)
b256 | ContractId | varchar(64)
str[32] | Salt | varchar(64)
u32 | UInt4 | integer
u64 | UInt8 | bigint
i64 | Timestamp | timestamp
str[] | Blob | bytes
str[32] | MessageId | varchar(64)
bool | Boolean | bool
(none) | Json | json
(none) | Charfield | varchar(255)
(none) | Blob | varchar(10485760)

Example

Let's define an Event struct in a Sway contract:

struct Event {
    id: u64,
    address: Address,
    block_height: u64,
}

The corresponding GraphQL schema to mirror this Event struct would resemble:

type Event {
    id: ID!
    address: Address!
    block_height: UInt8!
}

And finally, this GraphQL schema will generate the following Postgres schema:

                                           Table "schema.event"
    Column   |     Type    | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
--------------+-------------+-----------+----------+---------+----------+-------------+--------------+-------------
 id           |    bigint   |           | not null |         | plain    |             |              |
 block_height |    bigint   |           | not null |         | plain    |             |              |
 address      | varchar(64) |           | not null |         | plain    |             |              |
 object       |    bytea    |           | not null |         | extended |             |              |
Indexes:
    "event_pkey" PRIMARY KEY, btree (id)
Access method: heap

GraphQL

The Fuel indexer uses GraphQL in order to allow users to query for indexed data. Please note that the Fuel indexer does not support the full GraphQL specification; however, we do our best to reasonably support as much as we can. In this chapter, you can find information on how to leverage our supported features to efficiently get the data you want.

GraphQL Schema

The GraphQL schema is a required component of the Fuel indexer. When data is indexed into the database, the actual values that are persisted to the database will be values created using the data structures defined in the schema.

In its most basic form, a Fuel indexer GraphQL schema should have a schema definition that contains a defined query root. The rest of the implementation is up to you. Here's an example of a well-formed schema:

schema {
    query: QueryRoot
}

type QueryRoot {
    thing1: FirstThing
    thing2: SecondThing
}

type FirstThing {
    id: ID!
    value: UInt8!
}

type SecondThing {
    id: ID!
    optional_value: UInt8
    timestamp: Timestamp!
}

The types you see above (e.g., ID, UInt8, etc) are Fuel abstractions that were created to more seamlessly integrate with the Fuel VM and are not native to GraphQL. A deeper explanation on these types can be found in the Types section.

Important: It is up to developers to manage their own unique IDs for each type, meaning that a data structure's ID field needs to be manually generated prior to saving it to the database. This generation can be as simple or complex as you want in order to fit your particular situation; the only requirement is that the developer implement their own custom generation. Examples can be found in the Block Explorer and Hello World sections.
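
For instance, one simple approach (a sketch only; the fields and hashing scheme here are illustrative, not something the indexer prescribes) is to hash a few identifying fields down to a u64:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a u64 ID from fields that uniquely identify the entity.
fn derive_id(contract_id: &str, block_height: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    contract_id.hash(&mut hasher);
    block_height.hash(&mut hasher);
    hasher.finish()
}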

Required and Optional Fields

Required fields are denoted with a ! following their type; for example, the value field of the FirstThing type is a UInt8 and is required to be present for the indexer to successfully persist the entity. If a certain piece of information is essential to your use case, then you should mark that field as required.

In contrast, optional fields are not required to be present for the indexer to persist the entity in storage. You can denote an optional field by just using the type name; for example, the optional_value field of the SecondThing type is optional, and should be a UInt8 if present. If it's possible that a value might not always exist in the data you wish to index, consider making that the corresponding field optional. In your indexer code, you will need to use the Option Rust type when assigning a value to an optional field; values that are present should be assigned after being wrapped in Some(..) while absent values should be assigned using None.
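
As a concrete sketch, the snippet below uses a hand-written struct that mirrors the SecondThing type from the example schema (the real entity type is generated for you by the indexer macros, so the exact API may differ); the point is simply how Some(..) and None are assigned:

// Illustrative stand-in for the generated SecondThing entity.
struct SecondThing {
    id: u64,
    optional_value: Option<u64>, // optional field -> Option<_> in Rust
    timestamp: i64,              // required field
}

fn build_things() -> (SecondThing, SecondThing) {
    // A value that is present is wrapped in Some(..).
    let present = SecondThing { id: 1, optional_value: Some(42), timestamp: 1_680_000_000 };
    // An absent value is assigned None.
    let absent = SecondThing { id: 2, optional_value: None, timestamp: 1_680_000_000 };
    (present, absent)
}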

Important: The ID field is always required. An indexer will return an error if an optional value is used for the ID field.

Directives

Per GraphQL: A directive is a keyword preceded by a @ character (optionally followed by a list of named arguments) which can appear after almost any form of syntax in the GraphQL query or schema languages.

As of this writing, the list of supported Fuel GraphQL schema directives includes:

  • @indexed
  • @unique
  • @join

@indexed

The @indexed directive adds a database index to the underlying column for the indicated field of that type. Generally, a database index is a data structure that allows you to quickly locate data without having to search each row in a database table.

schema {
    query: QueryRoot
}

type QueryRoot {
    book: Book
    library: Library
}

type Book {
    id: ID!
    name: Bytes8! @indexed
}

type Library {
    id: ID!
    book: Book!
}

In this example, a single BTREE INDEX constraint will be created on the book table's name column, which allows for faster lookups on that field.

Important: At the moment, database index constraint support is limited to BTREE in Postgres, and ON DELETE and ON UPDATE actions are not supported.

@unique

The @unique directive adds a UNIQUE database constraint to the underlying database column for the indicated field of that type. A constraint specifies a rule for the data in a table and can be used to limit the type of data that can be placed in the table. In the case of a column with a UNIQUE constraint, all values in the column must be different.

schema {
    query: QueryRoot
}

type QueryRoot {
    book: Book
    library: Library
}

type Book {
    id: ID!
    name: Bytes8! @unique
}

type Library {
    id: ID!
    book: Book!
}

A UNIQUE constraint will be created on the book table's name column, ensuring that no books can share the same name.

Important: When using explicit or implicit foreign keys, it is required that the referenced column in your foreign key relationship be unique. ID types are unique by default, but all other types must be explicitly marked as unique via the @unique directive.

@join

The @join directive is used to relate a field in one type to a field in another type. You can think of it as a link between two tables in your database. The field in the referenced type is called a foreign key, and it is required to be unique.

schema {
    query: QueryRoot
}

type QueryRoot {
    book: Book
    library: Library
}

type Book {
    id: ID!
    name: Bytes8! @unique
}

type Library {
    id: ID!
    book: Book! @join(on:name)
}

A foreign key constraint will be created on library.book that references book.name, which relates the Books in a Library to the underlying Book table.

GraphQL API Server

  • The fuel-indexer-api-server crate of the Fuel indexer contains a standalone GraphQL API server that acts as a queryable endpoint on top of the database.
  • Note that the main fuel-indexer binary of the indexer project also contains a queryable GraphQL API endpoint.

The fuel-indexer-api-server crate offers a standalone GraphQL API endpoint, whereas the GraphQL endpoint offered in fuel-indexer is bundled with other Fuel indexer functionality (e.g., execution, handling, data-layer construction, etc).

Usage

To run the standalone Fuel indexer GraphQL API server using a configuration file:

fuel-indexer-api-server run --config config.yaml

In the above example, config.yaml is based on the default service configuration file.

Options

USAGE:
    fuel-indexer-api-server run [OPTIONS]

OPTIONS:
        --auth-enabled
            Require users to authenticate for some operations.

        --auth-strategy <AUTH_STRATEGY>
            Authentication scheme used.

    -c, --config <CONFIG>
            API server config file.

        --database <DATABASE>
            Database type. [default: postgres] [possible values: postgres]

        --fuel-node-host <FUEL_NODE_HOST>
            Host of the running Fuel node. [default: localhost]

        --fuel-node-port <FUEL_NODE_PORT>
            Listening port of the running Fuel node. [default: 4000]

        --graphql-api-host <GRAPHQL_API_HOST>
            GraphQL API host. [default: localhost]

        --graphql-api-port <GRAPHQL_API_PORT>
            GraphQL API port. [default: 29987]

    -h, --help
            Print help information

        --jwt-expiry <JWT_EXPIRY>
            Amount of time (seconds) before expiring token (if JWT scheme is specified).

        --jwt-issuer <JWT_ISSUER>
            Issuer of JWT claims (if JWT scheme is specified).

        --jwt-secret <JWT_SECRET>
            Secret used for JWT scheme (if JWT scheme is specified).

        --max-body-size <MAX_BODY_SIZE>
            Max body size for GraphQL API requests. [default: 5242880]

        --metrics
            Use Prometheus metrics reporting.

        --postgres-database <POSTGRES_DATABASE>
            Postgres database.

        --postgres-host <POSTGRES_HOST>
            Postgres host.

        --postgres-password <POSTGRES_PASSWORD>
            Postgres password.

        --postgres-port <POSTGRES_PORT>
            Postgres port.

        --postgres-user <POSTGRES_USER>
            Postgres username.

        --run-migrations
            Run database migrations before starting service.

    -V, --version
            Print version information

    -v, --verbose
            Enable verbose logging.

Database

The Fuel indexer uses PostgreSQL as the primary database. We're open to supporting other storage solutions in the future.

In this chapter, you can find information regarding how your data should be structured for use in the Fuel indexer:

  • Foreign Keys
    • How foreign keys are handled in the Fuel indexer.
  • ⚠️ IDs
    • Explains some conventions surrounding the usage of ID types

Foreign Keys

  • The Fuel indexer service supports foreign key constraints and relationships using a combination of GraphQL schema and a database.
  • There are two types of uses for foreign keys - implicit and explicit.

IMPORTANT:

Implicit foreign keys do not require a @join directive. When using implicit foreign key references, merely add the referenced object as a field type (shown below). A lookup will automagically be done to add a foreign key constraint using this object's id field.

Note that implicit foreign key relationships only use the id field on the referenced table. If you plan to use implicit foreign keys, the object being referenced must have an id field.

In contrast, explicit foreign keys do require a @join directive. Explicit foreign key references work similarly to implicit foreign keys; however, when using explicit foreign key references, you must add a @join directive after your object type. This @join directive includes the field in your foreign object that you would like to reference (shown below).

Let's learn how to use each foreign key type by looking at some GraphQL schema examples.

Usage

Implicit foreign keys

schema {
    query: QueryRoot
}

type QueryRoot {
    book: Book
    library: Library
}

type Book {
    id: ID!
    name: Bytes8!
}

type Library {
    id: ID!
    book: Book!
}

Implicit foreign key breakdown

Given the above schema, two entities will be created: a Book entity, and a Library entity. As you can see, we add the Book entity as an attribute on the Library entity, thus conveying that we want a one-to-many or one-to-one relationship between Library and Book. This means that for a given Library, we may also fetch one or many Book entities. It also means that the column library.book will be an integer type that references book.id.
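
To make that concrete, the sketch below uses hand-written stand-in structs (not the entity types the indexer actually generates) to show that Library stores the referenced Book's u64 id rather than an embedded Book value:

// Stand-ins for illustration only.
struct Book {
    id: u64,
    name: String,
}

struct Library {
    id: u64,
    // With an implicit foreign key, this column holds book.id.
    book: u64,
}

fn link(library_id: u64, book: &Book) -> Library {
    Library { id: library_id, book: book.id }
}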

Explicit foreign keys

schema {
    query: QueryRoot
}

type QueryRoot {
    book: Book
    library: Library
}

type Book {
    id: ID!
    name: Bytes8! @unique
}

type Library {
    id: ID!
    book: Book! @join(on:name)
}

Explicit foreign key breakdown

For the most part, this works the same way as implicit foreign key usage. However, as you can see, instead of implicitly using book.id as the reference column for our Book object, we're instead explicitly specifying that we want book.name to serve as our foreign key. Also, please note that since we're using book.name in our foreign key constraint, that column is required to be unique (via the @unique directive).

ID Types

There are a few important things related to the use of IDs.

Every GraphQL type defined in your schema file is required to have an id field.

  • This field must be called id
  • The type of this id field must be a u64
    • You typically want to use the ID type for these id fields

Why must every type have an id field?

Since the Fuel Indexer uses WASM runtimes to index events, a foreign function interface (FFI) is needed to call in and out of the runtime. When these calls out of the runtime are made, a pointer is passed back to the indexer service to indicate the memory location for the id of the type/object/entity being saved.

Is this liable to change in the future?

Yes, ideally we'd like IDs to be of any type, and we plan to work towards this in the future. πŸ‘

Fuel Indexer Plugins

  • forc index
    • A Forc plugin used to interact with a Fuel Indexer service.
  • forc index postgres
    • A subcommand of the forc index plugin that allows for bootstrapping and management of an embedded Postgres database.

forc index

forc index is the recommended method for end users to interact with the Fuel indexer. After you have installed fuelup, you can run the forc index help command in your terminal to view the available commands.

forc index help
USAGE:
    forc-index <SUBCOMMAND>

OPTIONS:
    -h, --help       Print help information
    -V, --version    Print version information

SUBCOMMANDS:
    build     Build an indexer
    check     Get status checks on all indexer components
    deploy    Deploy an indexer asset bundle to a remote or locally running indexer server
    help      Print this message or the help of the given subcommand(s)
    init      Create a new indexer project in the current directory
    new       Create a new indexer project in a new directory
    remove    Stop and remove a running indexer
    revert    Revert a running indexer to its previous version
    start     Start a local indexer service

forc index init

Create a new indexer project in the current directory.

forc index init --namespace fuel
Create a new indexer project in the current directory

USAGE:
    forc-index init [OPTIONS] --namespace <NAMESPACE>

OPTIONS:
        --absolute-paths           Resolve indexer asset filepaths using absolute paths.
    -h, --help                     Print help information
        --name <NAME>              Name of indexer.
        --namespace <NAMESPACE>    Namespace to which indexer belongs.
        --native                   Initialize an indexer with native execution enabled.
    -p, --path <PATH>              Path at which to create indexer.
    -v, --verbose                  Enable verbose output.

forc index new

Create a new indexer project in a new directory.

forc index new --namespace fuel --path /home/fuel/projects
USAGE:
    forc-index new [OPTIONS] --namespace <NAMESPACE> <PATH>

ARGS:
    <PATH>    Path at which to create indexer

OPTIONS:
        --absolute-paths           Resolve indexer asset filepaths using absolute paths.
    -h, --help                     Print help information
        --name <NAME>              Name of indexer.
        --namespace <NAMESPACE>    Namespace to which indexer belongs.
        --native                   Whether to initialize an indexer with native execution enabled.
    -v, --verbose <verbose>        Enable verbose output. [default: true]

forc index check

Check to see which indexer components you have installed.

forc index check
USAGE:
    forc-index check [OPTIONS]

OPTIONS:
        --grpahql-api-port <GRPAHQL_API_PORT>
            Port at which to detect indexer service API is running. [default: 29987]

    -h, --help
            Print help information

        --url <URL>
            URL at which to find indexer service. [default: http://localhost:29987]

You can expect the command output to look something like this example in which the requisite components are installed but the indexer service is not running:

➜  forc index check

❌ Could not connect to indexer service: error sending request for url (http://localhost:29987/api/health): error trying to connect: tcp connect error: Connection refused (os error 61)

+--------+------------------------+----------------------------------------------------------------------------+
| Status |       Component        |                                  Details                                   |
+--------+------------------------+----------------------------------------------------------------------------+
|   βœ…   | fuel-indexer binary    |  Found 'fuel-indexer' at '/Users/me/.fuelup/bin/fuel-indexer'              |
+--------+------------------------+----------------------------------------------------------------------------+
|   ⛔️   | fuel-indexer service   |  Failed to detect a locally running fuel-indexer service at Port(29987).   |
+--------+------------------------+----------------------------------------------------------------------------+
|   βœ…   | psql                   |  Found 'psql' at '/usr/local/bin/psql'                                     |
+--------+------------------------+----------------------------------------------------------------------------+
|   βœ…   | fuel-core              |  Found 'fuel-core' at '/Users/me/.fuelup/bin/fuel-core'                    |
+--------+------------------------+----------------------------------------------------------------------------+
|   βœ…   | docker                 |  Found 'docker' at '/usr/local/bin/docker'                                 |
+--------+------------------------+----------------------------------------------------------------------------+
|   βœ…   | fuelup                 |  Found 'fuelup' at '/Users/me/.fuelup/bin/fuelup'                          |
+--------+------------------------+----------------------------------------------------------------------------+
|   βœ…   | wasm-snip              |  Found 'wasm-snip' at '/Users/me/.cargo/bin/wasm-snip'                     |
+--------+------------------------+----------------------------------------------------------------------------+

forc index build

Build an indexer.

forc index build --release
USAGE:
    forc-index build [OPTIONS]

OPTIONS:
    -h, --help                       Print help information
        --locked                     Ensure that the Cargo.lock file is up-to-date.
    -m, --manifest <MANIFEST>        Manifest file name of indexer being built.
        --native                     Building for native execution.
    -p, --path <PATH>                Path to the indexer project.
        --profile <PROFILE>          Build with the given profile.
    -r, --release                    Build optimized artifacts with the release profile.
        --target <TARGET>            Target at which to compile. [default: wasm32-unknown-unknown]
        --target-dir <TARGET_DIR>    Directory for all generated artifacts and intermediate files.
    -v, --verbose                    Enable verbose output.

forc index start

Start a local Fuel Indexer service.

forc index start
USAGE:
    forc-index start [OPTIONS]

OPTIONS:
        --auth-enabled
            Require users to authenticate for some operations.

        --auth-strategy <AUTH_STRATEGY>
            Authentication scheme used.

    -c, --config <FILE>
            Indexer service config file.

        --database <DATABASE>
            Database type. [default: postgres] [possible values: postgres]

        --embedded-database
            Automatically create and start database using provided options or defaults.

        --fuel-node-host <FUEL_NODE_HOST>
            Host of the running Fuel node. [default: localhost]

        --fuel-node-port <FUEL_NODE_PORT>
            Listening port of the running Fuel node. [default: 4000]

        --graphql-api-host <GRAPHQL_API_HOST>
            GraphQL API host. [default: localhost]

        --graphql-api-port <GRAPHQL_API_PORT>
            GraphQL API port. [default: 29987]

    -h, --help
            Print help information

        --jwt-expiry <JWT_EXPIRY>
            Amount of time (seconds) before expiring token (if JWT scheme is specified).

        --jwt-issuer <JWT_ISSUER>
            Issuer of JWT claims (if JWT scheme is specified).

        --jwt-secret <JWT_SECRET>
            Secret used for JWT scheme (if JWT scheme is specified).

        --log-level <LOG_LEVEL>
            Log level passed to the Fuel Indexer service. [default: info] [possible values: info,
            debug, error, warn]

    -m, --manifest <FILE>
            Index config file.

        --max-body-size <MAX_BODY_SIZE>
            Max body size for GraphQL API requests. [default: 5242880]

        --metrics
            Use Prometheus metrics reporting.

        --postgres-database <POSTGRES_DATABASE>
            Postgres database.

        --postgres-host <POSTGRES_HOST>
            Postgres host.

        --postgres-password <POSTGRES_PASSWORD>
            Postgres password.

        --postgres-port <POSTGRES_PORT>
            Postgres port.

        --postgres-user <POSTGRES_USER>
            Postgres username.

        --run-migrations
            Run database migrations before starting service.

        --stop-idle-indexers
            Prevent indexers from running without handling any blocks.

    -V, --version
            Print version information

        --verbose
            Enable verbose logging.

forc index deploy

Deploy an indexer to an indexer service.

forc index deploy --url https://indexer.fuel.network
USAGE:
    forc-index deploy [OPTIONS]

OPTIONS:
        --auth <AUTH>                Authentication header value.
    -h, --help                       Print help information
        --locked                     Ensure that the Cargo.lock file is up-to-date.
    -m, --manifest <MANIFEST>        Path to the manifest of indexer project being deployed.
        --native                     Building for native execution.
    -p, --path <PATH>                Path to the indexer project.
        --profile <PROFILE>          Build with the given profile.
    -r, --release                    Build optimized artifacts with the release profile.
        --skip-build                 Do not build before deploying.
        --target <TARGET>            Target at which to compile. [default: wasm32-unknown-unknown]
        --target-dir <TARGET_DIR>    Directory for all generated artifacts and intermediate files.
        --url <URL>                  URL at which to deploy indexer assets. [default:
                                     http://localhost:29987]
    -v, --verbose                    Enable verbose logging.

forc index remove

Stop and remove a running indexer.

forc index remove --url https://indexer.fuel.network
USAGE:
    forc-index remove [OPTIONS]

OPTIONS:
        --auth <AUTH>            Authentication header value.
    -h, --help                   Print help information
    -m, --manifest <MANIFEST>    Path to the manifest of the indexer project being removed.
    -p, --path <PATH>            Path to the indexer project.
        --url <URL>              URL at which indexer is deployed. [default: http://localhost:29987]
    -v, --verbose                Enable verbose output.

forc index auth

Authenticate against an indexer operator.

IMPORTANT: There must be an indexer service running at --url in order for this to work.

forc index auth --account 0
USAGE:
    forc-index auth [OPTIONS] --account <ACCOUNT>

OPTIONS:
        --account <ACCOUNT>    Index of account to use for signing.
    -h, --help                 Print help information
        --url <URL>            URL at which to deploy indexer assets. [default:
                               http://localhost:29987]
    -v, --verbose              Verbose output.

forc index revert

Revert the running indexer to the previous version.

forc index revert
USAGE:
    forc-index revert [OPTIONS]

OPTIONS:
        --auth <AUTH>            Authentication header value.
    -h, --help                   Print help information
    -m, --manifest <MANIFEST>    Path to the manifest of the indexer project being reverted.
    -p, --path <PATH>            Path of indexer project.
        --url <URL>              URL at which indexer is deployed. [default: http://localhost:29987]
    -v, --verbose                Enable verbose output.

forc index postgres

forc index postgres is provided as a way to simplify the setup and management of an embedded Postgres database. After you have installed fuelup, you can run the forc index postgres help command in your terminal to view the available commands.

forc index postgres help
USAGE:
    forc-index postgres <SUBCOMMAND>

OPTIONS:
    -h, --help       Print help information
    -V, --version    Print version information

SUBCOMMANDS:
    create    Create a new database
    drop      Drop a database
    help      Print this message or the help of the given subcommand(s)
    start     Start PostgreSQL with a database
    stop      Stop PostgreSQL

forc index postgres create

Create a new database.

forc index postgres create example_database
USAGE:
    forc-index postgres create [OPTIONS] <NAME>

ARGS:
    <NAME>    Name of database.

OPTIONS:
        --auth-method <AUTH_METHOD>
            Authentication method. [default: plain] [possible values: plain, md5, scram-sha-256]

    -c, --config <CONFIG>
            Fuel indexer configuration file.

        --database-dir <DATABASE_DIR>
            Where to store the PostgreSQL database.

    -h, --help
            Print help information

        --migration-dir <MIGRATION_DIR>
            The directory containing migration scripts.

    -p, --password <PASSWORD>
            Database password. [default: postgres]

    -p, --port <PORT>
            Port to use. [default: 5432]

        --persistent
            Do not clean up files and directories on database drop.

        --postgres-version <POSTGRES_VERSION>
            PostgreSQL version to use. [default: v14] [possible values: v15, v14, v13, v12, v11,
            v10, v9]

        --start
            Start the PostgreSQL instance after creation.

        --timeout <TIMEOUT>
            Duration to wait before terminating process execution for pg_ctl start/stop and initdb.

    -u, --user <USER>
            Database user. [default: postgres]

forc index postgres start

Start PostgreSQL with a database.

forc index postgres start example_database
USAGE:
    forc-index postgres start [OPTIONS] <NAME>

ARGS:
    <NAME>    Name of database.

OPTIONS:
    -c, --config <CONFIG>                Fuel indexer configuration file.
        --database-dir <DATABASE_DIR>    Where the PostgreSQL database is stored.
    -h, --help                           Print help information

forc index postgres stop

Stop PostgreSQL.

forc index postgres stop example_database
USAGE:
    forc-index postgres stop [OPTIONS] <NAME>

ARGS:
    <NAME>    Name of database.

OPTIONS:
    -c, --config <CONFIG>                Fuel indexer configuration file.
        --database-dir <DATABASE_DIR>    Where the PostgreSQL database is stored.
    -h, --help                           Print help information

forc index postgres drop

Drop a database.

forc index postgres drop example_database
USAGE:
    forc-index postgres drop [OPTIONS] <NAME>

ARGS:
    <NAME>    Name of database.

OPTIONS:
    -c, --config <CONFIG>
            Fuel indexer configuration file.

        --database-dir <DATABASE_DIR>
            Where the PostgreSQL database is stored.

    -h, --help
            Print help information

        --remove-persisted
            Remove all database files that might have been persisted to disk.

Authentication

The Fuel indexer's authentication functionality offers users a range of options for verifying their identity. The system supports any arbitrary authentication scheme (in theory); however, in practice the service defaults to JWT authentication due to its stateless nature and popularity. To authenticate using JWT, users ask an index operator for a nonce, sign that nonce with their wallet, then send both the nonce and signature to the indexer operator for verification. Once the signature is confirmed as valid, a valid JWT is produced and returned to the user, and the user is authenticated.

It is important to note that authentication is disabled by default. However, if authentication is enabled, users will need to authenticate before performing operations that involve modifying the state of the service, such as uploading, stopping, or reverting indexers. The new authentication functionality offers a flexible and secure way for users to authenticate and perform operations that affect the service's state.

Usage

Below is a demonstration of basic JWT authentication using an indexer operator at "https://indexer.fuel.network".

forc index auth --url https://indexer.fuel.network:29987

You will first be prompted for the password for your wallet:

Please enter your password:

After successfully entering your wallet password you should be presented with your new JWT token.

βœ… Successfully authenticated at https://indexer.fuel.network:29987/api/auth/signature.

Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiODNlNjhiOTFmNDhjYWM4M....

Use this token in your Authorization headers when making requests for operations such as uploading indexers, stopping indexers, and other operations that mutate the service's state.

For Contributors

Thanks for your interest in contributing to the Fuel indexer! Below we've compiled a list of sections that you may find useful as you work on a potential contribution:

Dependencies

fuelup

We use fuelup in order to get the binaries produced by services in the Fuel ecosystem. Fuelup will install binaries related to the Fuel node, the Fuel indexer, the Fuel orchestrator (forc), and other components. fuelup can be downloaded here.

docker

We use Docker to produce reproducible environments for users that may be concerned with installing components with large sets of dependencies (e.g. Postgres). Docker can be downloaded here.

Database

At this time, the Fuel indexer requires the use of a database. We currently support a single database option: Postgres. PostgreSQL is a database solution with a complex feature set and requires a database server.

PostgreSQL

Note: The following explanation is for demonstration purposes only. A production setup should use secure users, permissions, and passwords.

On macOS systems, you can install PostgreSQL through Homebrew. If it isn't present on your system, you can install it according to the instructions. Once installed, you can add PostgreSQL to your system by running brew install postgresql. You can then start the service through brew services start postgresql. You'll need to create a database for your indexed data, which you can do by running createdb [DATABASE_NAME]. You may also need to create the postgres role; you can do so by running createuser -s postgres.

For Linux-based systems, the installation process is similar. First, you should install PostgreSQL according to your distribution's instructions. Once installed, there should be a new postgres user account; you can switch to that account by running sudo -i -u postgres. After you have switched accounts, you may need to create a postgres database role by running createuser --interactive. You will be asked a few questions; the name of the role should be postgres and you should elect for the new role to be a superuser. Finally, you can create a database by running createdb [DATABASE_NAME].

In either case, your PostgreSQL database should now be accessible at postgres://postgres@localhost:5432/[DATABASE_NAME].

SQLx

After setting up your database, you should install sqlx-cli in order to run migrations for your indexer service. You can do so by running cargo install sqlx-cli --features postgres. Once installed, you can run the migrations by running the following command after changing DATABASE_URL to match your setup.
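
For a local setup like the one described above, that amounts to running the Postgres migrations that ship with the repository (the same command shown in the Building from Source section below):

cd packages/fuel-indexer-database/postgres
DATABASE_URL=postgres://postgres@localhost sqlx migrate run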

Building from Source

Clone repository

git clone git@github.com:FuelLabs/fuel-indexer.git && cd fuel-indexer/

Run migrations

Postgres migrations

cd packages/fuel-indexer-database/postgres
DATABASE_URL=postgres://postgres@localhost sqlx migrate run

Start the service

cargo run --bin fuel-indexer run

You can also start the service with a fresh local node for development purposes:

cargo run --features fuel-core-lib --bin fuel-indexer run

If no configuration file or other options are passed, the service will default to a postgres://postgres@localhost database connection.

Testing

Fuel indexer tests are currently broken out by a database feature flag. In order to run tests with a Postgres backend, use --features postgres.

Further, the indexer uses end-to-end (E2E) tests. In order to trigger these end-to-end tests, you'll want to use the e2e features flag: --features e2e.

All end-to-end tests also require the use of a database feature. For example, to run the end-to-end tests with a Postgres backend, use --features e2e,postgres.

Default tests

cargo test --locked --workspace --all-targets

End-to-end tests

cargo test --locked --workspace --all-targets --features e2e,postgres

trybuild tests

For tests related to the meta-programming used in the Fuel indexer, we use trybuild.

RUSTFLAGS='-D warnings' cargo test -p fuel-indexer-macros --locked

Contributing to Fuel Indexer

Thanks for your interest in contributing to Fuel Indexer! This document outlines some of the conventions on building, running, and testing Fuel Indexer.

Fuel Indexer has many dependent repositories. If you need any help or mentoring getting started, understanding the codebase, or anything else, please ask on our Discord.

Code Standards

We use an RFC process to maintain our code standards. They currently live in the RFC repo: https://github.com/FuelLabs/rfcs/tree/master/text/code-standards

Building and setting up a development workspace

The Fuel indexer is mostly written in Rust, but includes components written in C++ (RocksDB). We currently use the latest stable Rust toolchain to build the project. For rustfmt, however, we use the nightly toolchain because it provides more code style features (you can check rustfmt.toml).

Prerequisites

To build the Fuel indexer, you'll need to at least have the following installed:

  • git - version control
  • rustup - Rust installer and toolchain manager
  • clang - Used to build system libraries (required for rocksdb).
  • postgresql/libpq - Used for Postgres backend.

See the README.md for platform specific setup steps.

Getting the repository

The instructions that follow assume you are working from within this repository:

git clone https://github.com/FuelLabs/fuel-indexer
cd fuel-indexer

Configuring your Rust toolchain

rustup is the official toolchain manager for Rust.

We use some additional components such as clippy and rustfmt; to install them, run:

rustup component add clippy
rustup component add rustfmt

Fuel Indexer also uses a few other tools installed via cargo:

cargo install sqlx-cli
cargo install wasm-snip

Building and testing

Fuel Indexer's two primary crates are fuel-indexer and fuel-indexer-api-server.

You can build Fuel Indexer:

cargo build -p fuel-indexer -p fuel-indexer-api-server

This command will run cargo build and also dump the latest schema into the /assets/ folder.

Linting is done using rustfmt and clippy, which are each separate commands:

cargo fmt --all --check
cargo clippy --all-features --all-targets -- -D warnings

The test suite follows the Rust cargo standards. The GraphQL service will be instantiated by Tower and will emulate a server/client structure.

Testing is simply done using Cargo:

RUSTFLAGS='-D warnings' SQLX_OFFLINE=1 cargo test --locked --all-targets --all-features

Build Options

For optimal performance, we recommend using native builds. The generated binary will be optimized for your CPU and may contain specific instructions supported only by your hardware.

To build, run:

cargo build --release --bin fuel-indexer

The generated binary will be located in ./target/release/fuel-indexer

Build issues

  • Due to dependencies on external components such as RocksDB, build times can be long without caching. We currently use sccache.

cargo build -p fuel-indexer --no-default-features

Contribution flow

This is a rough outline of what a contributor's workflow looks like:

  • Make sure what you want to contribute is already tracked as an issue. We may discuss the problem and solution in the issue. ⚠️ DO NOT submit PRs that do not have an associated issue ⚠️
  • Create a Git branch from where you want to base your work.
    • Most work is usually branched off of master
    • Give your branch a name related to the work you're doing
  • Write code, add test cases, and commit your work.
  • Run tests and make sure all tests pass.
  • Your commit message should be formatted as [commit type]: [short commit blurb]
    • Examples:
      • If you fixed a bug, your message is fix: database locking issue
      • If you added new functionality, your message would be enhancement: i add something super cool
      • If you just did a chore your message is: chore: i did something not fun
    • Keeping commit messages short and consistent helps users parse release notes
  • Push up your branch to Github then (on the right hand side of the Github UI):
    • Assign yourself as the owner of the PR
    • Add any and all necessary labels to your PR
    • Link the issue your PR solves, to your PR
  • If you are part of the FuelLabs Github org, please open a PR from the repository itself.
  • Otherwise, push your changes to a branch in your fork of the repository and submit a pull request.
    • Make sure to mention the issue, created in step 1, in the commit message.
  • Your PR will be reviewed and some changes may be requested.
    • Once you've made changes, your PR must be re-reviewed and approved.
    • If the PR becomes out of date, you can use GitHub's 'update branch' button.
    • If there are conflicts, you can merge and resolve them locally. Then push to your PR branch.
      • Any changes to the branch will require a re-review.
  • Our CI (Github Actions) automatically tests all authorized pull requests.
  • Use Github to merge the PR once approved.

Commit categories

  • bug: If fixing broken functionality
  • enhancement: If adding new functionality
  • chore: If finishing valuable work (that's no fun!)
  • testing: If only updating/writing tests
  • docs: If just updating docs
  • feat: If adding a non-trivial new feature
  • There will be categories not covered in this doc - use your best judgement!

Thanks for your contributions!

Finding something to work on

For beginners, we have prepared many suitable tasks for you. Check out our Good First Issues for a list.

If you are planning something that relates to multiple components or changes current behaviors, make sure to open an issue to discuss with us before continuing.

Release Schedule

Fuel indexer releases follow semantic versioning: https://semver.org/

Major releases

  • E.g., v2.0.0 -> v3.0.0
  • Major releases of large features and breaking changes
  • Cadence: TBD - as needed

Minor releases

  • E.g., v0.3.0 -> v0.4.0
  • General releases of new functionality, fixes, and some breaking changes
  • Cadence: Every other week, Tuesday morning 11am EST

Patch releases

  • E.g., v0.1.3 -> v0.1.4
  • Releases for bug fixes and time sensitive improvements
  • Cadence: Ad-hoc as needed throughout the week

Glossary

Here is a list of terms and their definitions in order to help users properly understand certain concepts about the Fuel indexer.

  • asset: a component that is used to create and operate an indexer
  • executor: an async task run by an indexer
  • index/indices: data produced by an indexer
  • indexer service: a service that runs one or more indexers
  • indexer: an abstraction that takes data from the Fuel virtual machine and produces indices