Fuel Indexer
The Fuel indexer is a standalone service that can be used to index various components of the blockchain. These indexable components include blocks, transactions, receipts, and state within a Fuel network, allowing for high-performance read-only access to the blockchain for advanced dApp use-cases.
By using a combination of Fuel-flavored GraphQL schema, a SQL backend, and indices written in Rust, users of the Fuel indexer can get started creating production-ready backends for their dApps, meant to go fast 🚗💨.
Feel free to check out the Quickstart if you want to build dApp backends right away. And if you'd like to contribute to the Fuel indexer project, please read our contributor guidelines and the For Contributors section of the book.
Architecture

The Fuel indexer is meant to run alongside a Fuel node and a database. Generally, the typical flow of information through the indexer is as follows:
- A Sway smart contract emits receipts during its execution on the Fuel node.
- Blocks, transactions, and receipts from the node are monitored by the Fuel indexer service and checked for specific user-defined event types.
- When a specific event type is found, the indexer executes the corresponding handler from an index module.
- The handler processes the event and stores the index information in the database.
- A dApp queries for blockchain data by using the indexer's GraphQL API endpoint, which fetches the desired information from the corresponding index in the database and returns it to the user.
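To make that flow concrete, here is a minimal sketch of an index module handler. It borrows the `Greeting`/`Greeter` names from the Hello World example later in this book, and the manifest path is illustrative only:

```rust
// A minimal sketch of steps 3-5 above: the indexer finds a user-defined event
// type (`Greeting`, from the contract ABI) and runs the matching handler,
// which builds an entity defined in the GraphQL schema and saves it.
extern crate alloc;
use fuel_indexer_macros::indexer;
use fuel_indexer_plugin::prelude::*;

#[indexer(manifest = "hello_index.manifest.yaml")]
mod hello_index {
    fn index_logged_greeting(event: Greeting, block: BlockData) {
        let name = trim_sized_ascii_string(&event.person.name);
        let greeter = Greeter {
            id: first8_bytes_to_u64(&name),
            name,
            first_seen: block.height,
            last_seen: block.height,
        };
        greeter.save();
    }
}
```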
fuelup
We strongly recommend that you use the Fuel indexer through `forc`, the Fuel orchestrator. You can get `forc` (and other Fuel components) by way of `fuelup`, the Fuel toolchain manager. Install `fuelup` by running the following command, which downloads and runs the installation script.
curl \
--proto '=https' \
--tlsv1.2 -sSf \
https://fuellabs.github.io/fuelup/fuelup-init.sh | sh
After `fuelup` has been installed, the `forc index` command and `fuel-indexer` binaries will be available on your system.
Database
At this time, the Fuel indexer requires the use of a database. We currently support a single database option: PostgreSQL. PostgreSQL is a database solution with a complex feature set and requires a database server.
PostgreSQL
Note: The following explanation is for demonstration purposes only. A production setup should use secure users, permissions, and passwords.
macOS
On macOS systems, you can install PostgreSQL through Homebrew. If Homebrew isn't present on your system, you can install it according to its instructions. Once installed, you can add PostgreSQL to your system by running `brew install postgresql`. You can then start the service through `brew services start postgresql`. You'll need to create a database for your index data, which you can do by running `createdb [DATABASE_NAME]`. You may also need to create the `postgres` role; you can do so by running `createuser -s postgres`.
Linux
For Linux-based systems, the installation process is similar. First, you should install PostgreSQL according to your distribution's instructions. Once installed, there should be a new `postgres` user account; you can switch to that account by running `sudo -i -u postgres`. After you have switched accounts, you may need to create a `postgres` database role by running `createuser --interactive`. You will be asked a few questions; the name of the role should be `postgres` and you should elect for the new role to be a superuser. Finally, you can create a database by running `createdb [DATABASE_NAME]`.
In either case, your PostgreSQL database should now be accessible at `postgres://postgres@127.0.0.1:5432/[DATABASE_NAME]`.
WASM
Two additional cargo components will be required to build your indexers: `wasm-snip` and the `wasm32-unknown-unknown` target.
As of this writing, there is a small bug in newly built Fuel indexer WASM modules that produces a WASM runtime error due to an errant upstream dependency. For now, you can use `wasm-snip` to remove the errant symbols from the WASM module. An example can be found in the related script here.
wasm-snip
To install `wasm-snip`:
cargo install wasm-snip
wasm32 target
To install the `wasm32-unknown-unknown` target via `rustup`:
rustup target add wasm32-unknown-unknown
Docker
If you don't want to install the Fuel indexer or its dependencies directly onto your system, you can use Docker to run it as an isolated container. You can install Docker by following its install instructions. For reference purposes, we provide a `docker compose` file that runs a Postgres database and the Fuel indexer service.
Quickstart
In this tutorial you will:
- Bootstrap your development environment.
- Create, build, and deploy an index to an indexer service hooked up to Fuel's `beta-2` testnet.
- Query the indexer service for indexed data using GraphQL.
1. Setting up your environment
In this Quickstart, we'll use Docker Compose to spin up a Fuel indexer service with a PostgreSQL database backend. We will also use Fuel's toolchain manager `fuelup` in order to install the `forc-index` binary that we'll use to develop our index.
1.1 Install fuelup
To install `fuelup` with the default features/options, use the following command, which downloads the `fuelup` installation script and runs it interactively.
curl \
--proto '=https' \
--tlsv1.2 -sSf https://fuellabs.github.io/fuelup/fuelup-init.sh | sh
If you require a non-default `fuelup` installation, please read the `fuelup` installation docs.
2. Using the `forc-index` plugin
- The primary means of interfacing with the Fuel indexer for index development is the `forc-index` CLI tool. `forc-index` is a `forc` plugin specifically created to interface with the Fuel indexer service.
- Since we already installed `fuelup` in a previous step [1.1], we should be able to check that our `forc-index` binary was successfully installed and added to our `PATH`.
which forc-index
/Users/me/.fuelup/bin/forc-index
IMPORTANT: `fuelup` will install several binaries from the Fuel ecosystem and add them to your `PATH`, including the `fuel-indexer` binary. The `fuel-indexer` binary is the primary binary that users can use to spin up a Fuel indexer service.
which fuel-indexer
/Users/me/.fuelup/bin/fuel-indexer
2.1 Check for components
Once the `forc-index` plugin is installed, let's go ahead and see what indexer components we have installed.
Many of these components are required for development work (e.g., `fuel-core`, `psql`), but some are required for non-development usage as well (e.g., `wasm-snip`, `fuelup`).
forc index check
+--------+------------------------+---------------------------------------------------------+
| Status | Component              | Details                                                 |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | fuel-indexer binary    | /Users/rashad/.fuelup/bin/fuel-indexer                  |
+--------+------------------------+---------------------------------------------------------+
|   ⛔️   | fuel-indexer service   | Failed to detect service at Port(29987).                |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | psql                   | /usr/local/bin/psql                                     |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | fuel-core              | /Users/rashad/.fuelup/bin/fuel-core                     |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | docker                 | /usr/local/bin/docker                                   |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | fuelup                 | /Users/rashad/.fuelup/bin/fuelup                        |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | wasm-snip              | /Users/rashad/.cargo/bin/wasm-snip                      |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | forc-postgres          | /Users/rashad/.fuelup/bin/fuelup                        |
+--------+------------------------+---------------------------------------------------------+
|   ✅   | rustc                  | /Users/rashad/.cargo/bin/rustc                          |
+--------+------------------------+---------------------------------------------------------+
2.2 Database setup
To quickly set up and bootstrap the PostgreSQL database that we'll need, we'll use the `forc-postgres` plugin that is included in `fuelup`.
IMPORTANT: Ensure that any local PostgreSQL instance running on port `5432` is stopped.
forc index postgres create postgres --persistent
Downloading, unpacking, and bootstrapping database.
▹▸▹▹▹ ⏱ Setting up database...
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /Users/rashad/.fuel/indexer/postgres ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... America/New_York
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
/Users/rashad/Library/Caches/pg-embed/darwin/amd64/14.6.0/bin/pg_ctl -D /Users/rashad/.fuel/indexer/postgres -l logfile start
▹▹▸▹▹ ⏱ Setting up database...
💡 Creating database at 'postgres://postgres:postgres@localhost:5432/postgres'.
2023-02-10 11:30:45.325 EST [30902] LOG: listening on IPv6 address "::1", port 5432
2023-02-10 11:30:45.325 EST [30902] LOG: listening on IPv4 address "127.0.0.1", port 5432
2023-02-10 11:30:45.326 EST [30902] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2023-02-10 11:30:45.328 EST [30903] LOG: database system was shut down at 2023-02-10 11:30:45 EST
2023-02-10 11:30:45.331 EST [30902] LOG: database system is ready to accept connections
done
server started
2023-02-10 11:30:45.421 EST [30910] ERROR: database "postgres" already exists
2023-02-10 11:30:45.421 EST [30910] STATEMENT: CREATE DATABASE "postgres"
CREATE DATABASE "postgres"; rows affected: 0, rows returned: 0, elapsed: 325.683Β΅s
Default database postgres already exists.
Writing PgEmbedConfig to "/Users/rashad/.fuel/indexer/postgres/postgres-db.json"
▪▪▪▪▪ ⏱ Setting up database...
✅ Successfully created database at 'postgres://postgres:postgres@localhost:5432/postgres'.
2023-02-10 11:30:45.424 EST [30902] LOG: received fast shutdown request
2023-02-10 11:30:45.424 EST [30902] LOG: aborting any active transactions
2023-02-10 11:30:45.424 EST [30902] LOG: background worker "logical replication launcher" (PID 30909) exited with exit code 1
2023-02-10 11:30:45.424 EST [30904] LOG: shutting down
2023-02-10 11:30:45.428 EST [30902] LOG: database system is shut down
waiting for server to shut down.... done
server stopped
Then we can start our database with
forc index postgres start postgres
Using database directory at "/Users/rashad/.fuel/indexer/postgres"
Starting PostgreSQL.
waiting for server to start....2023-02-09 16:11:37.360 EST [86873] LOG: starting PostgreSQL 14.6 on x86_64-apple-darwin20.6.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit
2023-02-09 16:11:37.362 EST [86873] LOG: listening on IPv6 address "::1", port 5432
2023-02-09 16:11:37.362 EST [86873] LOG: listening on IPv4 address "127.0.0.1", port 5432
2023-02-09 16:11:37.362 EST [86873] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2023-02-09 16:11:37.365 EST [86874] LOG: database system was shut down at 2023-02-09 16:11:25 EST
2023-02-09 16:11:37.368 EST [86873] LOG: database system is ready to accept connections
done
server started
select exists(SELECT 1 from …; rows affected: 0, rows returned: 1, elapsed: 2.860ms
select
exists(
SELECT
1
from
pg_database
WHERE
datname = $1
)
✅ Successfully started database at 'postgres://postgres:postgres@localhost:5432/postgres'.
2023-02-09 16:11:37.460 EST [86881] LOG: could not receive data from client: Connection reset by peer
You can `Ctrl+C` to exit the `forc index postgres start` process, and your database should still be running in the background.
2.3 Creating a new index
Now that we have our development environment set up, the next step is to create an index.
forc index new hello-index --namespace my_project && cd hello-index
The `namespace` of your project is a required option. You can think of a `namespace` as your organization name or company name. Your index project might contain one or many indices, all under the same `namespace`.
forc index new hello-index --namespace my_project
(ASCII art logo banner)
An easy-to-use, flexible indexing service built to go fast. 🚗💨
----
Read the Docs:
- Fuel Indexer: https://github.com/FuelLabs/fuel-indexer
- Fuel Indexer Book: https://fuellabs.github.io/fuel-indexer/latest
- Sway Book: https://fuellabs.github.io/sway/latest
- Rust SDK Book: https://fuellabs.github.io/fuels-rs/latest
Join the Community:
- Follow us @SwayLang: https://twitter.com/fuellabs_
- Ask questions in dev-chat on Discord: https://discord.com/invite/xfpK4Pe
Report Bugs:
- Fuel Indexer Issues: https://github.com/FuelLabs/fuel-indexer/issues/new
Take a quick tour.
`forc index check`
List indexer components.
`forc index new`
Create a new index.
`forc index init`
Create a new index in an existing directory.
`forc index start`
Start a local indexer service.
`forc index build`
Build your index.
`forc index deploy`
Deploy your index.
`forc index remove`
Stop a running index.
IMPORTANT: If you want more details on how this index works, check out our block explorer index example.
2.4 Deploying our index
We now have a brand new index that will index some blocks and transactions, but we need to build and deploy it in order to see it in action.
2.4.1 Starting an indexer service
forc index start \
--fuel-node-host node-beta-2.fuel.network \
--fuel-node-port 80
2.4.2 Deploying your index to your Fuel indexer service
With our database and Fuel indexer containers up and running, we'll deploy the index that we previously created. If all goes well, you should see the following:
forc index deploy --manifest hello_index.manifest.yaml
▹▹▸▹▹ ⏰ Building...     Finished dev [unoptimized + debuginfo] target(s) in 0.87s
▪▪▪▪▪ ✅ Build succeeded.
Deploying index at hello_index.manifest.yaml to http://127.0.0.1:29987/api/index/my_project/hello_index
▹▸▹▹▹ 🚀 Deploying...
{
"assets": [
{
"digest": "79e74d6a7b68a35aeb9aa2dd7f6083dae5fdba5b6a2f199529b6c49624d1e27b",
"id": 1,
"index_id": 1,
"version": 1
},
{
"digest": "4415628d9ea79b3c3f1e6f02b1af3416c4d0b261b75abe3cc81b77b7902549c5",
"id": 1,
"index_id": 1,
"version": 1
},
{
"digest": "e901eba95ce8b4d1c159c5d66f24276dc911e87dbff55fb2c10d8b371528eacc",
"id": 1,
"index_id": 1,
"version": 1
}
],
"success": "true"
}
▪▪▪▪▪ ✅ Successfully deployed index.
3. Querying for data
With our index deployed, after a few seconds, we should be able to query for newly indexed data.
Below, we write a simple GraphQL query that returns a few fields from all transactions that we've indexed.
curl -X POST http://127.0.0.1:29987/api/graph/my_project/hello_index \
-H 'content-type: application/json' \
-d '{"query": "query { tx { id hash block }}", "params": "b"}' \
| json_pp
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 364 100 287 100 77 6153 1650 --:--:-- --:--:-- --:--:-- 9100
[
   {
      "block" : 7017844286925529648,
      "hash" : "fb93ce9519866676813584eca79afe2d98466b3e2c8b787503b76b0b4718a565",
      "id" : 7292230935510476086
   },
   {
      "block" : 3473793069188998756,
      "hash" : "5ea2577727aaadc331d5ae1ffcbc11ec4c2ba503410f8edfb22fc0a72a1d01eb",
      "id" : 4136050720295695667
   },
   {
      "block" : 7221293542007912803,
      "hash" : "d2f638c26a313c681d75db2edfbc8081dbf5ecced87a41ec4199d221251b0578",
      "id" : 4049687577184449589
   }
]
Finished! 🥳
Congrats, you just created, built, and deployed your first index on the world's fastest execution layer. For more detailed info on how the Fuel indexer service works, make sure you read the book.
Starting the Fuel Indexer
Using CLI options
USAGE:
fuel-indexer run [OPTIONS]
OPTIONS:
-c, --config <CONFIG>
Indexer service config file.
--database <DATABASE>
Database type. [default: postgres] [possible values: postgres]
--fuel-node-host <FUEL_NODE_HOST>
Host of the running Fuel node. [default: 127.0.0.1]
--fuel-node-port <FUEL_NODE_PORT>
Listening port of the running Fuel node. [default: 4000]
--graphql-api-host <GRAPHQL_API_HOST>
GraphQL API host. [default: 127.0.0.1]
--graphql-api-port <GRAPHQL_API_PORT>
GraphQL API port. [default: 29987]
-h, --help
Print help information
--log-level <LOG_LEVEL>
Log level passed to the Fuel Indexer service. [default: info]
[possible values: info, debug, error, warn]
-m, --manifest <MANIFEST>
Index config file.
--metrics <metrics>
Use Prometheus metrics reporting. [default: true]
--postgres-database <POSTGRES_DATABASE>
Postgres database.
--postgres-host <POSTGRES_HOST>
Postgres host.
--postgres-password <POSTGRES_PASSWORD>
Postgres password.
--postgres-port <POSTGRES_PORT>
Postgres port.
--postgres-user <POSTGRES_USER>
Postgres username.
--run-migrations <run-migrations>
Run database migrations before starting service. [default: true]
-V, --version
Print version information
Using a configuration file
## The following is an example Fuel indexer configuration file.
##
## This configuration spec is intended to be used for a single instance
## of a Fuel indexer node or service.
## Fuel Node configuration
fuel_node:
host: 127.0.0.1
port: 4000
## GraphQL API configuration
graphql_api:
host: 127.0.0.1
port: 29987
run_migrations: false
## Database configuration options.
database:
postgres:
user: postgres
database:
password:
host: 127.0.0.1
port: 5432
metrics: true
Hello World
A "Hello World" type of program for the Fuel Indexer service.
//! A "Hello World" type of program for the Fuel Indexer service.
//!
//! Build this example's WASM module using the following command. Note that a
//! wasm32-unknown-unknown target will be required.
//!
//! ```bash
//! cargo build -p hello-index --release --target wasm32-unknown-unknown
//! ```
//!
//! Start a local test Fuel node
//!
//! ```bash
//! cargo run --bin fuel-node
//! ```
//!
//! With your database backend set up, now start your fuel-indexer binary using the
//! assets from this example:
//!
//! ```bash
//! cargo run --bin fuel-indexer -- --manifest examples/hello-world/hello_index.manifest.yaml
//! ```
//!
//! Now trigger an event.
//!
//! ```bash
//! cargo run --bin hello-bin
//! ```
extern crate alloc;
use fuel_indexer_macros::indexer;
use fuel_indexer_plugin::prelude::*;
#[indexer(manifest = "examples/hello-world/hello_index.manifest.yaml")]
mod hello_world_index {
fn index_logged_greeting(event: Greeting, block: BlockData) {
// Since all events require a u64 ID field, let's derive an ID using the
// name of the person in the Greeting
let greeter_name = trim_sized_ascii_string(&event.person.name);
let greeting = trim_sized_ascii_string(&event.greeting);
let greeter_id = first8_bytes_to_u64(&greeter_name);
// Here we 'get or create' a Salutation based on the ID of the event
// emitted in the LogData receipt of our smart contract
let greeting = match Salutation::load(event.id) {
Some(mut g) => {
// If we found an event, let's use block height as a proxy for time
g.last_seen = block.height;
g
}
None => {
// If we did not already have this Salutation stored in the database, create it. Here we
// show how you can use the Charfield type to store strings with length <= 255
let message = format!("{} 👋, my name is {}", &greeting, &greeter_name);
Salutation {
id: event.id,
message_hash: first32_bytes_to_bytes32(&message),
message,
greeter: greeter_id,
first_seen: block.height,
last_seen: block.height,
}
}
};
// Here we do the same with Greeter that we did for Salutation -- if we have an event
// already saved in the database, load it and update it. If we do not have this Greeter
// in the database then create one
let greeter = match Greeter::load(greeter_id) {
Some(mut g) => {
g.last_seen = block.height;
g
}
None => Greeter {
id: greeter_id,
first_seen: block.height,
name: greeter_name,
last_seen: block.height,
},
};
// Both entity saves will occur in the same transaction
greeting.save();
greeter.save();
}
}
Block Explorer
A rudimentary block explorer backend implementation demonstrating how to leverage basic Fuel indexer abstractions in order to build a cool dApp backend.
//! A rudimentary block explorer implementation demonstrating how blocks, transactions,
//! contracts, and accounts can be persisted into the database.
//!
//! Build this example's WASM module using the following command. Note that a
//! wasm32-unknown-unknown target will be required.
//!
//! ```bash
//! cargo build -p explorer-index --release --target wasm32-unknown-unknown
//! ```
//!
//! Use the fuel-indexer testing components to start your Fuel node and web API
//!
//! ```bash
//! bash scripts/utils/start_test_components.bash
//! ```
//!
//! With your database backend set up, now start your fuel-indexer binary using the
//! assets from this example:
//!
//! ```bash
//! cargo run --bin fuel-indexer -- --manifest examples/block-explorer/manifest.yaml
//! ```
extern crate alloc;
use fuel_indexer_macros::indexer;
use fuel_indexer_plugin::prelude::*;
use std::collections::HashSet;
// We'll pass our manifest to our #[indexer] attribute. This manifest contains
// all of the relevant configuration parameters in regard to how our index will
// work. In the fuel-indexer repository, we use relative paths (starting from the
// fuel-indexer root) but if you're building an index outside of the fuel-indexer
// project you'll want to use full/absolute paths.
#[indexer(manifest = "examples/block-explorer/explorer_index.manifest.yaml")]
mod explorer_index {
// When specifying args to your handler functions, you can either use types defined
// in your ABI JSON file, or you can use native Fuel types. These native Fuel types
// include various `Receipt`s, as well as more comprehensive data, in the form of
// blocks `BlockData` and transactions `TransactionData`. A list of native Fuel
// types can be found at:
//
// https://github.com/FuelLabs/fuel-indexer/blob/master/fuel-indexer-schema/src/types/fuel.rs#L28
fn index_explorer_data(block_data: BlockData) {
let mut block_gas_limit = 0;
// Convert the deserialized block `BlockData` struct that we get from our Fuel node, into
// a block entity `Block` that we can persist to the database. The `Block` type below is
// defined in our schema/explorer.graphql and represents the type that we will
// save to our database.
//
// Note: There is no miner/producer address for blocks in this example; the producer field
// was removed from the `Block` struct as part of fuel-core v0.12.
let block = Block {
id: block_data.id,
height: block_data.height,
timestamp: block_data.time,
gas_limit: block_gas_limit,
};
// Now that we've created the object for the database, let's save it.
block.save();
// Keep track of some Receipt data involved in this transaction.
let mut accounts = HashSet::new();
let mut contracts = HashSet::new();
for tx in block_data.transactions.iter() {
let mut tx_amount = 0;
let mut tokens_transferred = Vec::new();
// `Transaction::Script`, `Transaction::Create`, and `Transaction::Mint`
// are unused but demonstrate properties like gas, inputs,
// outputs, script_data, and other pieces of metadata. You can access
// properties that have the corresponding transaction `Field` traits
// implemented; examples below.
match &tx.transaction {
#[allow(unused)]
Transaction::Script(t) => {
Logger::info("Inside a script transaction. (>^‿^)>");
let gas_limit = t.gas_limit();
let gas_price = t.gas_price();
let maturity = t.maturity();
let script = t.script();
let script_data = t.script_data();
let receipts_root = t.receipts_root();
let inputs = t.inputs();
let outputs = t.outputs();
let witnesses = t.witnesses();
let json = &tx.transaction.to_json();
block_gas_limit += gas_limit;
}
#[allow(unused)]
Transaction::Create(t) => {
Logger::info("Inside a create transaction. <(^.^)>");
let gas_limit = t.gas_limit();
let gas_price = t.gas_price();
let maturity = t.maturity();
let salt = t.salt();
let bytecode_length = t.bytecode_length();
let bytecode_witness_index = t.bytecode_witness_index();
let inputs = t.inputs();
let outputs = t.outputs();
let witnesses = t.witnesses();
let storage_slots = t.storage_slots();
block_gas_limit += gas_limit;
}
#[allow(unused)]
Transaction::Mint(t) => {
Logger::info("Inside a mint transaction. <(^‿^<)");
let tx_pointer = t.tx_pointer();
let outputs = t.outputs();
}
}
for receipt in &tx.receipts {
// You can handle each receipt in a transaction `TransactionData` as you like.
//
// Below demonstrates how you can use parts of a receipt `Receipt` in order
// to persist entities defined in your GraphQL schema, to the database.
match receipt {
#[allow(unused)]
Receipt::Call { id, .. } => {
contracts.insert(Contract {
id: *id,
last_seen: 0,
});
}
#[allow(unused)]
Receipt::ReturnData { id, .. } => {
contracts.insert(Contract {
id: *id,
last_seen: 0,
});
}
#[allow(unused)]
Receipt::Transfer {
id,
to,
asset_id,
amount,
..
} => {
contracts.insert(Contract {
id: *id,
last_seen: 0,
});
let transfer = Transfer {
id: bytes32_from_inputs(
id,
[id.to_vec(), to.to_vec(), asset_id.to_vec()].concat(),
),
contract_id: *id,
receiver: *to,
amount: *amount,
asset_id: *asset_id,
};
transfer.save();
tokens_transferred.push(asset_id.to_string());
}
#[allow(unused)]
Receipt::TransferOut {
id,
to,
amount,
asset_id,
..
} => {
contracts.insert(Contract {
id: *id,
last_seen: 0,
});
accounts.insert(Account {
id: *to,
last_seen: 0,
});
tx_amount += amount;
let transfer_out = TransferOut {
id: bytes32_from_inputs(
id,
[id.to_vec(), to.to_vec(), asset_id.to_vec()].concat(),
),
contract_id: *id,
receiver: *to,
amount: *amount,
asset_id: *asset_id,
};
transfer_out.save();
}
#[allow(unused)]
Receipt::Log { id, rb, .. } => {
contracts.insert(Contract {
id: *id,
last_seen: 0,
});
let log = Log {
id: bytes32_from_inputs(id, u64::to_le_bytes(*rb).to_vec()),
contract_id: *id,
rb: *rb,
};
log.save();
}
#[allow(unused)]
Receipt::LogData { id, .. } => {
contracts.insert(Contract {
id: *id,
last_seen: 0,
});
Logger::info("LogData types are unused in this example. (>'')>");
}
#[allow(unused)]
Receipt::ScriptResult { result, gas_used } => {
let result: u64 = match result {
ScriptExecutionResult::Success => 1,
ScriptExecutionResult::Revert => 2,
ScriptExecutionResult::Panic => 3,
ScriptExecutionResult::GenericFailure(_) => 4,
};
let r = ScriptResult {
id: bytes32_from_inputs(
&[0u8; 32],
u64::to_be_bytes(result).to_vec(),
),
result,
gas_used: *gas_used,
};
r.save();
}
#[allow(unused)]
Receipt::MessageOut {
sender,
recipient,
amount,
..
} => {
tx_amount += amount;
accounts.insert(Account {
id: *sender,
last_seen: 0,
});
accounts.insert(Account {
id: *recipient,
last_seen: 0,
});
Logger::info("MessageOut receipt handled; sender and recipient accounts recorded.");
}
_ => {
Logger::info("This type is not handled yet.");
}
}
}
// Persist the transaction to the database via the `Tx` object defined in the GraphQL schema.
let tx_entity = Tx {
block: block.id,
timestamp: block.timestamp,
id: tx.id,
value: tx_amount,
status: tx.status.clone().into(),
tokens_transferred: Json(
serde_json::to_value(tokens_transferred)
.unwrap()
.to_string(),
),
};
tx_entity.save();
}
// Save all of our accounts
for account in accounts.iter() {
account.save();
}
// Save all of our contracts
for contract in contracts.iter() {
contract.save();
}
}
}
Once blocks have been added to the database by the indexer, you can query for them by using a query similar to the following:
curl -X POST http://127.0.0.1:29987/api/graph/fuel_examples \
-H 'content-type: application/json' \
-d '{"query": "query { block { id height timestamp }}", "params": "b"}' \
| json_pp
[
{
"height" : 1,
"id" : "f169a30cfcbf1eebd97a07b19de98e4b38a4367b03d1819943be41744339d38a",
"timestamp" : 1668710162
},
{
"height" : 2,
"id" : "a8c554758f78fe73054405d38099f5ad21a90c05206b5c6137424985c8fd10c7",
"timestamp" : 1668710163
},
{
"height" : 3,
"id" : "850ab156ddd9ac9502768f779936710fd3d792e9ea79bc0e4082de96450b5174",
"timestamp" : 1668710312
},
{
"height" : 4,
"id" : "19e19807c6988164b916a6877fe049d403d55a07324fa883cb7fa5cdb33438e2",
"timestamp" : 1668710313
},
{
"height" : 5,
"id" : "363af43cfd2a6d8af166ee46c15276b24b130fc6a89ce7b3c8737d29d6d0e1bb",
"timestamp" : 1668710314
}
]
Blocks and Transactions
You can use the `BlockData` and `TransactionData` data structures to index important information about the Fuel network for your dApp.
BlockData
pub struct BlockData {
pub height: u64,
pub id: Bytes32,
pub producer: Option<Bytes32>,
pub time: i64,
pub transactions: Vec<TransactionData>,
}
The `BlockData` struct is how blocks are represented in the Fuel indexer. It contains metadata such as the ID, height, and time, as well as a list of the transactions it contains (represented by `TransactionData`). It also contains the public key hash of the block producer, if present.
TransactionData
pub struct TransactionData {
pub transaction: Transaction,
pub status: TransactionStatus,
pub receipts: Vec<Receipt>,
pub id: TxId,
}
The `TransactionData` struct contains important information about a transaction in the Fuel network. The `id` field is the transaction hash, which is a 32-byte string. The `receipts` field contains a list of `Receipt`s, which are generated by a Fuel node during the execution of a Sway smart contract; you can find more information in the Receipts section.
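As a rough sketch (following the block explorer example earlier in this book), a handler inside an `#[indexer]` module can take a `BlockData` and walk each `TransactionData` and its receipts:

```rust
// Inside an #[indexer] module: inspect every transaction in a block and every
// receipt produced while executing it.
fn index_block(block_data: BlockData) {
    for tx in block_data.transactions.iter() {
        // `tx.id` is the transaction hash; `tx.status` is its TransactionStatus.
        for receipt in &tx.receipts {
            match receipt {
                Receipt::Log { .. } => Logger::info("Found a Log receipt."),
                _ => {}
            }
        }
    }
}
```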
Transaction
pub enum Transaction {
Script(Script),
Create(Create),
Mint(Mint),
}
`Transaction` refers to the Fuel transaction entity and can be one of three distinct types: `Script`, `Create`, or `Mint`. Explaining the differences between each of the types is out of scope for the Fuel indexer; however, you can find information about the `Transaction` type in the Fuel specifications.
enum TransactionType : uint8 {
Script = 0,
Create = 1,
Mint = 2,
}
name | type | description |
---|---|---|
type | TransactionType | Transaction type. |
data | One of TransactionScript, TransactionCreate, or TransactionMint | Transaction data. |
Transaction is invalid if:
- `type > TransactionType.Create`
- `gasLimit > MAX_GAS_PER_TX`
- `blockheight() < maturity`
- `inputsCount > MAX_INPUTS`
- `outputsCount > MAX_OUTPUTS`
- `witnessesCount > MAX_WITNESSES`
- No inputs are of type `InputType.Coin` or `InputType.Message`
- More than one output is of type `OutputType.Change` for any asset ID in the input set
- Any output is of type `OutputType.Change` for any asset ID not in the input set
- More than one input of type `InputType.Coin` for any Coin ID in the input set
- More than one input of type `InputType.Contract` for any Contract ID in the input set
- More than one input of type `InputType.Message` for any Message ID in the input set
When serializing a transaction, fields are serialized as follows (with inner structs serialized recursively):
- `uint8`, `uint16`, `uint32`, `uint64`: big-endian right-aligned to 8 bytes.
- `byte[32]`: as-is.
- `byte[]`: as-is, with padding zeroes aligned to 8 bytes.
When deserializing a transaction, the reverse is done. If there are insufficient bytes or too many bytes, the transaction is invalid.
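For illustration, the integer rule above can be expressed in a few lines of plain Rust (a sketch only; this is not code from the Fuel libraries):

```rust
// Serialize a uint32 field per the rule above: big-endian, right-aligned to 8 bytes.
fn serialize_u32_field(value: u32) -> [u8; 8] {
    let mut out = [0u8; 8];
    out[4..].copy_from_slice(&value.to_be_bytes());
    out
}

fn main() {
    // 0x0102 becomes four zero padding bytes followed by its big-endian bytes.
    assert_eq!(serialize_u32_field(0x0102), [0, 0, 0, 0, 0, 0, 1, 2]);
}
```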
TransactionScript
enum ReceiptType : uint8 {
Call = 0,
Return = 1,
ReturnData = 2,
Panic = 3,
Revert = 4,
Log = 5,
LogData = 6,
Transfer = 7,
TransferOut = 8,
ScriptResult = 9,
MessageOut = 10,
}
name | type | description |
---|---|---|
gasPrice | uint64 | Gas price for transaction. |
gasLimit | uint64 | Gas limit for transaction. |
maturity | uint32 | Block until which tx cannot be included. |
scriptLength | uint16 | Script length, in instructions. |
scriptDataLength | uint16 | Length of script input data, in bytes. |
inputsCount | uint8 | Number of inputs. |
outputsCount | uint8 | Number of outputs. |
witnessesCount | uint8 | Number of witnesses. |
receiptsRoot | byte[32] | Merkle root of receipts. |
script | byte[] | Script to execute. |
scriptData | byte[] | Script input data (parameters). |
inputs | Input[] | List of inputs. |
outputs | Output[] | List of outputs. |
witnesses | Witness[] | List of witnesses. |
Given helper `len()` that returns the number of bytes of a field.
Transaction is invalid if:
- Any output is of type `OutputType.ContractCreated`
- `scriptLength > MAX_SCRIPT_LENGTH`
- `scriptDataLength > MAX_SCRIPT_DATA_LENGTH`
- `scriptLength * 4 != len(script)`
- `scriptDataLength != len(scriptData)`
IMPORTANT:
- When signing a transaction, `receiptsRoot` is set to zero.
- When verifying a predicate, `receiptsRoot` is initialized to zero.
- When executing a script, `receiptsRoot` is initialized to zero.
The receipts root `receiptsRoot` is the root of the binary Merkle tree of receipts. If there are no receipts, its value is set to the root of the empty tree, i.e. `0xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`.
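That constant is the SHA-256 digest of empty input, which you can confirm with a short standalone Rust snippet (assuming the `sha2` and `hex` crates; this is not part of the indexer itself):

```rust
use sha2::{Digest, Sha256};

fn main() {
    // SHA-256 over zero bytes of input yields the empty-tree root quoted above.
    let empty_root = hex::encode(Sha256::digest(b""));
    assert_eq!(
        empty_root,
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    );
}
```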
TransactionCreate
name | type | description |
---|---|---|
gasPrice | uint64 | Gas price for transaction. |
gasLimit | uint64 | Gas limit for transaction. |
maturity | uint32 | Block until which tx cannot be included. |
bytecodeLength | uint16 | Contract bytecode length, in instructions. |
bytecodeWitnessIndex | uint8 | Witness index of contract bytecode to create. |
storageSlotsCount | uint16 | Number of storage slots to initialize. |
inputsCount | uint8 | Number of inputs. |
outputsCount | uint8 | Number of outputs. |
witnessesCount | uint8 | Number of witnesses. |
salt | byte[32] | Salt. |
storageSlots | (byte[32], byte[32])[] | List of storage slots to initialize (key, value). |
inputs | Input[] | List of inputs. |
outputs | Output[] | List of outputs. |
witnesses | Witness[] | List of witnesses. |
Transaction is invalid if:
- Any input is of type `InputType.Contract`
- Any output is of type `OutputType.Contract` or `OutputType.Variable`
- More than one output is of type `OutputType.Change` with `asset_id` of zero
- Any output is of type `OutputType.Change` with non-zero `asset_id`
- It does not have exactly one output of type `OutputType.ContractCreated`
- `bytecodeLength * 4 > CONTRACT_MAX_SIZE`
- `tx.data.witnesses[bytecodeWitnessIndex].dataLength != bytecodeLength * 4`
- `bytecodeWitnessIndex >= tx.witnessesCount`
- The keys of `storageSlots` are not in ascending lexicographic order
- The computed contract ID (see below) is not equal to the `contractID` of the one `OutputType.ContractCreated` output
- `storageSlotsCount > MAX_STORAGE_SLOTS`
- The Sparse Merkle tree root of `storageSlots` is not equal to the `stateRoot` of the one `OutputType.ContractCreated` output
Creates a contract with contract ID as computed here.
TransactionMint
The transaction is created by the block producer and is not signed. Since it is not usable outside of block creation or execution, all fields must be fully set upon creation without any zeroing.
name | type | description |
---|---|---|
txPointer | TXPointer | The location of the Mint transaction in the block. |
outputsCount | uint8 | Number of outputs. |
outputs | Output[] | List of outputs. |
Transaction is invalid if:
- Any output is not of type `OutputType.Coin`
- Any two outputs have the same `asset_id`
- `txPointer` is zero or doesn't match the block.
TransactionStatus
pub enum TransactionStatus {
Failure {
block_id: String,
time: DateTime<Utc>,
reason: String,
},
SqueezedOut {
reason: String,
},
Submitted {
submitted_at: DateTime<Utc>,
},
Success {
block_id: String,
time: DateTime<Utc>,
},
}
`TransactionStatus` refers to the status of a `Transaction` in the Fuel network.
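A handler that wants to persist a transaction's outcome can collapse this enum into a small value. The block explorer example relies on an existing conversion (`tx.status.clone().into()`); the sketch below is only an illustration of the variants, and the numeric codes are arbitrary:

```rust
// Map each TransactionStatus variant to an arbitrary numeric code for storage.
fn status_code(status: &TransactionStatus) -> u64 {
    match status {
        TransactionStatus::Submitted { .. } => 1,
        TransactionStatus::Success { .. } => 2,
        TransactionStatus::Failure { .. } => 3,
        TransactionStatus::SqueezedOut { .. } => 4,
    }
}
```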
Call
use fuel_types::ContractId;
pub struct Call {
id: ContractId,
param1: u64,
}
- A `Call` receipt is generated whenever a function is called in a Sway contract.
- The `param1` field holds the function selector value as a hexadecimal.
- Read more about `Call` in the Fuel protocol ABI spec
Log
use fuel_types::ContractId;
pub struct Log {
pub contract_id: ContractId,
pub ra: u64,
pub rb: u64,
}
- A `Log` receipt is generated when calling `log()` on a non-reference type in a Sway contract.
  - Specifically `bool`, `u8`, `u16`, `u32`, and `u64`.
- The `ra` field includes the value being logged while `rb` may include a non-zero value representing a unique ID for the `log` instance.
- Read more about `Log` in the Fuel protocol ABI spec
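As a rough illustration, once a handler has matched a `Log` receipt (as the block explorer example does), it can persist the fields to an entity. `LogEntry` here is a hypothetical type you would define in your own GraphQL schema:

```rust
// Persist the interesting fields of a Log receipt into a hypothetical
// `LogEntry` entity that you would define in your GraphQL schema.
fn save_log(log: &Log) {
    let entry = LogEntry {
        id: log.rb,                   // rb can serve as an identifier for this log
        contract_id: log.contract_id,
        value: log.ra,                // ra carries the logged value
    };
    entry.save();
}
```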
LogData
use fuel_types::ContractId;
pub struct LogData {
pub contract_id: ContractId,
pub data: Vec<u8>,
pub rb: u64,
pub len: u64,
pub ptr: u64,
}
- A `LogData` receipt is generated when calling `log()` in a Sway contract on a reference type; this includes all types except non-reference types.
- The `data` field will include the logged value as a hexadecimal.
  - The `rb` field will contain a unique ID that can be used to look up the logged data type.
- Read more about `LogData` in the Fuel protocol ABI spec
MessageOut
use fuel_types::{MessageId, Bytes32, Address};
pub struct MessageOut {
pub message_id: MessageId,
pub sender: Address,
pub recipient: Address,
pub amount: u64,
pub nonce: Bytes32,
pub len: u64,
pub digest: Bytes32,
pub data: Vec<u8>,
}
- A `MessageOut` receipt is generated as a result of the `send_message()` Sway method in which a message is sent to a recipient address along with a certain amount of coins.
- The `data` field currently supports only a vector of non-reference types rather than something like a struct.
- Read more about `MessageOut` in the Fuel protocol ABI spec
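The block explorer example uses `MessageOut` receipts to keep track of accounts; a trimmed sketch of that pattern, reusing the `Account` entity from that example's schema:

```rust
// Record both parties of a MessageOut receipt as Account entities.
fn track_message(msg: &MessageOut) {
    Account { id: msg.sender, last_seen: 0 }.save();
    Account { id: msg.recipient, last_seen: 0 }.save();
}
```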
Return
use fuel_types::ContractId;
pub struct Return {
pub contract_id: ContractId,
pub val: u64,
pub pc: u64,
pub is: u64,
}
- A `Return` receipt is generated when returning a non-reference type in a Sway contract.
  - Specifically `bool`, `u8`, `u16`, `u32`, and `u64`.
- The `val` field includes the value being returned.
- Read more about `Return` in the Fuel protocol ABI spec
ReturnData
use fuel_types::ContractId;
pub struct ReturnData {
id: ContractId,
data: Vec<u8>,
}
- A `ReturnData` receipt is generated when returning a reference type in a Sway contract; this includes all types except non-reference types.
- The `data` field will include the returned value as a hexadecimal.
- Read more about `ReturnData` in the Fuel protocol ABI spec
Transfer
use fuel_types::{ContractId, AssetId};
pub struct Transfer {
pub contract_id: ContractId,
pub to: ContractId,
pub amount: u64,
pub asset_id: AssetId,
pub pc: u64,
pub is: u64,
}
- A `Transfer` receipt is generated when coins are transferred to a contract as part of a Sway contract.
- The `asset_id` field contains the asset ID of the transferred coins, as the FuelVM has built-in support for working with multiple assets.
  - The `pc` and `is` fields aren't currently used for anything, but are included for completeness.
- Read more about `Transfer` in the Fuel protocol ABI spec
TransferOut
use fuel_types::{ContractId, AssetId, Address};
pub struct TransferOut {
pub contract_id: ContractId,
pub to: Address,
pub amount: u64,
pub asset_id: AssetId,
pub pc: u64,
pub is: u64,
}
- A `TransferOut` receipt is generated when coins are transferred to an address rather than a contract.
- Every other field of the receipt works the same way as it does in the `Transfer` receipt.
- Read more about `TransferOut` in the Fuel protocol ABI spec
ScriptResult
pub struct ScriptResult {
pub result: u64,
pub gas_used: u64,
}
- A `ScriptResult` receipt is generated when a contract call resolves; that is, it's generated as a result of the `RET`, `RETD`, and `RVRT` instructions.
- The `result` field will contain a `0` for success, and a non-zero value otherwise.
- Read more about `ScriptResult` in the Fuel protocol ABI spec
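Since `result` is `0` on success, a handler can derive a simple success flag from it; a small sketch:

```rust
// A ScriptResult receipt reports success as result == 0.
fn script_succeeded(receipt: &ScriptResult) -> bool {
    receipt.result == 0
}
```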
A Fuel Indexer Project
Use Cases
The Fuel indexer project can currently be used in a number of different ways:
- as tooling to compile arbitrary indices
- as a standalone service
- as a part of a Fuel project, alongside other components of the Fuel ecosystem (e.g. Sway)
We'll describe these three different implementations below.
As tooling for compiling indices
The Fuel indexer provides functionality to make it easy to build and compile arbitrary indices by using `forc index`. For info on how to use indexer tooling to compile arbitrary indices, check out our Quickstart; additionally, you can read through our examples for a more in-depth exploration of how to compile indices.
As a standalone service
You can also start the Fuel indexer as a standalone binary that connects to a Fuel node to monitor the Fuel blockchain for new blocks and transactions. To do so, run the requisite database migrations, adjust the configuration to connect to a Fuel node, and start the service.
As part of a Fuel project
Finally, you can run the Fuel indexer as part of a project that uses other components of the Fuel ecosystem, such as Sway. The convention for a Fuel project layout including an indexer is as follows:
.
├── contracts
│   └── hello-contract
│       ├── Forc.toml
│       └── src
│           └── main.sw
├── frontend
│   └── index.html
└── indexer
    └── hello-index
        ├── Cargo.toml
        ├── hello_index.manifest.yaml
        ├── schema
        │   └── hello_index.schema.graphql
        └── src
            └── lib.rs
An Indexer Project at a Glance
Every Fuel indexer project requires three components:
- a Manifest describing index metadata
- a Schema containing models for the data you want to index
- an Execution Module which houses the logic for creating the aforementioned data models
Manifest
A manifest serves as the YAML configuration file for a given index. A proper manifest has the following structure:
namespace: fuel
identifier: index1
abi: path/to/my/contract-abi.json
contract_id: "0x39150017c9e38e5e280432d546fae345d6ce6d8fe4710162c2e3a95a6faff051"
graphql_schema: path/to/my/schema.graphql
start_block: 1564
module:
wasm: path/to/my/wasm_module.wasm
report_metrics: true
namespace
- Think of the `namespace` as an organization identifier. If you're familiar with, say, Java package naming, then think of an index's `namespace` as being its domain name. The `namespace` is unique to a given index operator -- i.e., index operators will not be able to support more than one `namespace` of the same name.
identifier
- The `identifier` field is used to (quite literally) identify the given index. If `namespace` is the organization/domain name, then think of `identifier` as the name of an index within that organization/domain.
- As an example, if a provided `namespace` is `"fuel"` and a provided `identifier` is `"index1"`, then the unique identifier for the given index will be `fuel.index1`.
abi
- The `abi` option is used to provide a link to the Sway JSON application binary interface (JSON ABI) that is generated when you build your Sway project. This generated ABI contains all types, type IDs, and logged types used in your Sway contract.
contract_id
- The `contract_id` specifies which particular contract you would like your index to subscribe to.
graphql_schema
- The `graphql_schema` field contains the file path that points to the GraphQL schema for the given index. This schema file holds the structures of the data that will eventually reside in your database. You can read more about the format of the schema file here.
Important: The objects defined in your GraphQL schema are called 'entities'. These entities are what will eventually be stored in the database.
start_block
- The particular start block after which you'd like your indexer to start indexing events.
module
- The `module` field contains a file path that points to code that will be run as an executor inside of the indexer.
- There are two available options for modules/execution: `wasm` and `native`.
  - When specifying a `wasm` module, the provided path must lead to a compiled WASM binary.
Important: At this time, `wasm` is the preferred method of execution.
report_metrics
- Whether or not to report Prometheus metrics to the Fuel backend
GraphQL Schema
The GraphQL schema is a required component of the Fuel indexer. When data is indexed into the database, the actual values that are persisted to the database will be values created using the data structures defined in the schema.
In its most basic form, a Fuel indexer GraphQL schema should have a `schema` definition that contains a defined query root. The rest of the implementation is up to you. Here's an example of a well-formed schema:
schema {
query: QueryRoot
}
type QueryRoot {
thing1: FirstThing
thing2: SecondThing
}
type FirstThing {
id: ID!
value: UInt8!
}
type SecondThing {
id: ID!
other_value: UInt8!
timestamp: Timestamp!
}
The types you see above (e.g., `ID`, `UInt8`, etc.) are Fuel abstractions that were created to more seamlessly integrate with the Fuel VM and are not native to GraphQL. A deeper explanation of these types can be found in the Types section.
Important: It is up to developers to manage their own unique IDs for each type, meaning that a data structure's `ID` field needs to be manually generated prior to saving it to the database. This generation can be as simple or complex as you want in order to fit your particular situation; the only requirement is that the developer implement their own custom generation. Examples can be found in the Block Explorer and Hello World sections.
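Each type you declare in the schema becomes an entity that your index module can construct and persist, following the same pattern as the Hello World and Block Explorer examples. A sketch using the `SecondThing` type above (assuming the generated struct mirrors the schema's fields):

```rust
// Inside an #[indexer] module: build and save an entity generated from the
// schema above. The ID is derived from the block height purely for
// illustration; see the note on developer-managed IDs.
fn index_second_thing(block: BlockData) {
    let thing = SecondThing {
        id: block.height,
        other_value: 42,
        timestamp: block.time,
    };
    thing.save();
}
```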
WASM Modules
- WebAssembly (WASM) modules are compiled binaries that are registered into a Fuel indexer at runtime. The WASM bytes are read in by the indexer and executors are created which will implement blocking calls to the WASM runtime.
Usage
To compile your index code to WASM, you'll first need to install the `wasm32-unknown-unknown` target platform through `rustup`, if you haven't done so already.
rustup target add wasm32-unknown-unknown
After that, you can compile your index code by navigating to its root folder and building it. An example of this can be found below:
cd /my/index-lib && cargo build --release
Notes on WASM
There are a few points that Fuel indexer users should know when using WASM:
- WASM modules are only used if the execution mode specified in your manifest file is `wasm`.
- Developers should be aware of what things may not work off-the-shelf in a module: file I/O, thread spawning, and anything that depends on system libraries. This is due to the technological limitations of WASM as a whole; more information can be found here.
- As of this writing, there is a small bug in newly built Fuel indexer WASM modules that produces a WASM runtime error due to an errant upstream dependency. For now, a quick workaround requires the use of `wasm-snip` to remove the errant symbols from the WASM module. More info can be found in the related script here.
- Users on Apple Silicon macOS systems may experience trouble when trying to build WASM modules due to its `clang` binary not supporting WASM targets. If encountered, you can install a binary with better support from Homebrew (`brew install llvm`) and instruct `rustc` to leverage it by setting the following environment variables:
AR=/opt/homebrew/opt/llvm/bin/llvm-ar
CC=/opt/homebrew/opt/llvm/bin/clang
Types
Below is a mapping of GraphQL schema types to their database equivalents.
| Sway Type | GraphQL Schema Type | Postgres Type |
|---|---|---|
| u64 | ID | bigint primary key |
| b256 | Address | varchar(64) |
| str[4] | Bytes4 | varchar(16) |
| str[8] | Bytes8 | varchar(64) |
| str[32] | Bytes32 | varchar(64) |
| str[32] | AssetId | varchar(64) |
| b256 | ContractId | varchar(64) |
| str[32] | Salt | varchar(64) |
| u32 | UInt4 | integer |
| u64 | UInt8 | bigint |
| i64 | Timestamp | timestamp |
| str[] | Blob | bytes |
| str[32] | MessageId | varchar(64) |
| bool | Boolean | bool |
| | Json | json |
| | Charfield | varchar(255) |
| | Blob | varchar(10485760) |
Example
Let's define an `Event` struct in a Sway contract:
struct Event {
id: u64,
address: Address,
block_height: u64,
}
The corresponding GraphQL schema to mirror this `Event` struct would resemble:
type Event {
id: ID!
account: Address!
block_height: UInt8!
}
And finally, this GraphQL schema will generate the following Postgres schema:
Table "schema.event"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------------+-------------+-----------+----------+---------+----------+-------------+--------------+-------------
id | bigint | | not null | | plain | | |
block_height | bigint | | not null | | plain | | |
address | varchar(64) | | not null | | plain | | |
object | bytea | | not null | | extended | | |
Indexes:
"event_pkey" PRIMARY KEY, btree (id)
Access method: heap
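To round out the example, the index module side might construct and save the `Event` entity roughly like this (a sketch only; field names follow the GraphQL type above, and the placeholder values stand in for data you would normally pull from a receipt):

```rust
// Build the Event entity declared in the GraphQL schema above and persist it.
// Address::default() is only a placeholder for an address taken from contract data.
fn index_event(block: BlockData) {
    let event = Event {
        id: block.height,            // developer-managed unique ID (illustrative)
        account: Address::default(),
        block_height: block.height,
    };
    event.save();
}
```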
GraphQL Schema
The GraphQL schema is a required component of the Fuel indexer. When data is indexed into the database, the actual values that are persisted to the database will be values created using the data structures defined in the schema.
In its most basic form, a Fuel indexer GraphQL schema should have a schema
definition that contains a defined query root. The rest of the implementation is up to you. Here's an example of a well-formed schema:
schema {
query: QueryRoot
}
type QueryRoot {
thing1: FirstThing
thing2: SecondThing
}
type FirstThing {
id: ID!
value: UInt8!
}
type SecondThing {
id: ID!
other_value: UInt8!
timestamp: Timestamp!
}
The types you see above (e.g., ID
, UInt8
, etc) are Fuel abstractions that were created to more seamlessly integrate with the Fuel VM and are not native to GraphQL. A deeper explanation on these
types can be found in the Types section.
Important: It is up to developers to manage their own unique IDs for each type, meaning that a data structure's
ID
field needs to be manually generated prior to saving it to the database. This generation can be as simple or complex as you want in order to fit your particular situation; the only requirement is that the developer implement their own custom generation. Examples can be found in the Block Explorer and Hello World sections.
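As one possible approach to such custom id generation, the sketch below hashes a couple of identifying fields down to a u64 using only the Rust standard library. The field names and values are purely illustrative and nothing here depends on indexer APIs; any scheme that yields unique u64 values works.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a u64 id from fields that uniquely identify the entity.
// The inputs (an address string and a block height) are illustrative only.
fn derive_id(address: &str, block_height: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    address.hash(&mut hasher);
    block_height.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let id = derive_id("0xabc123", 42);
    println!("generated id: {id}");
}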
Directives
Per GraphQL: A directive is an identifier preceded by a @ character, optionally followed by a list of named arguments, which can appear after almost any form of syntax in the GraphQL query or schema languages.
As of this writing, the list of supported Fuel GraphQL schema directives includes:
- @indexed
- @unique
- @join
@indexed
The @indexed directive adds an index to the underlying database column for the indicated field of that type. Generally, an index is a data structure that allows you to quickly locate data without having to search each row in a database table.
Using our Library and Book example from the Foreign Keys section -- given the following schema:
schema {
query: QueryRoot
}
type QueryRoot {
book: Book
library: Library
}
type Book {
id: ID!
name: Bytes8! @indexed
}
type Library {
id: ID!
book: Book!
}
In this example, a single BTREE INDEX
constraint will be created on the book
table's name
column, which allows for faster lookups on that field.
Important: At the moment, index constraint support is limited to BTREE in Postgres; ON DELETE and ON UPDATE actions are not supported.
@unique
The @unique
directive adds a UNIQUE
database constraint to the underlying database column for the indicated field of that type. A constraint specifies a rule for the data in a table and can be used to limit the type of data that can be placed in the table. In the case of a column with a UNIQUE
constraint, all values in the column must be different.
schema {
query: QueryRoot
}
type QueryRoot {
book: Book
library: Library
}
type Book {
id: ID!
name: Bytes8! @unique
}
type Library {
id: ID!
book: Book!
}
A UNIQUE
constraint will be created on the book
table's name
column, ensuring that no books can share the same name.
Important: When using explicit or implicit foreign keys, it is required that the referenced column in your foreign key relationship be unique. ID types are unique by default, but all other types will have to be explicitly specified as unique via the @unique directive.
@join
The @join
directive is used to relate a field in one type to others by referencing fields in another type. You can think of it as a link between two tables in your database. The field in the referenced type is called a foreign key and it is required to be unique.
schema {
query: QueryRoot
}
type QueryRoot {
book: Book
library: Library
}
type Book {
id: ID!
name: Bytes8! @unique
}
type Library {
id: ID!
book: Book! @join(on:name)
}
A foreign key constraint will be created on library.book
that references book.name
, which relates the Book
s in a Library
to the underlying Book
table.
GraphQL API Server
- The fuel-indexer-api-server crate of the Fuel indexer contains a standalone GraphQL API server that acts as a queryable endpoint on top of the database.
- Note that the main fuel-indexer binary of the indexer project also contains a queryable GraphQL API endpoint.
The fuel-indexer-api-server crate offers a standalone GraphQL API endpoint, whereas the GraphQL endpoint offered in fuel-indexer is bundled with other Fuel indexer functionality (e.g., execution, handling, data-layer construction, etc.).
Usage
To run the standalone Fuel indexer GraphQL API server using a configuration file:
fuel-indexer-api-server run --config config.yaml
In the above example, config.yaml
is based on the default service configuration file.
Options
USAGE:
fuel-indexer-api-server run [OPTIONS]
OPTIONS:
-c, --config <CONFIG>
API server config file.
--database <DATABASE>
Database type. [default: postgres] [possible values: postgres]
--graphql-api-host <GRAPHQL_API_HOST>
GraphQL API host. [default: 127.0.0.1]
--graphql-api-port <GRAPHQL_API_PORT>
GraphQL API port. [default: 29987]
-h, --help
Print help information
--log-level <LOG_LEVEL>
Log level passed to the Fuel Indexer service. [default: info] [possible values: info,
debug, error, warn]
--metrics <metrics>
Use Prometheus metrics reporting. [default: true]
--postgres-database <POSTGRES_DATABASE>
Postgres database.
--postgres-host <POSTGRES_HOST>
Postgres host.
--postgres-password <POSTGRES_PASSWORD>
Postgres password.
--postgres-port <POSTGRES_PORT>
Postgres port.
--postgres-user <POSTGRES_USER>
Postgres username.
--run-migrations <run-migrations>
Run database migrations before starting service. [default: true]
-V, --version
Print version information
Foreign Keys
- The Fuel indexer service supports foreign key constraints and relationships using a combination of GraphQL schema and a database.
- There are two types of uses for foreign keys - implicit and explicit.
IMPORTANT:
Implicit foreign keys do not require a @join directive. When using implicit foreign key references, merely add the referenced object as a field type (shown below). A lookup will automagically be done to add a foreign key constraint using this object's id field.
Note that implicit foreign key relationships only use the id field on the referenced table. If you plan to use implicit foreign keys, the object being referenced must have an id field.
In contrast, explicit foreign keys do require a @join directive. Explicit foreign key references work similarly to implicit foreign keys; however, when using explicit foreign key references, you must add a @join directive after your object type. This @join directive includes the field in your foreign object that you would like to reference (shown below).
Let's learn how to use each foreign key type by looking at some GraphQL schema examples.
Usage
Implicit foreign keys
schema {
query: QueryRoot
}
type QueryRoot {
book: Book
library: Library
}
type Book {
id: ID!
name: Bytes8!
}
type Library {
id: ID!
book: Book!
}
Implicit foreign key breakdown
Given the above schema, two entities will be created: a Book
entity, and a Library
entity. As you can see, we add the Book
entity as an attribute on the Library
entity, thus conveying that we want a one-to-many or one-to-one relationship between Library
and Book
. This means that for a given Library
, we may also fetch one or many Book
entities. It also means that the column library.book
will be an integer type that references book.id
.
Explicit foreign keys
schema {
query: QueryRoot
}
type QueryRoot {
book: Book
library: Library
}
type Book {
id: ID!
name: Bytes8! @unique
}
type Library {
id: ID!
book: Book! @join(on:name)
}
Explicit foreign key breakdown
For the most part, this works the same way as implicit foreign key usage. However, as you can see, instead of implicitly using book.id
as the reference column for our Book
object, we're instead explicitly specifying that we want book.name
to serve as our foreign key. Also, please note that since we're using book.name
in our foreign key constraint, that column is required to be unique (via the @unique
directive).
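To make the difference between the two styles concrete, here is a plain-Rust sketch (no indexer APIs involved) of what the generated library.book column ends up holding in each case; the struct definitions and values are illustrative only.
// Book maps to the book table: id -> bigint primary key, name -> varchar.
struct Book {
    id: u64,
    name: String,
}

// Implicit foreign key: library.book stores the referenced book.id.
struct ImplicitLibrary {
    id: u64,
    book: u64,
}

// Explicit foreign key with @join(on:name): library.book references
// book.name, which is why that column must be marked @unique.
struct ExplicitLibrary {
    id: u64,
    book: String,
}

fn main() {
    let book = Book { id: 1, name: "Fuel Book".to_string() };
    let implicit = ImplicitLibrary { id: 1, book: book.id };
    let explicit = ExplicitLibrary { id: 2, book: book.name.clone() };
    println!("{} {} {}", book.name, implicit.book, explicit.book);
}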
ID Types
There are a few important things related to the use of IDs.
Every GraphQL type defined in your schema file is required to have an id field.
- This field must be called id
- The type of this id field must be a u64
- You typically want to use the ID type for these id fields
Why must every type have an id field?
Since the Fuel indexer uses WASM runtimes to index events, an FFI is needed to call in and out of the runtime. When these calls out of the runtime are made, a pointer is passed back to the indexer service to indicate where the id of the type/object/entity being saved is.
Is this liable to change in the future?
Yes, ideally we'd like IDs to be of any type, and we plan to work towards this in the future.
forc index
forc index
is the recommended method for end users to interact with the Fuel indexer. After you have installed fuelup
, you can run the forc index help
command in your terminal to view the available commands.
forc index help
USAGE:
forc-index <SUBCOMMAND>
OPTIONS:
-h, --help Print help information
-V, --version Print version information
SUBCOMMANDS:
build Build an index
check Get status checks on all indexer components
deploy Deploy an index asset bundle to a remote or locally running indexer server
help Print this message or the help of the given subcommand(s)
init Create a new indexer project in the current directory
new Create a new indexer project in a new directory
remove Stop and remove a running index
start Start a local indexer service
forc index init
Create a new index project at the provided path. If no path is provided, the current working directory will be used.
forc index init --namespace fuel
USAGE:
forc-index init [OPTIONS]
OPTIONS:
-h, --help Print help information
--name <NAME> Name of index.
--namespace <NAMESPACE> Namespace in which index belongs.
--native Whether to initialize an index with native execution enabled.
-p, --path <PATH> Path at which to create index.
forc index new
Create a new index project at the provided path.
forc index new --namespace fuel --path /home/fuel/projects
USAGE:
forc-index new [OPTIONS] <PATH>
ARGS:
<PATH> Path at which to create index
OPTIONS:
-h, --help Print help information
--name <NAME> Name of index.
--namespace <NAMESPACE> Namespace in which index belongs.
--native Whether to initialize an index with native execution enabled.
forc index check
Check to see which indexer components you have installed.
forc index check
USAGE:
forc-index check [OPTIONS]
OPTIONS:
--graphql-api-port <GRAPHQL_API_PORT>
Port at which to detect indexer service API is running. [default: 29987]
-h, --help
Print help information
--url <URL>
URL at which to find indexer service. [default: http://127.0.0.1:29987]
You can expect the command output to look something like this example in which the requisite components are installed but the indexer service is not running:
❯ forc index check
⛔️ Could not connect to indexer service: error sending request for url (http://127.0.0.1:29987/api/health): error trying to connect: tcp connect error: Connection refused (os error 61)
+--------+------------------------+----------------------------------------------------------------------------+
| Status | Component              | Details                                                                    |
+--------+------------------------+----------------------------------------------------------------------------+
|   ✅   | fuel-indexer binary    | Found 'fuel-indexer' at '/Users/me/.fuelup/bin/fuel-indexer'               |
+--------+------------------------+----------------------------------------------------------------------------+
|   ⛔️   | fuel-indexer service   | Failed to detect a locally running fuel-indexer service at Port(29987).    |
+--------+------------------------+----------------------------------------------------------------------------+
|   ✅   | psql                   | Found 'psql' at '/usr/local/bin/psql'                                      |
+--------+------------------------+----------------------------------------------------------------------------+
|   ✅   | fuel-core              | Found 'fuel-core' at '/Users/me/.fuelup/bin/fuel-core'                     |
+--------+------------------------+----------------------------------------------------------------------------+
|   ✅   | docker                 | Found 'docker' at '/usr/local/bin/docker'                                  |
+--------+------------------------+----------------------------------------------------------------------------+
|   ✅   | fuelup                 | Found 'fuelup' at '/Users/me/.fuelup/bin/fuelup'                           |
+--------+------------------------+----------------------------------------------------------------------------+
|   ✅   | wasm-snip              | Found 'wasm-snip' at '/Users/me/.cargo/bin/wasm-snip'                      |
+--------+------------------------+----------------------------------------------------------------------------+
forc index build
Build an index
forc index build --release --manifest my_index.manifest.yaml
USAGE:
forc-index build [OPTIONS] --manifest <MANIFEST>
OPTIONS:
-h, --help Print help information
--locked Ensure that the Cargo.lock file is up-to-date.
-m, --manifest <MANIFEST> Path of index manifest being built.
--native Building for native execution.
--profile <PROFILE> Build with the given profile.
-r, --release Build optimized artifacts with the release profile.
--target <TARGET> Target at which to compile.
-v, --verbose Verbose output.
forc index start
Start a local Fuel Indexer service.
forc index start --background
USAGE:
forc-index start [OPTIONS]
OPTIONS:
--background Whether to run the Fuel Indexer in the background.
--bin <BIN> Path to the fuel-indexer binary.
--config <CONFIG> Path to the config file used to start the Fuel Indexer.
-h, --help Print help information
--log-level <LOG_LEVEL> Log level passed to the Fuel Indexer service. [default: info]
[possible values: info, debug, error, warn]
forc index deploy
Deploy a given index project to a particular endpoint
forc index deploy --url https://index.swayswap.io --manifest my_index.manifest.yaml
USAGE:
forc-index deploy [OPTIONS] --manifest <MANIFEST>
OPTIONS:
--auth <AUTH> Authentication header value.
-h, --help Print help information
--manifest <MANIFEST> Path of the index manifest to upload.
--url <URL> URL at which to upload index assets. [default:
http://127.0.0.1:29987]
forc index remove
Stop and remove a running index
forc index remove --url https://index.swayswap.io --manifest my_index.manifest.yaml
USAGE:
forc-index remove [OPTIONS] --manifest <MANIFEST>
OPTIONS:
--auth <AUTH> Authentication header value.
-h, --help Print help information
--manifest <MANIFEST> Path of the index manifest to be parsed.
--url <URL> URL at which to upload index assets. [default:
http://127.0.0.1:29987]
For Contributors
Thanks for your interest in contributing to the Fuel indexer! Below we've compiled a list of sections that you may find useful as you work on a potential contribution:
Dependencies
fuelup
- We use fuelup in order to get the binaries produced by services in the Fuel ecosystem. Fuelup will install binaries related to the Fuel node, the Fuel indexer, the Fuel orchestrator (forc), and other components.
- fuelup can be downloaded here.
docker
- We use Docker to produce reproducible environments for users that may be concerned with installing components with large sets of dependencies (e.g. Postgres).
- Docker can be downloaded here.
Database
At this time, the Fuel indexer requires the use of a database. We currently support a single database option: Postgres. PostgreSQL is a database solution with a complex feature set and requires a database server.
PostgreSQL
Note: The following explanation is for demonstration purposes only. A production setup should use secure users, permissions, and passwords.
On macOS systems, you can install PostgreSQL through Homebrew. If it isn't present on your system, you can install it according to the instructions. Once installed, you can add PostgreSQL to your system by running brew install postgresql
. You can then start the service through brew services start postgresql
. You'll need to create a database for your index data, which you can do by running createdb [DATABASE_NAME]
. You may also need to create the postgres
role; you can do so by running createuser -s postgres
.
For Linux-based systems, the installation process is similar. First, you should install PostgreSQL according to your distribution's instructions. Once installed, there should be a new postgres
user account; you can switch to that account by running sudo -i -u postgres
. After you have switched accounts, you may need to create a postgres
database role by running createuser --interactive
. You will be asked a few questions; the name of the role should be postgres
and you should elect for the new role to be a superuser. Finally, you can create a database by running createdb [DATABASE_NAME]
.
In either case, your PostgreSQL database should now be accessible at postgres://postgres@127.0.0.1:5432/[DATABASE_NAME]
.
SQLx
- After setting up your database, you should install sqlx-cli in order to run migrations for your indexer service.
- You can do so by running cargo install sqlx-cli --features postgres.
- Once installed, you can run the migrations (shown in the Run migrations section below) after changing DATABASE_URL to match your setup.
Building from Source
Clone repository
git clone git@github.com:FuelLabs/fuel-indexer.git && cd fuel-indexer/
Run migrations
Postgres migrations
cd packages/fuel-indexer-database/postgres
DATABASE_URL=postgres://postgres@localhost sqlx migrate run
Start the service
cargo run --bin fuel-indexer run
You can also start the service with a fresh local node for development purposes:
cargo run --features local-node --bin fuel-indexer run
If no configuration file or other options are passed, the service will default to a
postgres://postgres@localhost
database connection.
Testing
Fuel indexer tests are currently broken out by a database feature flag. In order to run tests with a Postgres backend, use --features postgres.
Further, the indexer uses end-to-end (E2E) tests. In order to trigger these end-to-end tests, you'll want to use the e2e feature flag: --features e2e.
All end-to-end tests also require the use of a database feature. For example, to run the end-to-end tests with a Postgres backend, use --features e2e,postgres.
Default tests
cargo test --locked --workspace --all-targets
End-to-end tests
cargo test --locked --workspace --all-targets --features e2e,postgres
trybuild tests
For tests related to the meta-programming used in the Fuel indexer, we use trybuild.
RUSTFLAGS='-D warnings' cargo test -p fuel-indexer-macros --locked
Contributing to Fuel Indexer
Thanks for your interest in contributing to Fuel Indexer! This document outlines some of the conventions for building, running, and testing Fuel Indexer.
Fuel Indexer has many dependent repositories. If you need any help or mentoring getting started, understanding the codebase, or anything else, please ask on our Discord.
Code Standards
We use an RFC process to maintain our code standards. They currently live in the RFC repo: https://github.com/FuelLabs/rfcs/tree/master/text/code-standards
Building and setting up a development workspace
The Fuel indexer is mostly written in Rust, but includes components written in C++ (RocksDB).
We are currently using the latest Rust stable toolchain to build the project.
However, for rustfmt we use the Rust nightly toolchain, because it provides more code style features (you can check rustfmt.toml).
Prerequisites
To build the Fuel indexer you'll need to at least have the following installed:
- git - version control
- rustup - Rust installer and toolchain manager
- clang - used to build system libraries (required for RocksDB)
- postgresql/libpq - used for the Postgres backend
See the README.md for platform specific setup steps.
Getting the repository
The following instructions assume you are working from within this repository:
git clone https://github.com/FuelLabs/fuel-indexer
cd fuel-indexer
Configuring your Rust toolchain
rustup
is the official toolchain manager for Rust.
We use some additional components such as clippy and rustfmt; to install them, run:
rustup component add clippy
rustup component add rustfmt
Fuel Indexer also uses a few other tools installed via cargo:
cargo install sqlx-cli
cargo install wasm-snip
Building and testing
Fuel Indexer's two primary crates are fuel-indexer and fuel-indexer-api-server.
You can build Fuel Indexer:
cargo build -p fuel-indexer -p fuel-indexer-api-server
This command will run cargo build and also dump the latest schema into the /assets/ folder.
Linting is done using rustfmt and clippy, which are each separate commands:
cargo fmt --all --check
cargo clippy --all-features --all-targets -- -D warnings
The test suite follows the Rust cargo standards. The GraphQL service will be instantiated by Tower and will emulate a server/client structure.
Testing is simply done using Cargo:
RUSTFLAGS='-D warnings' SQLX_OFFLINE=1 cargo test --locked --all-targets --all-features
Build Options
For optimal performance, we recommend using native builds. The generated binary will be optimized for your CPU and may contain instructions supported only by your hardware.
To build, run:
cargo build --release --bin fuel-indexer
The generated binary will be located in ./target/release/fuel-indexer
Build issues
- Due to dependencies on external components such as RocksDB, build times can be large without caching. We currently use sccache.
- You can also build the fuel-indexer crate without default features:
cargo build -p fuel-indexer --no-default-features
Contribution flow
This is a rough outline of what a contributor's workflow looks like:
- Make sure what you want to contribute is already tracked as an issue. We may discuss the problem and solution in the issue. ⚠️ DO NOT submit PRs that do not have an associated issue ⚠️
- Create a Git branch from where you want to base your work.
- Most work is usually branched off of master
- Give your branch a name related to the work you're doing
- Write code, add test cases, and commit your work.
- Run tests and make sure all tests pass.
- Your commit message should be formatted as [commit type]: [short commit blurb]
  - Examples:
    - If you fixed a bug, your message is fix: database locking issue
    - If you added new functionality, your message would be enhancement: i add something super cool
    - If you just did a chore, your message is chore: i did something not fun
  - Keeping commit messages short and consistent helps users parse release notes
- Push your branch up to GitHub, then (on the right-hand side of the GitHub UI):
- Assign yourself as the owner of the PR
- Add any and all necessary labels to your PR
- Link the issue your PR solves, to your PR
- If you are part of the FuelLabs Github org, please open a PR from the repository itself.
- Otherwise, push your changes to a branch in your fork of the repository and submit a pull request.
- Make sure to mention the issue, created in step 1, in the commit message.
- Your PR will be reviewed and some changes may be requested.
- Once you've made changes, your PR must be re-reviewed and approved.
- If the PR becomes out of date, you can use GitHub's 'update branch' button.
- If there are conflicts, you can merge and resolve them locally. Then push to your PR branch.
- Any changes to the branch will require a re-review.
- Our CI (GitHub Actions) automatically tests all authorized pull requests.
- Use Github to merge the PR once approved.
Commit categories
- bug: If fixing broken functionality
- enhancement: If adding new functionality
- chore: If finishing valuable work (that's no fun!)
- testing: If only updating/writing tests
- docs: If just updating docs
- feat: If adding a non-trivial new feature
- There will be categories not covered in this doc - use your best judgement!
Thanks for your contributions!
Finding something to work on
For beginners, we have prepared many suitable tasks for you. Checkout our Good First Issues for a list.
If you are planning something that relates to multiple components or changes current behaviors, make sure to open an issue to discuss with us before continuing.
Release Schedule
Major releases
- E.g., v2.0.0 -> v3.0.0
- Major releases of large features and breaking changes
- Cadence: TBD - as needed
Minor releases
- E.g., v0.3.0 -> v0.4.0
- General releases of new functionality, fixes, and some breaking changes
- Cadence: Every other week, Tuesday morning 11am EST
Patch releases
- E.g., v0.1.3 -> v0.1.4
- Releases for bug fixes and time sensitive improvements
- Cadence: Ad-hoc as needed throughout the week