The `.env` file must be set up first.
If you haven't done this during the 'Quick Start' stage, copy and rename the `.env.example` reference file, or create an empty `.env` file and fill it with the following placeholder data:
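A minimal sketch of what the `.env` file might contain, assembled from the defaults and examples described below (the private key value is a placeholder you must replace with your own; the repository and model name are the examples used in this guide, not required values):

```shell
# Placeholder .env content — replace values to match your setup.

# Private key of an EVM-compatible Web3 account (never commit this!).
FT_ACCOUNT_PRIVATE_KEY=your_private_key_here

# Node listener port (default 42042).
FT_NODE_LISTENER_PORT=42042

# Capsule HTTP port (default 42442); must differ from the listener port.
FT_CAPSULE_HTTP_PORT=42442

# Hugging Face repository ID and GGUF model file name.
FT_CAPSULE_LLM_HF_REPO=Fortytwo-Network/Strand-Rust-Coder-14B-v1-GGUF
FT_CAPSULE_LLM_HF_MODEL_NAME=Fortytwo_Strand-Rust-Coder-14B-v1-Q4_K_M.gguf
```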
Environment Parameters
FT_ACCOUNT_PRIVATE_KEY
Paste in the private key of an EVM-compatible Web3 account.
- Your Web3 account must be unique for each node that you run simultaneously. This applies to both Inference and Relay nodes. For example, if you intend to run 8 nodes on an 8×GPU rig at the same time, you'll need 8 private keys.
- Keep your private key secure. Never share it publicly or commit it to version control systems.
FT_NODE_LISTENER_PORT
Default port is 42042. Change it if necessary.
- If another process in the system is using this port (for example, a Relay Node running on the same machine), change this port for the Inference Node.
- If running several Inference Nodes on the same machine at the same time, each must have its own unique port defined.
- The defined ports must not be occupied by other processes in the system.
- It must be different from FT_RPC_SERVICE_PORT.
FT_CAPSULE_HTTP_PORT
Default port is 42442. Change it if necessary.
- If another process in the system is using this port (for example, a Relay Node running on the same virtual machine), change this port for the Inference Node.
- It must be different from FT_NODE_LISTENER_PORT.
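To confirm that a configured port is actually free before starting a node, one quick local check is to try binding to it. The snippet below is an illustrative sketch, not part of the official tooling:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing on this machine is already listening on `port`."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets the bind succeed for ports lingering in TIME_WAIT,
        # but still fails if another process is actively listening.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Check the default ports from the .env before launching the node.
for name, port in [("FT_NODE_LISTENER_PORT", 42042), ("FT_CAPSULE_HTTP_PORT", 42442)]:
    print(f"{name}={port}: {'free' if port_is_free(port) else 'already in use'}")
```

If a port reports as in use, either stop the conflicting process or pick a different, unique port in the `.env` file.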
FT_CAPSULE_LLM_HF_REPO
Hugging Face repository ID.
For example: `Fortytwo-Network/Strand-Rust-Coder-14B-v1-GGUF` for this repository: https://huggingface.co/Fortytwo-Network/Strand-Rust-Coder-14B-v1-GGUF
FT_CAPSULE_LLM_HF_MODEL_NAME
The model file name within the Hugging Face repository.
For example: `Fortytwo_Strand-Rust-Coder-14B-v1-Q4_K_M.gguf` from this repository: https://huggingface.co/Fortytwo-Network/Strand-Rust-Coder-14B-v1-GGUF
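As a quick sanity check on these two values, you can verify that the repository ID has the `owner/name` shape and that the model name looks like a single GGUF file. The helper below is purely illustrative and not part of the node software:

```python
import re

def check_hf_settings(repo_id: str, model_name: str) -> list[str]:
    """Return a list of problems found in the two Hugging Face settings (empty if OK)."""
    problems = []
    # Repository IDs on Hugging Face have the form "owner/repo-name".
    if not re.fullmatch(r"[\w.-]+/[\w.-]+", repo_id):
        problems.append(f"FT_CAPSULE_LLM_HF_REPO does not look like 'owner/name': {repo_id!r}")
    # The model name should be a single .gguf file name, not a path or URL.
    if "/" in model_name or not model_name.endswith(".gguf"):
        problems.append(f"FT_CAPSULE_LLM_HF_MODEL_NAME should be a .gguf file name: {model_name!r}")
    return problems

print(check_hf_settings(
    "Fortytwo-Network/Strand-Rust-Coder-14B-v1-GGUF",
    "Fortytwo_Strand-Rust-Coder-14B-v1-Q4_K_M.gguf",
))  # → []
```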