| | Fortytwo App | Fortytwo Container | Fortytwo CLI |
|---|---|---|---|
| Runner tier | Beginner | Docker user | Console user |
| Best for | Personal device | Server/VM | Personal device |
| Interaction type | GUI | Console | Guided console |
| OS | | | |
| Nvidia GPU | Supported | Required | Supported |
| Apple Silicon | Supported | ✖ | Supported |
| Features | | | |
| Manual Mode | ✓ | ✓ | ✓ |
| Auto Mode | ✓ | ✖ | ✖ |
| Editable KV Cache | ✓ | ✖ | ✓ |
| Multi-GPU | ✓ | ✓ | ✓ |
| Split GPUs | ✖ | ✓ | ✖ |
| Load Custom GGUF | ✓ | ✖ | ✓ |
Features Explained
Manual Mode: Manual model selection
This mode lets you maximize your node’s potential, but it requires knowledge and commitment:
- You choose which model your node runs.
- Performance depends on your choices.
- This mode is intended for noderunners who are familiar with language models.
Auto Mode: Automatic model selection
Your node does all the work:
- Models are selected automatically.
- Performance is balanced.
- You don’t need to know anything about language models.
Editable KV Cache
By default, our applications use an adaptive KV Cache size so the node can adapt to your hardware. When you launch the node, it analyses your available resources and reserves the following (a rough sketch of this rule follows the list):
- GPU-based systems (primarily Windows, Linux) — reserves 90% of idle VRAM.
- ARM-based systems with unified memory (primarily macOS) — reserves 80 to 85% of leftover RAM.
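The reservation rule above amounts to a simple calculation. Here is a minimal sketch in Python, assuming the idle VRAM or leftover unified RAM has already been measured in GB; the function name and inputs are illustrative and not Fortytwo's actual implementation:

```python
# Illustrative sketch of the adaptive reservation rule described above.
# The fractions mirror the documentation; the function and its inputs are
# assumptions for illustration only.

def reserved_memory_gb(idle_vram_gb: float | None, free_unified_ram_gb: float | None) -> float:
    """Estimate how much memory the node would reserve on launch."""
    if idle_vram_gb is not None:
        # GPU-based systems (Windows, Linux): ~90% of idle VRAM.
        return 0.90 * idle_vram_gb
    if free_unified_ram_gb is not None:
        # ARM systems with unified memory (macOS): ~80-85% of leftover RAM.
        return 0.80 * free_unified_ram_gb
    raise ValueError("No usable memory reported")

print(reserved_memory_gb(idle_vram_gb=24.0, free_unified_ram_gb=None))   # 21.6
print(reserved_memory_gb(idle_vram_gb=None, free_unified_ram_gb=32.0))   # 25.6
```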
Multi-GPU
On systems with several GPUs installed, or when several GPUs are allocated to a single process, the node utilizes all of the available resources across those GPUs. For example: if your system is equipped with 2 GPUs, each with 24 GB of VRAM, your node treats this as a total of 48 GB of VRAM and can run bigger models than a single GPU would allow.
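One common way to use a pool like this is to place a model's layers across the GPUs in proportion to their VRAM. This is a minimal sketch of that idea, under the assumption of a proportional split; Fortytwo's actual placement logic may differ:

```python
# Sketch: assign a model's layers to GPUs proportionally to their VRAM.
# The split strategy here is an assumption for illustration only.

def split_layers(num_layers: int, per_gpu_vram_gb: list[float]) -> list[int]:
    """Assign layer counts to each GPU proportionally to its VRAM."""
    total = sum(per_gpu_vram_gb)
    counts = [int(num_layers * v / total) for v in per_gpu_vram_gb]
    counts[-1] += num_layers - sum(counts)  # hand any rounding remainder to the last GPU
    return counts

# Two 24 GB GPUs: a 48 GB pool, layers split evenly.
print(split_layers(num_layers=80, per_gpu_vram_gb=[24.0, 24.0]))  # [40, 40]
```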
Split GPUs
Lets you assign a particular GPU, or several GPUs, from the available array to a single node. For example: if 8 GPUs are available, it is possible to run up to 8 nodes on this device.
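A minimal sketch of how such a split can be expressed, assuming each node is pinned to its own devices through the standard CUDA_VISIBLE_DEVICES environment variable; the helper function is hypothetical and the container's actual GPU selection mechanism may differ:

```python
# Sketch: split an array of GPUs across several nodes by giving each node
# its own CUDA_VISIBLE_DEVICES value. Illustrative only.

import os

def visible_devices_for_node(node_index: int, gpus_per_node: int = 1) -> str:
    """Build a CUDA_VISIBLE_DEVICES value for one node out of several."""
    start = node_index * gpus_per_node
    return ",".join(str(i) for i in range(start, start + gpus_per_node))

# With 8 GPUs and one GPU per node, node 3 sees only GPU 3.
os.environ["CUDA_VISIBLE_DEVICES"] = visible_devices_for_node(3)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # "3"
```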
Load Custom GGUF
Lets you select an externally downloaded model in GGUF format.
Without this feature, a node can only load models from the Hugging Face repository, such as Strand-Rust-Coder 14B on Hugging Face.
Note that not all GGUF models are immediately supported.
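Before pointing a node at a downloaded file, it can help to sanity-check that it really is a GGUF model. A minimal sketch, relying only on the GGUF header layout (a 4-byte "GGUF" magic followed by a little-endian uint32 format version); the file name is hypothetical:

```python
# Sketch: verify a file is a GGUF model and report its format version.
# The file name below is a hypothetical example.

import struct

def inspect_gguf(path: str) -> int:
    """Return the GGUF format version, or raise if the file is not GGUF."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file (magic: {magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        return version

print(inspect_gguf("strand-rust-coder-14b-q4_k_m.gguf"))  # e.g. 3
```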