Fixed
- Fixed an issue that caused some requests to be lost in the network.
Fixed
- Fixed an issue that caused a “Model produced no output” error to be displayed incorrectly.
Improved readability and a Windows layout fix.
Added
- General readability improvements, style unification, and extra spacing.
- (Windows) A SETTINGS symbol now replaces a symbol that caused rendering issues.
- Settings now have a header when activated.
- Auto-Select mode now has a header when activated.
Changed
- “You chose” text has been changed to “Model selected” in the model selection option.
Better log readability.
Changed
- “New request found” log entries are now summarized so they do not flood the CLI log.
Added
- Added a new field to the Bug Report form, “Specify public wallet address”. Filling out this field can substantially speed up issue resolution.
Fixed
- Architecture for Linux version now correctly shows x64 instead of ARM64.
- Fixed an issue with activation code functionality during authorization.
- Fixed an issue where “Insufficient MON” notification was displayed when a user had sufficient MON in their wallet.
Better log readability.
Changed
- Improved log readability by reducing excessive log entries and removing unnecessary details.
Strand-Rust-Coder-14B-v1 is now available in the CLI App. It is our first Fortytwo-native model built with the community and is state-of-the-art for Rust code generation. You can find more details on the model’s Hugging Face page.
Added
- Strand-Rust-Coder-14B-v1 is available as the 21st option in the model selection list.
Fixed
- Fixed custom model import from Hugging Face for Windows.
Added support for features from the ⎔ Capsule 0.2.3 release.
Added
- KV Cache can now be manually defined.
- The option can be found under the Settings category in the model selection menu.
- The option must be defined each time the script is restarted.
- The default option on start-up has mode set to auto, allocating as much of the available resources as possible for caching.
- Added support for local GGUF model import under the Import Custom option in the model selection menu.
Changed
- Settings menu option added to the model selection list as option [0].
- Import Custom now provides options: [1] Hugging Face Import or [2] Local GGUF Model Import.
Strand-Rust-Coder-14B-v1 is now available in the App. It is our first Fortytwo-native model built with the community and is state-of-the-art for Rust code generation. You can find more details on the model’s Hugging Face page.
Added
- macOS version is now signed and supports auto-updates.
- New app update checking flow: startup update screen has been redesigned; the app now also downloads updates for protocol/capsule during the initial update.
- New model card view — you can now set the model to immediately enable upon download completion.
- Improved model manager performance — model list is now cached in the storage folder and updates automatically; models and updates should now be accessible in most countries without a VPN.
- Model manager stabilization — improved reliability when pausing or cancelling downloads.
Fixed
- Fixed an issue causing Swapped Identity to appear for some users after an update.
- Fixed duplicate “resolved” notifications appearing multiple times.
- Fixed fullscreen notifications not showing on the Checking for Updates screen.
- Fixed an issue where quitting the app sometimes took longer than expected.
- Fixed menu bar app icon color for Linux version.
macOS Users
Version 0.2.2 is now code-signed, which means its internal app identity has changed. Because of this, the new version cannot access data previously stored in the Keychain by the unsigned v0.2.1 build. Fortytwo continues to use the macOS Keychain to securely store your private key at the system level.
When launching the new version, macOS may ask for your system password to grant Fortytwo access to its secure data. You have two options: you can safely enter your password and click Always Allow to avoid future prompts, or you can delete the previous data and restore your profile in the new version.
Steps for macOS users to reset the app data:
1
Export your private key
How to export your private key:
- Right click on the Fortytwo App icon in the Menu Bar.
- Select Account > Export Private Key.
- Select the destination and confirm.
2
Delete the old record
Open Terminal and run this command:
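A minimal sketch of such a command, assuming the key is stored as a generic password under the service name “Fortytwo” (the actual entry name may differ; verify it in Keychain Access first):

```zsh
# Hypothetical example: remove the old Keychain record created by the unsigned build.
# "Fortytwo" is an assumed service name; check Keychain Access for the real entry.
security delete-generic-password -s "Fortytwo"
```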
3
Install the latest version of the Fortytwo App
Download and install the latest signed macOS version (0.2.2 and above).
4
Launch the App
Run the app and sign in again using your private key or secret recovery phrase.
Fixed
- Fixed an issue where the best-fit model was detected incorrectly, causing errors and unintended behavior.
- Fixed cache cleanup when canceling a model download.
Introducing manual model management mode in the Fortytwo App. It is now possible to conveniently switch between auto and manual modes at any time.
Added
- Auto/Manual mode switch with an onboarding stage explaining the modes.
- Main screen now includes:
- Model cards.
- Controls to switch between the modes.
- Model search.
- Manual mode features:
- Model search. Search for models from the Fortytwo App featured list, Hugging Face, or your local imported models.
- Model download and deletion.
- Model activation — defining active model from the list of downloaded and imported ones.
- Setting the model to activate when downloaded or imported.
- Featured models can be tagged as Best-fit, Recommended, Slow, or Unfit to help you choose the right model for your system.
- Model cards list recommended VRAM/RAM sizes for their respective models.
- Local import support for .GGUF model files.
Fixed
- Corrected free storage calculation in the tray menu.
- Added a Return to authorization options button on the Activation code step, allowing you to go back if the code is invalid.
Changed
- Simplified and clarified the way the node reports round results in its metrics.
Added
The following has been added for the capsule to support upcoming app features.
- Local model and embeddings model loading to unlock faster cold starts:
  - --llm-model-path <path>
  - --embeddings-model-path <path>
- KV-cache size controls that determine how much of your system resources are allocated to inference generation. Default: --kv-size-mode auto
  - --kv-size-mode <auto|min|medium|max> (apply sizing by mode: auto | 33% | 66% | 100% of the available limit)
  - --kv-size-tokens <int> (target cache size in tokens; default and minimum are 1024; falls back to the default --kv-size-mode auto)
  - --kv-size-gb <float> (target cache size in GB; default is 1.0; if less than 1, falls back to --kv-size-mode or --kv-size-mode auto)
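For example, the capsule could be started with a local model and an explicit KV-cache budget. This is only an illustrative sketch: the binary name capsule and the file paths are placeholders; only the flags themselves come from the list above.

```zsh
# Illustrative invocation; "capsule" and the model paths are placeholders.
capsule \
  --llm-model-path ~/models/local-llm.gguf \
  --embeddings-model-path ~/models/embeddings.gguf \
  --kv-size-mode max       # use 100% of the available KV-cache limit

# Or pin the cache to an explicit size in GB instead of a mode:
capsule --llm-model-path ~/models/local-llm.gguf --kv-size-gb 2.5
```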
Added
- Faster, more resilient networking: QUIC transport (opt-in) runs alongside TCP on the same port with automatic TCP fallback; upgraded to QUICv1 for lower latency and smoother handshakes.
- Connect through tough networks: NAT hole-punching (DCUtR) upgrades relayed links to direct connections with retry/backoff and Prometheus metrics for visibility.
- Smarter peer knowledge: PeerStore now tracks multiple addresses per peer to improve reachability.
- Configurable relay mode: Optional GossipSub relay acceptance/validation (off by default). Enable with FT_GOSSIPSUB_RELAY_ENABLED=true (see the example after this list).
- Easier bring-up & testing: IPFS bootstrapping option; Docker Compose to run multiple nodes on one machine; the Dockerfile now works with Podman.
- Better observability: New Prometheus metrics for DCUtR operations, peer analysis, and connection state.
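As noted in the relay item above, the option is controlled by a single environment variable; a minimal sketch, with the node binary name as a placeholder:

```zsh
# Opt in to GossipSub relay acceptance/validation for one run
# ("fortytwo-node" is an assumed binary name).
FT_GOSSIPSUB_RELAY_ENABLED=true ./fortytwo-node
```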
Changed
- Less waste, better throughput: Participation logic tuned so a node skips very slow or ineffective rounds.
- Lower latency by default: Dialer prioritizes QUIC addresses when available.
- Cleaner logs: Error noise reduced where retries are automatic; clearer transport-type logging.
- Smarter routing: Improved address policy for relayed vs. direct connections.
- Safer defaults: Updated node/network configuration; DCUtR is enabled conditionally via config.
Added
- Extra validation before capsule auto-start so it doesn’t interfere with the app authorization process.
- Compatibility check: if nvidia-smi is missing and the system is not macOS, the app now shows a System Incompatible notification.
Changed
- GPU-based systems will now always run the GPU capsule.
- Simplified onboarding: the Go to node button now closes the dialog instead of redirecting to another page.
- (macOS) Improved active window behavior:
  - The app window always creates a menu bar entry.
  - When switching to the app, a menu bar window is opened.
  - When the app is launched while it is already running, its existing window is opened instead of starting another instance.
Fixed
- Notifications:
- Removed interaction options from notifications that have already been resolved.
- Fixed globe icon color on notifications — it was invisible in light theme.
Added
- Bug reporting: System information is now collected and sent as a separate file (contained in the log archive) when submitting a bug report.
- Added “Documentation” item to the tray menu.
- (macOS) A Launch Agent is now used instead of AppleScript for the auto-launch option, so the app shouldn’t ask for additional permissions.
Changed
- (macOS) Updated app icons.
- Clarified titles in a few app messages.
- Adjusted memory check — it now uses available VRAM instead of total VRAM.
- VRAM warning is now triggered only if less than 2 GB of free VRAM is available.
Fixed
- Notification Center:
- Some feedback messages no longer appear in the notification center for clarity.
- Some warnings and critical notifications (e.g. Heavy Load, Connection Failed) will no longer duplicate if an active notification with the same code already exists.
- Fixed Connection Failed notification during onboarding – now only shown at the creation stage when an internet connection is actually needed.
- Fixed model list handling: unnecessary models are now filtered out using the automation:false tag and will no longer appear in automatic mode.
- Improved stability when losing internet connection:
- When a download is interrupted due to lost connection, the model will no longer be mistakenly marked as downloaded and re-downloaded by the capsule.
- Better separation of Connection Failed error states during node operation.
- Reconnection notifications are now more stable both when the capsule is stopped and when it is running: the app retries pings and the notification disappears once the connection is established; if the node was paused automatically, it resumes automatically.
- Service connection status no longer disappears after a timeout when offline — it now clears only after a successful ping or when the node connection issue is resolved.
- Downloads now reliably restart once internet connection is restored (previously, this worked inconsistently).
- Fixed the broken dock menu on macOS.
- Fixed incorrect behavior of tray options in “Open in window” mode.
Fixed
- Fixed price calculation when creating a request through the RPC API.
Added
- Startup timeout (180s) to prevent hangs during initialization.
- Detailed startup logs for Swarm and Blockchain readiness.
- New exit codes (see the example after this list):
- 10: Startup error
- 11: Startup timeout
- Added delegator address to the RPC API.
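The new exit codes let a wrapper script tell startup failures apart from other terminations. A minimal sketch, assuming the node binary is invoked as ./fortytwo-node (a placeholder name):

```zsh
# Hypothetical supervisor snippet: react to startup error (10) and startup timeout (11).
./fortytwo-node
code=$?
case "$code" in
  10) echo "startup error, restarting" ;;
  11) echo "startup timeout (180s), restarting" ;;
  *)  echo "exited with code $code"; exit "$code" ;;
esac
```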
Changed
- Extended capsule API for making participation decisions.
Fixed
- Readiness check loop now handles shutdowns correctly, avoiding hot loops and high CPU load.
Added
- Added a health check to the API.
- Added a non-blocking metadata request.
- Added a context length checker to the evaluate-participation request.
- Added support for structured outputs.
- Added a new ranking schema for v1 ranking.
- Implemented efficient KV cache reuse for ranking.
Improved chain sync robustness and streamlined network module logic.
Added
- Async block gap processing to detect and fill missing blockchain data automatically.
- Minor stability improvements for consensus submissions.
Changed
- Removed relay-specific logic from the network module to simplify architecture.
Corrected label assignments in the inference metrics output.
Fixed
- Corrected label assignments in the inference metrics output.
Enhanced observability and RPC efficiency, and improved compatibility for CLI/GUI workflows.
Added
- Protocol exit codes for improved error signaling.
- Optimized RPC by implementing raw block polling with bloom filter pre-checks to reduce event overhead.
- Added operational metrics (node state, request tracking).
- Included blockchain-specific metrics.
Changed
- Updated console time format for improved readability.
- Reintroduced CLI arguments for backward compatibility with CLI and GUI apps.
- Upgraded Rust version to 1.85.0.
Fixed
- Corrected RPC request handling to ensure proper use of swarm consensus.
Improved process management with graceful shutdown and defined exit codes.
Added
- Graceful shutdown handling for clean termination.
- Defined exit codes for clearer process state signaling and bug reporting.
Introduced robust failover, improved ranking, and overall networking stability improvements.
Added
- Implemented polling-based staggered submission for inference resolutions with position-based delays.
- Added initial version of new ranking algorithm.
- Added fallback RPC support with automatic failover and retry layer for improved blockchain connectivity.
Changed
- Changed blockchain event query error logging from error to warning level since query_events is automatically retried.
- Changed swarm connection loss logging from error to warning level when insufficient peers are available.
- Changed capsule binding address from localhost to 0.0.0.0 to allow access from external clients.
Dependency upgrade for improved compatibility and performance.
Changed
- Updated core dependencies: llama.cpp and ft-tools.
- Upgraded Rust version to 1.85.0.
Major networking upgrade with peer discovery, sync improvements, and cleanup of blockchain and log systems.
Added
- Introduced /metrics endpoint for Prometheus-based performance analysis (see the example after this list).
- Enabled new peer discovery method during node initialization.
- Added node startup synchronization.
- Integrated Identify protocol for peer recognition.
- Forwarded sorted ranking results in HTTP responses.
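The /metrics endpoint serves standard Prometheus exposition text, so it can be checked with curl; the port below is a placeholder for whatever your node is configured to use.

```zsh
# Peek at the Prometheus metrics of a locally running node
# (replace <metrics-port> with the node's actual metrics port).
curl -s "http://localhost:<metrics-port>/metrics" | head -n 20
```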
Changed
- Renamed 42T token to FOR.
- Updated bootstrap nodes.
- Upgraded Rust toolchain to 1.84.0.
Fixed
- Reworked network interaction to resolve sync issues.
- Ignored early requests received before node startup completed.
- Cleaned up logs to reduce noise.
Introduced new peer discovery method and startup synchronization with major networking and blockchain module updates.
Added
- Integrated Kademlia protocol for peer discovery during initialization.
- Implemented node startup synchronization to ensure readiness.
- Added Identify protocol support for recognizing peer nodes.
- Sorted ranking results are now forwarded in the HTTP response.
Changed
- Refactored blockchain module to use alloy instead of ethers.
Fixed
- Resolved bootstrap node initialization issue.
- Ignored incoming requests created before node startup is complete.
Fixed warm-up logic to improve LLM readiness at startup.
Fixed
- Resolved issues with LLM warm-up during capsule initialization.
Improved development observability.
Changed
- Added additional logs for development and debugging purposes.
Added request sizing parameter to capsule completions.
Added
- Included max_size parameter in the completions request to the capsule for better input control.
Added support for reasoning token controls and max output size.
Added
- Option to include reasoning tokens in output.
- Parameter to adjust reasoning token count.
- max_size field added to completion requests.
Increased reserved memory to improve runtime reliability.
Changed
- Increased default reserved memory allocation.
Added a request size cap to prevent overload in the /devs request.
Added
- max_size field to the /devs request for controlled query sizing.
Major performance and capability upgrade including dynamic context, GPU offloading, and embeddings truncation.
Added
- Embedding truncation for improved memory efficiency.
- Support for using the model’s maximum context size.
- Optimized input token decoding for better inference speed.
- Tokenizer object now returned in /metadata.
- Dynamic selection mechanism for context length.
- GPU offloading support for KV cache.
Changed
- Updated development dependencies.
Fixed
- Fixed batch prefill logic to improve prompt handling stability.
Made tokenizer ID optional in developer-facing requests.
Fixed
- tokenizer_id in the /devs request is now optional, improving support for lightweight queries.
Improved handling for empty tokenizer values in intents.
Fixed
- Correctly handles empty tokenizer cases in submitted intents.
Major feature release with metadata access, performance estimation, new ranking strategies, and node load balancing.
Added
- GET /metadata endpoint for capsules to fetch version, token limits, and tokenizer info (see the example after this list).
- Participation now estimates performance limits from capsule metadata.
- New ranking method added that returns a confidence score per candidate.
- Load balancing system introduced to evenly distribute requests across nodes.
- New experimental pairwise ranking algorithm implemented.
- Failsafe added to exit if allowance approval fails at minimum threshold.
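As a quick check of the metadata flow described above, the endpoint can be queried directly, assuming the capsule’s HTTP API is reachable locally; the port is a placeholder.

```zsh
# Fetch version, token limits, and tokenizer info from a local capsule
# (replace <capsule-port> with the port the capsule listens on).
curl -s "http://localhost:<capsule-port>/metadata"
```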
Improved system prompt and added error handling for model outputs.
Changed
- Added handling for empty model output in get_completions response.
- Updated system prompt for better context initialization.
Disabled Python execution and introduced structured inputs for participation logic.
Changed
- Temporarily disabled Python interpreter.
- Fixed emoji rendering issue in output.
- Added structured input format for evaluate-participation.
Improved token handling and pre-checks before participation.
Added
- Node balance validation before joining a round.
- Auto-approval of 42T token allowance for smoother participation.
Adjusted participation logic and timing for better request flow.
Changed
- Increased max duration for inference join attempts.
- Modified participation probability to retain a proportional number of idle nodes for upcoming requests.
Performance improvements, private crate support, and prompt tuning.
Added
- Support for custom private Rust crates.
- Token generation speed: 97.63% parity with llama.cpp.
- Performance metrics logged in debug output.
Changed
- Codebase refactor for modularity and readability.
- Updated ranking prompt and generation parameters.
- Updated papaya crate to v0.2.1.
- Separated token-to-text conversion from the generation worker.
Feature update with a new embeddings model, flash attention and improved ranking capabilities.
Added
- New embeddings model.
- Flash attention for faster processing.
- max_tokens support for ranking requests.
Changed
- Expanded context window for embeddings and ranking logic.
Fixed
- LLM context window bug.
- General resource optimization.
Backend upgrades and CUDA environment improvements.
Changed
- Upgraded to llama.cpp version b4902.
- Updated Linux CUDA configuration.
Minor runtime adjustment to stabilize startup.
Added
- Added delay before initializing BlockchainBridge.
Major enhancements to blockchain integration, formatting, and dependencies.
Added
- Retry logic for blockchain sync, balances, approvals, and intents via retry_on_error!.
- New u256_frac_mul! macro for precise U256 math.
- Time sync with on-chain deadlines for accurate state validation.
Changed
- Code formatting improvements via rustfmt.toml.
- Dependency updates:
  - Upgraded rand crate from 0.8 to 0.9.0.
  - Added semver crate at 1.0.0.
Improved swarm stability, fixed reconnection and ranking issues, and enhanced logging.
Added
- Implemented forced exit on swarm disconnection to improve network stability.
Changed
- Enhanced log messages for better debugging and clarity.
Fixed
- Resolved an issue causing failures in swarm reconnection.
- Corrected the ranking algorithm for improved inference accuracy.
- Fixed a bug where ParticipateInInference continued running unexpectedly.