Next-Gen ML Training Platform | SHADA v2.0

Train Smarter.
Deploy Faster.

AxoLexis is a professional desktop platform for training, evaluating, and deploying machine learning models — powered by the cutting-edge SHADA algorithm, an end-to-end self-supervised hierarchical deep learning framework.

4
Training Phases
2
Modalities (NLP + CV)
17+
ML Components
~7B
Max Parameters

Everything You Need to Train ML Models

A complete, end-to-end platform covering every stage of the ML lifecycle — from raw data to deployed, production-ready models.

Data Centric

Dataset Management

Load, preview, and configure datasets with built-in preprocessing and augmentation pipelines. Supports CSV, JSON, image folders, and HuggingFace hub.

Scalable

Model Configuration

Choose from Nano to XL model tiers (~150M to ~7B params). Configure architecture, hyperparameters, LoRA adapters, and optimization settings visually.

High Perf

SHADA Training Pipeline

4-phase unified training: Self-supervised pre-training, Multi-task fine-tuning, Supervised fine-tuning, and Deployment optimization — all from one interface.

Live Stats

Real-time Evaluation

Live metrics dashboard with accuracy, loss curves, F1 score, and per-class analysis. Compare multiple runs side-by-side with beautiful charts.

Native

Desktop Application

Native Python/Qt desktop app with a modern glassmorphism UI. No browser required — full GPU access, local data privacy, and offline operation.

Production

Export & Deploy

Export models to ONNX, TorchScript, or HuggingFace format. Built-in quantization (INT4/INT8), pruning, and deployment optimization with one click.

Human Logic

RL Alignment

Optional reinforcement learning alignment with PPO (RLHF) and DPO. Fine-tune model behavior from human preference data with full KL divergence control.
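To make the preference-based objective concrete, here is a minimal sketch of the per-example DPO loss (Rafailov et al.). This is illustrative only, not AxoLexis code: the log-probabilities are plain scalars standing in for real model outputs, and beta controls the implicit KL penalty against the reference policy.

```python
import math

# Sketch of the per-example DPO objective. The margin compares how much
# more the fine-tuned policy prefers the chosen response over the rejected
# one, relative to the frozen reference policy.
def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log(sigmoid)
```

A larger margin (the policy separates chosen from rejected more than the reference does) drives the loss toward zero; at zero margin the loss is log 2.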

Vision + NLP

Multi-modal Support

Unified encoder for both NLP and Computer Vision tasks. Train on text classification, sentiment analysis, image classification, and more from one platform.

Reliable

Checkpoint Management

Initialize from any foundation model — CLIP, LLaMA, ViT, DINOv2. Full checkpoint saving, resuming, and experiment versioning built in.


From Raw Data to Deployed Model

AxoLexis guides you through every step with an intelligent wizard that adapts to your workflow and project requirements.

01

Upload Your Dataset

Load datasets from local files, HuggingFace Hub, or connect to cloud storage. AxoLexis auto-detects format (CSV, JSON, image folders) and provides interactive preview and statistics.

CSV / JSON · Image Folders · HuggingFace Hub · Auto-detect
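One way such format auto-detection could work is simple path inspection. This is a hypothetical sketch, not the AxoLexis implementation; detect_format is an assumed helper name.

```python
from pathlib import Path

# Hypothetical sketch of extension-based dataset format detection:
# directories are treated as image folders, files are classified by suffix.
def detect_format(path: str) -> str:
    p = Path(path)
    if p.is_dir():
        return "image-folder"
    suffix = p.suffix.lower()
    if suffix == ".csv":
        return "csv"
    if suffix in (".json", ".jsonl"):
        return "json"
    raise ValueError(f"unrecognized dataset format: {path}")
```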
02

Configure Your Model

Select model tier (Nano to XL), configure architecture, set hyperparameters, and enable LoRA adapters — all from an intuitive step-by-step wizard interface.

Nano to XL Tiers · LoRA Adapters · Hyperparameter Tuning · Backbone Selection
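The appeal of LoRA adapters is the parameter arithmetic: a rank-r update factors a d_out × d_in weight delta into two small matrices B (d_out × r) and A (r × d_in). The numbers below are illustrative, not AxoLexis defaults.

```python
# Illustrative LoRA parameter arithmetic (not AxoLexis code).
def full_params(d_in: int, d_out: int) -> int:
    return d_out * d_in

def lora_params(d_in: int, d_out: int, r: int) -> int:
    return d_out * r + r * d_in

# e.g. a 4096 x 4096 projection with a rank-8 adapter:
full = full_params(4096, 4096)      # 16,777,216 trainable weights
lora = lora_params(4096, 4096, 8)   # 65,536 trainable weights (~0.4%)
```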
03

Train with SHADA

Launch the 4-phase SHADA training pipeline. Monitor real-time loss curves, gradient norms, and learning rate schedules with live charts and progress tracking.

4-Phase Pipeline · Live Metrics · GradNorm Balancing · Curriculum Learning
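The learning-rate schedules plotted on the live charts typically follow a warmup-then-cosine-decay shape. A minimal sketch of such a schedule, with illustrative constants rather than SHADA's actual values:

```python
import math

# Warmup + cosine-decay learning-rate schedule of the kind a live
# LR chart would plot.
def lr_at(step: int, base_lr: float, warmup: int, total: int) -> float:
    if step < warmup:
        return base_lr * (step + 1) / warmup  # linear warmup
    progress = (step - warmup) / max(1, total - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))  # cosine decay
```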
04

Evaluate & Export

Run comprehensive evaluation with accuracy, F1, confusion matrices, and benchmark comparisons. Export to ONNX, TorchScript, or HuggingFace format in one click.

Full Eval Suite · ONNX / TorchScript · INT4 Quantization · One-click Export
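At its core, integer quantization maps floating-point weights to a small integer range via a scale factor. The sketch below shows symmetric INT8 quantization of a single tensor for intuition only; real post-training quantization operates on whole model graphs with calibration data.

```python
# Minimal sketch of symmetric INT8 quantization for one tensor.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]
```

The round trip recovers each value to within half a quantization step, which is the precision/size trade-off INT8 and INT4 export makes.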

Supported ML Workflows

AxoLexis covers the full machine learning lifecycle with dedicated, purpose-built workflow panels for every stage.

Multi-Phase Training

Self-supervised → Multi-task → Supervised → Deployment

AxoLexis orchestrates the full SHADA 4-phase training pipeline automatically. Start from unlabeled data with self-supervised pre-training, progress through multi-task and supervised fine-tuning, and end with deployment optimization — all configurable via the UI.

axolexis — training
$ axolexis run --workflow training
Workflow initialized
Dataset loaded successfully
▶ Starting Multi-Phase Training...
Processing...
Phase 1 — SSL Pre-training
MAE + DINO + SimCLR joint objectives, ~150K steps
Phase 2 — Multi-task Fine-tuning
GradNorm balancing, curriculum learning, ~50K steps
Phase 3 — Supervised Fine-tuning
LLRD, SWA, R-Drop, GMP pruning, 10–30K steps
Phase 4 — Deployment Optimization
PTQ/QAT INT4, GQA, TTA, temperature scaling
Optional — RL Alignment
PPO (RLHF) and DPO from human preference data
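Phase 4 above lists temperature scaling among the deployment optimizations. As a sketch of the idea: dividing logits by a fitted temperature T > 1 softens overconfident predictions without changing the predicted class. The temperature here is illustrative, not a value fitted by the pipeline.

```python
import math

# Temperature-scaled softmax: higher T flattens the distribution,
# calibrating confidence while preserving the argmax.
def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```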

Technical Overview

AxoLexis is built on a research-grade foundation, combining the state-of-the-art SHADA algorithm with a professional desktop application interface.

Modality-Agnostic Architecture

The SHADA encoder processes both image (ConvNeXt stem + hierarchical stages) and text (transformer stages) inputs through the same unified backbone, enabling true multi-modal training from a single codebase.
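A toy sketch of what a modality-agnostic front end looks like structurally: route the raw input through a modality-specific stem, then feed a shared backbone. All names here are illustrative and are not the SHADA encoder API.

```python
# Toy modality-agnostic encoder: per-modality stems, one shared backbone.
def encode(sample, image_stem, text_stem, backbone):
    stem = image_stem if sample["modality"] == "image" else text_stem
    return backbone(stem(sample["data"]))
```

The point of the pattern is that only the thin stems differ per modality; everything downstream is trained once, in one codebase.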

Scalable from Edge to Cloud

Model tiers range from Nano (~150M params, suited for edge deployment) to XL (~7B params + MoE, requiring 32× H100 infrastructure). Train the same architecture at any scale without code changes.

Privacy-First Desktop Design

AxoLexis runs entirely locally — your data never leaves your machine. Full GPU utilization, offline operation, and direct filesystem access make it ideal for sensitive research and enterprise environments.

Technology Stack

PY
Python
Core runtime & ML logic
PT
PyTorch
Deep learning framework
UI
PyQt6 / PySide6
Desktop UI framework
HF
HuggingFace
Transformers & datasets
SA
SHADA Algorithm
Custom training framework
ON
ONNX Runtime
Cross-platform inference

SHADA Integration

AxoLexis is built around SHADA (Self-supervised Hierarchical Adaptive Deep Algorithm) — a unified 4-phase training framework.

Master Loss: L_total = L_task + α·L_mae + β·L_contrastive + γ·L_dino + δ·L_mtl
SSL Objectives: Joint MAE + NT-Xent + DINO pre-training
Optimization: GradNorm, SWA, LLRD, Label Smoothing
RL Alignment: PPO (RLHF) and DPO (optional)
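The master loss above is a weighted sum, so it transcribes directly to code. The coefficient defaults below are illustrative placeholders, not SHADA's tuned values.

```python
# Direct transcription of the SHADA master loss:
# L_total = L_task + alpha*L_mae + beta*L_contrastive + gamma*L_dino + delta*L_mtl
def total_loss(l_task, l_mae, l_contrastive, l_dino, l_mtl,
               alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    return (l_task + alpha * l_mae + beta * l_contrastive
            + gamma * l_dino + delta * l_mtl)
```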
Research Note: SHADA is an active research and implementation framework. Visit the SHADA website for details.
Omar Alghafri

Algorithm Developer
Full-Stack Application Developer

The visionary behind the SHADA algorithm and the AxoLexis application. Dedicated to pushing the boundaries of self-supervised learning and intuitive machine learning software.