Validators are responsible for network stability, so it is important to be able to react at any time of the day or night in case of trouble. We strongly encourage validators to set up a monitoring and alerting system; you can learn more about this in our secure setup guide.
Start the AI Service
Prerequisite
You must have conda installed; follow the instructions here.
1. Create a Conda Environment
First, open a terminal and create a new conda environment:
```
conda create -n uomi-ai python=3.10 -y
```
If you've just installed Miniconda/Anaconda, initialize conda in your shell:
```
# For bash users
source ~/miniconda3/etc/profile.d/conda.sh

# For zsh users
source ~/miniconda3/etc/profile.d/conda.sh

# Activate the environment
conda activate uomi-ai
```
2. Install PyTorch with CUDA Support
Install PyTorch and related packages:
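The exact command depends on your CUDA version. A typical invocation for CUDA 12.1 wheels is sketched below (the `cu121` tag is an assumption; check pytorch.org for the command matching your driver):

```shell
# Install PyTorch with CUDA 12.1 wheels (adjust cu121 to your CUDA version)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```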
Verify the installation:
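A one-line check, assuming the uomi-ai environment is active:

```shell
python -c "import torch; print(torch.cuda.is_available())"
```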
This should print True if CUDA is properly installed.
3. Install CUDA Development Tools
Install CUDA toolkit and NVCC compiler:
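One way to do this inside the conda environment is via the nvidia channel (channel and package name are common conventions; verify they match your CUDA version):

```shell
# Install the CUDA toolkit, including the nvcc compiler, from the nvidia channel
conda install -c nvidia cuda-toolkit -y
```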
Verify the installation:
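A minimal check; the output should report the CUDA compiler release version:

```shell
nvcc --version
```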
4. Install Additional Dependencies
Install the required Python packages:
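The package list below is an illustrative sketch based on the tools referenced later in this guide (Auto-GPTQ in particular); the actual requirements for the UOMI AI service may differ, so prefer the project's requirements file if one is provided:

```shell
# Common dependencies for transformer-based inference (illustrative, adapt as needed)
pip install transformers accelerate auto-gptq
```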
Troubleshooting
Common Issues
CUDA Not Found
Verify NVIDIA drivers are installed: nvidia-smi
Check CUDA installation: nvcc --version
Ensure PyTorch CUDA is properly installed: python -c "import torch; print(torch.version.cuda)"
Build Failures
Ensure you have build tools installed:
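On Debian/Ubuntu the usual toolchain is installed as follows (other distributions use their own package managers):

```shell
# Compiler, make, and Python headers needed to build native extensions
sudo apt update
sudo apt install -y build-essential python3-dev
```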
For Auto-GPTQ issues, try installing from source:
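A common way to build it from source (the repository URL is assumed to be the upstream AutoGPTQ project):

```shell
# Build and install Auto-GPTQ from the upstream repository
pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git
```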
Version Conflicts
If you encounter package conflicts, try creating a fresh environment
Consider using pip install --no-deps for problematic packages
Verification
To verify the complete setup, run this test script:
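A minimal sketch of such a script (the module names checked beyond torch are assumptions based on the dependencies above):

```python
# verify_setup.py - sanity-check the AI environment (illustrative sketch)
import importlib


def check(module_name: str) -> bool:
    """Return True if the module can be imported, False otherwise."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False


if __name__ == "__main__":
    # Report which expected packages are importable
    for name in ["torch", "transformers", "auto_gptq"]:
        print(f"{name}: {'OK' if check(name) else 'MISSING'}")

    # If torch is present, confirm CUDA is usable
    if check("torch"):
        import torch

        print("CUDA available:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("Device:", torch.cuda.get_device_name(0))
```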
Create the service
Create a new systemd service file for the AI component:
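For example, using a unit named uomi-ai.service (the name is a convention, not a requirement):

```shell
sudo nano /etc/systemd/system/uomi-ai.service
```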
Add the following content to the service file:
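A sketch of the unit file, built from the paths listed in the notes below; the User, Description, and restart policy are assumptions you should adapt to your setup:

```
[Unit]
Description=UOMI AI Service
After=network.target

[Service]
User=uomi
WorkingDirectory=/home/uomi/uomi-node-ai
ExecStart=/home/uomi/miniconda3/envs/uomi-ai/bin/python uomi-ai.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```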
Important Notes:
The WorkingDirectory path (/home/uomi/uomi-node-ai) should be adjusted to match your actual installation directory
The ExecStart command assumes:
Miniconda is installed in /home/uomi/miniconda3
A conda environment named uomi-ai exists
The main Python script is named uomi-ai.py
Modify these paths and names according to your specific setup