Getting Started


This guide will help you set up and run the Arabic Sign Language Recognition system on your local machine or using Docker.

Prerequisites

Choose one of the following setups:

Option 1: Docker

  • Docker and Docker Compose installed
  • No other dependencies required

Option 2: Local Development

  • Python 3.12+
  • uv - Fast Python package installer
  • Webcam (for live recognition)

Installation

1. Clone the Repository

git clone https://github.com/yousefelkilany/word-level-arabic-sign-language.git
cd word-level-arabic-sign-language

2. Configuration Setup

Create a .env file from the example:

cp .env.example .env

Edit .env with your configuration:

# Model Configuration
ONNX_CHECKPOINT_FILENAME=last-checkpoint-signs_502.pth.onnx
 
# CORS Configuration
DOMAIN_NAME=http://localhost:8000
 
# Development Mode (1 = local, 0 = Kaggle paths)
LOCAL_DEV=1
 
# Force CPU execution (1 = CPU only, 0 = use GPU if available)
USE_CPU=1

IMPORTANT

Set LOCAL_DEV=1 to use local data/ and models/ directories instead of Kaggle paths.

See Environment Configuration for detailed variable descriptions.
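As a quick sanity check before starting the server, the sketch below verifies that the keys shown above are present in .env. This is illustrative, not part of the project; `check_env` is a hypothetical helper name, and the key list assumes exactly the four variables from the example above.

```shell
#!/usr/bin/env sh
# Sanity-check .env before starting the API.
# Assumes the four keys from the example above; extend the list as needed.
check_env() {
  env_file="${1:-.env}"
  [ -f "$env_file" ] || { echo "missing $env_file"; return 1; }
  for key in ONNX_CHECKPOINT_FILENAME DOMAIN_NAME LOCAL_DEV USE_CPU; do
    grep -q "^${key}=" "$env_file" || { echo "missing key: $key"; return 1; }
  done
  echo "config OK"
}

# Usage: check_env .env
```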

3. Choose Your Setup Method

Option A: Docker Setup

Build and start the services:

docker-compose up --build

The API will be available at http://localhost:8000.

Features:

  • ✅ All dependencies included
  • ✅ Consistent environment
  • ✅ Easy deployment
  • ✅ Hot reload enabled (code changes reflected immediately)

See Docker Setup for advanced configuration.

Option B: Local Development Setup

  1. Install Dependencies:
uv sync
  2. Run the Backend:
# Direct Python execution
python src/api/run.py
 
# OR using Make
make local_setup && python src/api/run.py

The API will be available at http://localhost:8000.

First Run

1. Access the Web Interface

Navigate to http://localhost:8000/live-signs in your web browser.

2. Grant Camera Permissions

When prompted, allow the browser to access your webcam.

3. Start Signing

  • Position yourself in front of the camera
  • Perform Arabic sign language gestures
  • The system will detect and display recognized signs in real-time

Project Structure

word-level-arabic-sign-language/
├── src/
│   ├── api/          # FastAPI application and WebSocket handlers
│   ├── core/         # Core utilities (MediaPipe, constants)
│   ├── data/         # Dataset processing and loading
│   └── modelling/    # Model architecture and training
├── static/           # Frontend (HTML, CSS, JavaScript)
├── models/           # ONNX models for inference
├── checkpoints/      # Training checkpoints
├── data/             # Dataset and labels
├── docs/             # This documentation
├── Dockerfile        # Container image configuration
├── docker-compose.yml
├── pyproject.toml    # Python dependencies
└── makefile          # Build automation

See Project Structure for detailed organization.

Available Commands

Using Make

# Training
make train              # Train model with default settings
make cpu_train          # Train on CPU only
 
# Model Export
make export_model checkpoint_path=path/to/checkpoint.pth
 
# Benchmarking
make onnx_benchmark checkpoint_path=path/to/model.onnx
 
# Local Development
make local_setup        # Set LOCAL_DEV=1 for current command

See Makefile Commands for all available commands.

Using Docker

# Start services
docker-compose up
 
# Rebuild and start
docker-compose up --build
 
# Force recreate containers
docker-compose up --build --force-recreate
 
# Stop services
docker-compose down
 
# View logs
docker-compose logs -f

Verification

Test the API

curl http://localhost:8000/

Expected response: HTML content from the live signs interface.
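Right after `docker-compose up`, the server may still be starting, so a single curl can give a false negative. A small retry loop avoids that; this is a sketch assuming curl is installed, and `wait_for_api` is an illustrative helper name, not part of the project.

```shell
#!/usr/bin/env sh
# Poll the API root until it responds or the retry budget runs out.
# Usage: wait_for_api [url] [retries]
wait_for_api() {
  url="${1:-http://localhost:8000/}"
  retries="${2:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # -f: fail on HTTP errors; -sS: quiet, but still show real errors
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "API up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "API did not respond"
  return 1
}
```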

Test WebSocket Connection

Open the browser console at http://localhost:8000/live-signs and check for:

WebSocket connection established

Troubleshooting

Common Issues

Port Already in Use

# Change port in docker-compose.yml or when running locally
uvicorn api.main:app --port 8001

Model Not Found

  • Ensure ONNX model exists in models/ directory
  • Check ONNX_CHECKPOINT_FILENAME in .env
  • Download pre-trained model if needed
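The first two bullets can be checked together: read the filename out of .env and confirm it exists under models/. A sketch assuming the default models/ directory and the ONNX_CHECKPOINT_FILENAME key shown earlier; `check_model` is a hypothetical helper name.

```shell
#!/usr/bin/env sh
# Verify that the model file named in .env actually exists under models/.
# Usage: check_model [env_file] [models_dir]
check_model() {
  env_file="${1:-.env}"
  models_dir="${2:-models}"
  # Extract the value after the first '=' on the ONNX_CHECKPOINT_FILENAME line
  name=$(grep '^ONNX_CHECKPOINT_FILENAME=' "$env_file" | cut -d= -f2-)
  if [ -z "$name" ]; then
    echo "ONNX_CHECKPOINT_FILENAME not set in $env_file"
    return 1
  fi
  if [ -f "$models_dir/$name" ]; then
    echo "found $models_dir/$name"
  else
    echo "missing $models_dir/$name"
    return 1
  fi
}
```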

Camera Not Detected

  • Grant browser camera permissions
  • Check if another application is using the camera
  • Try a different browser (Chrome/Edge recommended)

CORS Errors

  • Verify DOMAIN_NAME in .env matches your frontend URL
  • Check browser console for specific CORS errors

See Troubleshooting for more solutions.
