Product Description
avaaz.ai is a mobile and web application featuring a motivating conversational AI tutor powered by advanced agentic capabilities. It teaches oral language skills through structured, interactive lessons that adapt to each student’s pace and performance. The core goal is to help students speak new languages confidently to pass the B2 oral proficiency exam.
1. Features
- Voice-First Conversational Engine — Students engage in ultra-low-latency speech-to-speech interaction with the AI Tutor, enabling natural dialogue and instant corrective feedback using speech, text, and visuals.
- CEFR-B2 Aligned Curriculum with Real-Time AI Practice — A full CEFR-based speaking progression up to B2, seamlessly integrated with adaptive AI conversation to bridge passive knowledge and active speaking skills.
- Immigrant-Focused Real-Life Scenarios — Lessons target real-world contexts relevant to immigrants, such as workplace, healthcare, school, or daily interactions, enhancing integration and confidence in practical use.
- Mock Oral Exam Mode — Simulates B2 oral exams and citizenship interviews with timed prompts, rubrics, and examiner-style feedback to build test readiness.
- Multilingual Scaffolding and Integrated Translation — Learners receive UI support, bilingual explanations, and on-demand translations in their native language, helping low-confidence speakers stay engaged.
- Comprehensive Speaking Feedback — Beyond pronunciation and grammar, learners get targeted insights on fluency, phrasing, coherence, and vocabulary range, aligned with B2 standards.
- Accent and Cultural Adaptation — Lessons reflect local dialects and cultural etiquette relevant to the learner’s destination country, supporting realistic and socially appropriate speech.
- Immersive Role-Plays with Visual Cues — Speaking simulations are enhanced with contextual images (e.g. menus, documents, locations) to deepen realism and task-based practice.
- Gamified Progress and Motivation — Daily speaking challenges, streak rewards, and level badges reinforce habit formation and learning consistency.
- Post-Session Performance Summaries — After each session, learners receive a concise report of strengths, errors, and suggested improvements to reinforce progress.
- Customizable AI Tutor Voices — Learners can choose from various voice options with different genders or regional accents to diversify listening practice and personalize interaction.
- Cross-Device Learning Continuity — Seamless synchronization ensures progress tracking and learning state remain consistent across mobile and web platforms.
2. Student & AI Tutor Interactions (English → Norwegian Bokmål)
2.1 Real-Time Conversational Flow
Scenario: A beginner is learning everyday greetings.
AI Tutor (in Norwegian): “Hei! Hvordan går det?”
Student: “Uh… I think that means, ‘How are you?’”
AI Tutor: “Exactly! You can answer: Jeg har det bra — that means ‘I’m doing well.’ Try it: Jeg har det bra.”
Student: “Jeg har det bra.”
AI Tutor: “Flott! Literally, it means ‘I have it good.’ Norwegians use that to say they’re fine.”
Benefit: The AI provides instant translation and cultural context, helping English speakers grasp meaning and tone through natural conversation.
2.2 Adaptive Pronunciation, Fluency, and Grammar Support
Scenario: The learner practices sentence structure and pronunciation.
AI Tutor: “Let’s say: Jeg bor i Oslo — ‘I live in Oslo.’”
Student: “Jeg bor på Oslo.”
AI Tutor: “Almost right! We say i Oslo, not på Oslo. I means ‘in,’ while på means ‘on.’ Try again: Jeg bor i Oslo.”
Student: “Jeg bor i Oslo.”
AI Tutor: “Perfekt! Now, about pronunciation — it’s Oslo, not Osh-lo. The s is soft. Listen: Oslo.”
Student: “Oslo.”
AI Tutor: “Bra! That sounded natural.”
Benefit: The AI corrects grammar and pronunciation simultaneously, giving immediate, native-level feedback tailored to English-speaking learners.
2.3 Scenario-Based Oral Practice
Scenario: The learner prepares for a B2-level job interview.
AI Tutor: “Du er på et jobbintervju. Spørsmålet er: Hvorfor vil du jobbe her? — ‘Why do you want to work here?’”
Student: “Fordi jeg liker din selskap.”
AI Tutor: “Good start! In Norwegian, we say firmaet ditt instead of din selskap. Try: Fordi jeg liker firmaet ditt og arbeidsmiljøet her. — ‘Because I like your company and the work environment here.’”
Student: “Fordi jeg liker firmaet ditt og arbeidsmiljøet her.”
AI Tutor: “Veldig bra! That sounded confident and natural.”
Benefit: The learner rehearses authentic B2-level oral exam tasks and professional interactions with context-sensitive correction and adaptive feedback.
3. Pricing Plans
| Plan | Hours | Price (NOK) | Purpose |
|---|---|---|---|
| First Light | 1 | Free | A taste of the experience - no pressure, no card required. |
| Spark | 5 | kr 999 | A short, flexible plan to get started or focus on specific skills. |
| Glow | 15 | kr 1 999 | Great for consistent improvement and practicing real conversations. |
| ⭐ Shine (Recommended) | 50 | kr 5 999 | The sweet spot for building natural fluency and confidence. |
| Radiance | 200 | kr 17 999 | Designed for dedicated learners seeking transformation. |
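For comparison, the paid plans work out to roughly kr 200 per hour on Spark, kr 133 on Glow, kr 120 on Shine, and kr 90 on Radiance, so the effective hourly rate drops as the plan grows.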
4. Configuration
4.1 Configure the VPS
4.1.1 Configure the firewall at the VPS host
| Public IP |
|---|
| 217.154.51.242 |
| Action | Allowed IP | Protocol | Port(s) | Description |
|---|---|---|---|---|
| Allow | Any | TCP | 80 | HTTP |
| Allow | Any | TCP | 443 | HTTPS |
| Allow | Any | TCP | 2222 | Git SSH |
| Allow | Any | TCP | 2885 | VPS SSH |
| Allow | Any | UDP | 3478 | STUN/TURN |
| Allow | Any | TCP | 5349 | TURN/TLS |
| Allow | Any | TCP | 7881 | LiveKit TCP |
| Allow | Any | UDP | 50000-60000 | LiveKit Media |
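The table above describes rules applied in the VPS provider's firewall panel. If you also want to mirror them on the server itself, a minimal sketch with ufw (an optional extra, assuming ufw is installed and not yet enabled) could look like this:

```bash
# Mirror the provider-level firewall rules locally with ufw (optional).
# Apply these only after the SSH port change in 4.1.3, or also allow 22/tcp until then.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 80/tcp           # HTTP
sudo ufw allow 443/tcp          # HTTPS
sudo ufw allow 2222/tcp         # Git SSH
sudo ufw allow 2885/tcp         # VPS SSH
sudo ufw allow 3478/udp         # STUN/TURN
sudo ufw allow 5349/tcp         # TURN/TLS
sudo ufw allow 7881/tcp         # LiveKit TCP
sudo ufw allow 50000:60000/udp  # LiveKit media
sudo ufw enable
```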
4.1.2 Configure the DNS settings at the domain registrar
| Host (avaaz.ai) | Type | Value |
|---|---|---|
| @ | A | 217.154.51.242 |
| www | CNAME | avaaz.ai |
| app | A | 217.154.51.242 |
| api | A | 217.154.51.242 |
| rtc | A | 217.154.51.242 |
| git | A | 217.154.51.242 |
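Once the records are saved, propagation can be spot-checked from any machine. A small sketch using dig (the same tool used in section 4.1.4) that loops over the hosts in the table:

```bash
# Check that every record in the table resolves to the VPS IP (217.154.51.242).
# www.avaaz.ai will show the CNAME target followed by the A record.
for host in avaaz.ai www.avaaz.ai app.avaaz.ai api.avaaz.ai rtc.avaaz.ai git.avaaz.ai; do
  printf '%-16s -> ' "$host"
  dig +short "$host" | tr '\n' ' '
  echo
done
```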
4.1.3 Change the SSH port from 22 to 2885
- Connect to the server.

```bash
ssh username@avaaz.ai
```

- Edit the SSH configuration file.

```bash
sudo nano /etc/ssh/sshd_config
```

- Add port 2885 to the file and comment out port 22.

```
#Port 22
Port 2885
```

- Save the file and exit the editor: press `Ctrl+O`, then `Enter` to save, and `Ctrl+X` to exit.

- Restart the SSH service.

```bash
sudo systemctl daemon-reload && sudo systemctl restart ssh.socket && sudo systemctl restart ssh.service
```

- Before closing the current session, open a new terminal window and connect to the server to verify the changes work correctly.

```bash
ssh username@avaaz.ai
# ssh: connect to host avaaz.ai port 22: Connection timed out
ssh username@avaaz.ai -p 2885
```

- Once the connection is successful, close the original session safely.
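If you want an additional check from inside the server before logging out, this small sketch (assuming the systemd-based Ubuntu setup used above) confirms that sshd now listens on port 2885 only:

```bash
# Show listening TCP sockets and filter for sshd.
# Expect an entry on port 2885 and none on port 22.
sudo ss -tlnp | grep sshd

# Optional: ask sshd to print its effective configuration.
sudo sshd -T | grep -i '^port'
```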
4.1.4 Build and deploy the infrastructure
- Check with `dig git.avaaz.ai +short` whether the DNS settings have been propagated.

- SSH into the VPS to install Docker and Docker Compose.

```bash
ssh username@avaaz.ai -p 2885
```

- Update system packages.

```bash
sudo apt update && sudo apt upgrade -y
```

- Install dependencies for Docker's official repo.

```bash
sudo apt install -y \
  ca-certificates \
  curl \
  gnupg \
  lsb-release
```

- Add Docker's official APT repo.

```bash
sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
```

- Install Docker Engine and the Compose plugin.

```bash
sudo apt install -y \
  docker-ce \
  docker-ce-cli \
  containerd.io \
  docker-buildx-plugin \
  docker-compose-plugin
```

- Verify the installation.

```bash
sudo docker --version
sudo docker compose version
```

- Create the `/etc/docker/daemon.json` file to keep container logs from overusing disk space.

```bash
sudo nano /etc/docker/daemon.json
```

- Paste the following.

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

- Save the file and exit the editor: press `Ctrl+O`, then `Enter` to save, and `Ctrl+X` to exit.

- Restart the Docker service to apply the changes.

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```

- Create a directory for the infra stack in `/srv/infra`.

```bash
sudo mkdir -p /srv/infra
sudo chown -R $USER:$USER /srv/infra
cd /srv/infra
```

- Create directories for Gitea (repos, config, etc.) and the runner's persistent data. Gitea runs as UID/GID 1000 by default.

```bash
mkdir -p gitea-data gitea-runner-data
```

- Create the `/srv/infra/docker-compose.yml` (Caddy + Gitea + Runner) file.

```bash
nano docker-compose.yml
```

- Paste the following.
```yaml
services:

  caddy:
    # Use the latest official Caddy image
    image: caddy:latest
    # Docker Compose automatically generates container names: <folder>_<service>_<index>
    container_name: caddy  # Fixed name used by Docker engine
    # Automatically restart unless manually stopped
    restart: unless-stopped
    ports:
      # Expose HTTP (ACME + redirect)
      - "80:80"
      # Expose HTTPS/WSS (frontend, backend, LiveKit)
      - "443:443"
    volumes:
      # Mount the Caddy config file read-only
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      # Caddy TLS certs (persistent Docker volume)
      - caddy_data:/data
      # Internal Caddy state/config
      - caddy_config:/config
    networks:
      # Attach to the shared "proxy" network
      - proxy

  gitea:
    # Official Gitea image with built-in Actions
    image: gitea/gitea:latest
    container_name: gitea  # Fixed name used by Docker engine
    # Auto-restart service
    restart: unless-stopped
    environment:
      # Run Gitea as host user 1000 (prevents permission issues)
      - USER_UID=1000
      # Same for group
      - USER_GID=1000
      # Use SQLite (stored inside /data)
      - GITEA__database__DB_TYPE=sqlite3
      # Location of the SQLite DB
      - GITEA__database__PATH=/data/gitea/gitea.db
      # Custom config directory
      - GITEA_CUSTOM=/data/gitea
    volumes:
      # Bind mount instead of Docker volume because:
      # - We want repos, configs, SSH keys, and SQLite DB **visible and editable** on host
      # - Easy backups (just copy `./gitea-data`)
      # - Easy migration
      # - Avoids losing data if Docker volumes are pruned
      - ./gitea-data:/data
    networks:
      - proxy
    ports:
      # SSH for Git operations mapped to host 2222
      - "2222:22"

  gitea-runner:
    # Official Gitea Actions Runner
    image: gitea/act_runner:latest
    container_name: gitea-runner  # Fixed name used by Docker engine
    restart: unless-stopped
    depends_on:
      # Runner requires Gitea to be available
      - gitea
    volumes:
      # Runner uses host Docker daemon to spin up job containers (Docker-out-of-Docker)
      - /var/run/docker.sock:/var/run/docker.sock
      # Bind mount instead of volume because:
      # - Runner identity is stored in /data/.runner
      # - Must persist across container recreations
      # - Prevents duplicated runner registrations in Gitea
      # - Easy to inspect/reset via `./gitea-runner-data/.runner`
      - ./gitea-runner-data:/data
    environment:
      # Base URL of your Gitea instance
      - GITEA_INSTANCE_URL=${GITEA_INSTANCE_URL}
      # One-time registration token
      - GITEA_RUNNER_REGISTRATION_TOKEN=${GITEA_RUNNER_REGISTRATION_TOKEN}
      # Human-readable name for the runner
      - GITEA_RUNNER_NAME=${GITEA_RUNNER_NAME}
      # Runner labels (e.g., ubuntu-latest)
      - GITEA_RUNNER_LABELS=${GITEA_RUNNER_LABELS}
      # Set container timezone to UTC for consistent logs
      - TZ=Etc/UTC
    networks:
      - proxy
    # Start runner using persisted config
    command: ["act_runner", "daemon", "--config", "/data/.runner"]

networks:
  proxy:
    # Shared network for Caddy + Gitea (+ later app stack)
    name: proxy
    # Default Docker bridge network driver
    driver: bridge

volumes:
  # Docker volume for Caddy TLS data (safe to keep inside Docker)
  caddy_data:
    name: caddy_data
  # Docker volume for internal Caddy configs/state
  caddy_config:
    name: caddy_config
```
- Save the file and exit the editor: press `Ctrl+O`, then `Enter` to save, and `Ctrl+X` to exit.

- Create the `/srv/infra/.env` file with environment variables.

```bash
nano .env
```

- Paste the following:

```
# Base URL of your Gitea instance (used by the runner to register itself
# and to send/receive workflow job information).
GITEA_INSTANCE_URL=https://git.avaaz.ai

# One-time registration token generated in:
# Gitea → Site Administration → Actions → Runners → "Generate Token"
# This MUST be filled in once, so the runner can register.
# After registration, the runner stores its identity inside ./gitea-runner-data/.runner
# and this value is no longer needed (can be left blank).
GITEA_RUNNER_REGISTRATION_TOKEN=

# Human-readable name for this runner.
# This is shown in the Gitea UI so you can distinguish multiple runners.
# Examples: "vps-runner", "staging-runner", "gpu-runner"
GITEA_RUNNER_NAME=gitea-runner

# Runner labels allow workflows to choose specific runners.
# The label format is: label[:schema[:args]]
# - "ubuntu-latest" is the <label> name that workflows request using runs-on: [ "ubuntu-latest" ].
# - "docker://" is the <schema>, indicating the job runs inside a separate Docker container.
# - "catthehacker/ubuntu:act-latest" is the <args>, specifying the Docker image to use for the container.
# Workflows can target this using:
#   runs-on: [ "ubuntu-latest" ]
GITEA_RUNNER_LABELS=ubuntu-latest:docker://catthehacker/ubuntu:act-latest
```
- Save the file and exit the editor: press `Ctrl+O`, then `Enter` to save, and `Ctrl+X` to exit.

- Create `/srv/infra/Caddyfile` to configure Caddy.

```bash
nano Caddyfile
```

- Paste the following:
```
{
    # Global Caddy options.
    #
    # auto_https on
    #   - Caddy listens on port 80 for every host (ACME + redirect).
    #   - Automatically issues HTTPS certificates.
    #   - Automatically redirects HTTP → HTTPS unless disabled.
    #
}

# ------------------------------------------------------------
# Redirect www → root domain
# ------------------------------------------------------------
www.avaaz.ai {
    # Permanent redirect to naked domain
    redir https://avaaz.ai{uri} permanent
}

# ------------------------------------------------------------
# Marketing site (optional — if the frontend handles it, remove this)
# Redirect root → app
# ------------------------------------------------------------
avaaz.ai {
    # If you have a static marketing page, serve it here.
    # If not, redirect visitors to the app.
    redir https://app.avaaz.ai{uri}
}

# ------------------------------------------------------------
# Frontend (Next.js)
# Public URL: https://app.avaaz.ai
# Internal target: frontend:3000
# ------------------------------------------------------------
app.avaaz.ai {
    # Reverse-proxy HTTPS traffic to the frontend container
    reverse_proxy frontend:3000

    # Access log for debugging frontend activity
    log {
        output file /data/app-access.log
    }

    # Compression for faster delivery of JS, HTML, etc.
    encode gzip zstd
}

# ------------------------------------------------------------
# Backend (FastAPI)
# Public URL: https://api.avaaz.ai
# Internal target: backend:8000
# ------------------------------------------------------------
api.avaaz.ai {
    # Reverse-proxy all API traffic to FastAPI
    reverse_proxy backend:8000

    # Access log — useful for monitoring API traffic and debugging issues
    log {
        output file /data/api-access.log
    }

    # Enable response compression (JSON, text, etc.)
    encode gzip zstd
}

# ------------------------------------------------------------
# LiveKit (signaling only — media uses direct UDP)
# Public URL: wss://rtc.avaaz.ai
# Internal target: livekit:7880
# ------------------------------------------------------------
rtc.avaaz.ai {
    # LiveKit uses WebSocket signaling, so we reverse-proxy WS → WS
    reverse_proxy livekit:7880

    # Access log — helps diagnose WebRTC connection failures
    log {
        output file /data/rtc-access.log
    }

    # Compression not needed for WS traffic, but harmless
    encode gzip zstd
}

# ------------------------------------------------------------
# Gitea (Git server UI + HTTPS + SSH clone)
# Public URL: https://git.avaaz.ai
# Internal target: gitea:3000
# ------------------------------------------------------------
git.avaaz.ai {
    # Route all HTTPS traffic to Gitea's web UI
    reverse_proxy gitea:3000

    # Log all Git UI requests and API access
    log {
        output file /data/git-access.log
    }

    # Compress UI responses
    encode gzip zstd
}
```
- Save the file and exit the editor: press `Ctrl+O`, then `Enter` to save, and `Ctrl+X` to exit.

- Start the stack from `/srv/infra`.

```bash
sudo docker compose pull   # fetch images: caddy, gitea, act_runner
sudo docker compose up -d  # start all containers in the background
```

- Verify that the status of all the containers is `Up`.

```bash
sudo docker compose ps -a
```

- Open `https://git.avaaz.ai` in your browser. Caddy should already have obtained a certificate, and you should see the Gitea installer.

- Configure database settings.
  - Database Type: `SQLite3`
  - Path: `/data/gitea/gitea.db` (matches `GITEA__database__PATH`)

- Configure general settings.
  - Site Title: default (`Gitea: Git with a cup of tea`)
  - Repository Root Path: default (`/data/git/repositories`)
  - LFS Root Path: default (`/data/git/lfs`)

- Configure server settings.
  - Domain: `git.avaaz.ai` (external HTTPS via Caddy)
  - SSH Port: `2222` (external SSH port)
  - HTTP Port: `3000` (internal HTTP port)
  - Gitea Base URL / ROOT_URL: `https://git.avaaz.ai/`

- Create the admin account (username, password, email) and finish the installation.
- Edit Gitea's `/data/gitea/conf/app.ini` via the host bind mount `/srv/infra/gitea-data/gitea/conf/app.ini`.

```bash
nano gitea-data/gitea/conf/app.ini
```

- Add or verify the following sections.

```ini
[server]
; Gitea serves HTTP internally (Caddy handles HTTPS externally)
PROTOCOL = http
; External hostname used for links and redirects
DOMAIN = git.avaaz.ai
; Hostname embedded in SSH clone URLs
SSH_DOMAIN = git.avaaz.ai
; Internal container port Gitea listens on (Caddy reverse-proxies to this)
HTTP_PORT = 3000
; Public-facing base URL (MUST be HTTPS when behind Caddy)
ROOT_URL = https://git.avaaz.ai/
; Enable Gitea's built-in SSH server inside the container
DISABLE_SSH = false
; Host-side SSH port exposed by Docker (mapped to container:22)
SSH_PORT = 2222
; Container-side SSH port (always 22 inside the container)
SSH_LISTEN_PORT = 22

[database]
; SQLite database file stored in the bind-mounted volume
PATH = /data/gitea/gitea.db
; Using SQLite (sufficient for single-node small/medium setups)
DB_TYPE = sqlite3

[security]
; Prevent web-based reinstallation (crucial for a secured instance)
INSTALL_LOCK = true
; Auto-generated on first startup; DO NOT change or delete
SECRET_KEY =

[actions]
; Enable Gitea Actions (CI/CD)
ENABLED = true
; Default platform to get action plugins from: github for https://github.com, self for the current Gitea instance
DEFAULT_ACTIONS_URL = github
```
- Restart Gitea to apply the changes.

```bash
sudo docker compose restart gitea
```

- Check whether Actions is enabled.
  - Log in as admin at `https://git.avaaz.ai`.
  - Go to Site Administration.
  - Look for a menu item Actions. If `[actions] ENABLED = true` is set in `app.ini`, there will be options related to Runners, allowing management of instance-level action runners. Otherwise, the Actions menu item will not appear in the Site Administration panel, indicating the feature is globally disabled.

- Get a registration token to register the Gitea Actions runner, and create a user account.
  - Log in as admin at `https://git.avaaz.ai`.
  - Go to Site Administration → Actions → Runners.
  - Choose Create new Runner.
  - Copy the Registration Token.
  - Create a user account.

- Edit `.env` to add the token.

```bash
nano .env
```

- Paste the Registration Token after `=` without spaces.

```
# One-time registration token generated in:
# Gitea → Site Administration → Actions → Runners → "Generate Token"
# This MUST be filled in once, so the runner can register.
# After registration, the runner stores its identity inside ./gitea-runner-data/.runner
# and this value is no longer needed (can be left blank).
GITEA_RUNNER_REGISTRATION_TOKEN=
```

- Apply the configuration change and restart the `gitea-runner` container.

```bash
sudo docker compose up -d gitea-runner
```

- Confirm that the Gitea instance URL, runner name, and runner labels in the `gitea-runner-data/.runner` file match the values in the `.env` file. If they differ, fix them with `nano gitea-runner-data/.runner`.

- Verify that the runner is connected to `https://git.avaaz.ai` and is polling for jobs.

```bash
sudo docker logs -f gitea-runner
```

- Generate an SSH key on the laptop. Accept the defaults and optionally set a passphrase. The public key is placed in `~/.ssh/id_ed25519.pub`.

```bash
ssh-keygen -t ed25519 -C "user@avaaz.ai"
```

- Add the public key to Gitea.
  - Log in to `https://git.avaaz.ai` as the user.
  - Go to Profile → Settings → SSH / GPG Keys → Add Key.
  - Paste the contents of `~/.ssh/id_ed25519.pub`, starting with `ssh-ed25519`.
  - Save.

- Test the SSH remote on the laptop.

```bash
ssh -T -p 2222 git@git.avaaz.ai
```

- Type `yes` to tell the SSH client to trust the fingerprint and press `Enter`. Enter the passphrase and verify the response "You've successfully authenticated..., but Gitea does not provide shell access."

- Confirm that a repo's SSH clone URL in Gitea shows `ssh://git@git.avaaz.ai:2222/<user>/<repo>.git`.

- Upgrade Docker images safely.

```bash
sudo docker compose pull    # pull newer images
sudo docker compose up -d   # recreate containers with new images
```

- Restart the whole infra stack.

```bash
sudo docker compose restart  # restart all containers
```

- Check logs for troubleshooting.

```bash
sudo docker logs -f caddy         # shows "obtaining certificate" or ACME errors if HTTPS fails
sudo docker logs -f gitea         # shows DB/permission problems, config issues, etc.
sudo docker logs -f gitea-runner  # shows registration/connection/job-execution issues
```
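Because the compose file above keeps Gitea and runner state in host bind mounts (`./gitea-data`, `./gitea-runner-data`), backups amount to copying those directories. A minimal sketch (the destination `/srv/backups` is an arbitrary example path):

```bash
# Stop the stack briefly so the SQLite DB is not written to mid-copy,
# archive the bind mounts and config files, then bring everything back up.
cd /srv/infra
sudo docker compose stop
sudo mkdir -p /srv/backups
sudo tar czf /srv/backups/infra-$(date +%F).tar.gz \
  gitea-data gitea-runner-data .env docker-compose.yml Caddyfile
sudo docker compose start
```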
4.1.5 Validate the infrastructure
- Confirm that the containers `caddy`, `gitea`, and `gitea-runner` are all `Up`.

```bash
sudo docker compose ps -a
```

- Confirm that `https://git.avaaz.ai` shows the Gitea login page with a valid TLS certificate (padlock icon) when opened in a browser.

- Confirm the response "You've successfully authenticated..., but Gitea does not provide shell access." when connecting to Gitea over SSH.

```bash
ssh -T -p 2222 git@git.avaaz.ai
```

- Create a `test` repo in Gitea and confirm that you can clone it.

```bash
git clone ssh://git@git.avaaz.ai:2222/<your-user>/test.git
```

- Confirm that the Actions runner `gitea-runner` is registered and online with status Idle.
  - Log in as admin at `https://git.avaaz.ai`.
  - Go to Site Administration → Actions → Runners.

- Add `.gitea/workflows/test.yml` to the `test` repo root, then commit and push.

```yaml
# Workflow name
name: Test Workflow

# Trigger on a push event to any branch
on:
  push:
    branches:
      # This means 'any branch'
      - '**'

# Define the jobs to run
jobs:
  hello:
    # Specify the runner label to use
    runs-on: [ "ubuntu-latest" ]
    # Define the steps for this job
    steps:
      - name: Run a Test Script
        run: echo "Hello from Gitea Actions!"
```

- Confirm a workflow run appears in Gitea → test repo → Actions tab and progresses from queued → in progress → success.

- Confirm the logs show the job being picked up, the container being created, and the "Hello from Gitea Actions!" output.

```bash
sudo docker logs -f gitea-runner
```
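If you prefer to validate from the command line instead of (or in addition to) the browser, a small sketch with curl, run from the laptop under the DNS and Caddy setup above:

```bash
# Expect a successful response with no certificate errors.
curl -sSI https://git.avaaz.ai | head -n 5

# The redirects configured in the Caddyfile can be checked the same way:
curl -sSI https://www.avaaz.ai | grep -i '^location'   # should point to https://avaaz.ai/...
curl -sSI https://avaaz.ai | grep -i '^location'       # should point to https://app.avaaz.ai/...
```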
4.2 Configure the Development Laptop
4.2.1 Run the Application
- Remove all cached Python packages stored by pip, remove local Python cache files, clear the cache used by uv, and forcibly clear the Node.js cache.

```bash
uv tool install cleanpy
pip cache purge && cleanpy . && uv cache clean && npm cache clean --force
```

- Resolve dependencies from pyproject.toml and upgrade all packages, then synchronize the virtual environment with the dependencies specified in uv.lock, including packages needed for development.

```bash
cd backend
uv lock --upgrade
uv sync --dev
```

- Lint and check the code for errors, style issues, and potential bugs, fixing them where possible, then discover and run the tests in tests/.

```bash
cd backend
uv run ruff check --fix && uv run pytest
```

- Start a local development API server on port 8000 that automatically reloads as you make code changes.

```bash
cd backend
uv run uvicorn src.main:app --reload --port 8000
```

- Scan dependencies for security vulnerabilities and attempt to fix them automatically by force-updating to the latest secure versions.

```bash
cd frontend
npm audit fix --force
```

- Install dependencies from package.json, then update those dependencies to the latest allowed versions based on their version ranges. Next, check the source code for stylistic and syntax errors according to the configured rules. Finally, compile and bundle the application for deployment or production use.

```bash
cd frontend
npm install && npm update && npm run lint && npm run build
```

- Execute the start script in package.json to launch the Node.js application in production mode. (A combined pre-push check chaining the backend and frontend steps is sketched after this list.)

```bash
cd frontend
npm run start
```
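The individual commands above can be chained into a single pre-push check. This is only a convenience sketch under the same assumptions as the steps above (a repo with `backend/` and `frontend/` directories, uv and npm installed); adjust paths and flags as needed.

```bash
#!/usr/bin/env bash
# Minimal pre-push check: backend lint + tests, then frontend lint + production build.
# Stops at the first failure thanks to `set -e`.
set -euo pipefail

# Backend: lint with Ruff (auto-fix) and run the pytest suite.
(cd backend && uv run ruff check --fix && uv run pytest)

# Frontend: install dependencies, lint, and make sure the production build succeeds.
(cd frontend && npm install && npm run lint && npm run build)

echo "All checks passed."
```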
5. Example Project Structure
avaaz.ai/
├── .dockerignore # Specifies files and directories to exclude from Docker builds, such as .git, node_modules, and build artifacts, to optimize image sizes.
├── .gitignore # Lists files and patterns to ignore in Git, including .env, __pycache__, node_modules, and logs, preventing sensitive or temporary files from being committed.
├── .gitattributes # Controls Git’s handling of files across platforms (e.g. normalizing line endings with * text=auto), and can force certain files to be treated as binary or configure diff/merge drivers.
│
├── .env.example # Template for environment variables, showing required keys like DATABASE_URL, GEMINI_API_KEY, LIVEKIT_API_KEY without actual values.
├── docker-compose.dev.yml # Docker Compose file for development environment: defines services for local frontend, backend, postgres, livekit with volume mounts for hot-reloading.
├── docker-compose.prod.yml # Docker Compose file for production: defines services for caddy, gitea (if integrated), frontend, backend, postgres, livekit with optimized settings and no volumes for code.
├── README.md # Project overview: includes setup instructions, architecture diagram (embed the provided Mermaid), contribution guidelines, and deployment steps.
│
├── .gitea/ # Directory for Gitea-specific configurations, as the repo is hosted on Gitea.
│ └── workflows/ # Contains YAML files for Gitea Actions workflows, enabling CI/CD pipelines.
│ ├── ci.yml # Workflow for continuous integration: runs tests, linting (Ruff), type checks, and builds on pull requests or pushes.
│ └── cd.yml # Workflow for continuous deployment: triggers builds and deploys Docker images to the VPS on merges to main.
│
├── .vscode/ # Editor configuration for VS Code to standardize the development environment for all contributors.
│ ├── extensions.json # Recommends VS Code extensions (e.g. Python, ESLint, Docker, GitLens) so developers get linting, formatting, and container tooling out of the box.
│ └── settings.json # Workspace-level VS Code settings: formatter on save, path aliases, Python/TypeScript language server settings, lint integration (Ruff, ESLint), and file exclusions.
│
├── backend/ # Root for the FastAPI backend, following Python best practices for scalable applications (inspired by FastAPI's "Bigger Applications" guide).
│ ├── Dockerfile # Builds the backend container: installs UV, copies pyproject.toml, syncs dependencies, copies source code, sets entrypoint to Gunicorn/Uvicorn.
│ ├── pyproject.toml # Project metadata and dependencies: uses UV for dependency management, specifies FastAPI, SQLAlchemy, Pydantic, LiveKit-Agent, etc.; includes [tool.uv], [tool.ruff] sections for config.
│ ├── uv.lock # Lockfile generated by UV, ensuring reproducible dependencies across environments.
│ ├── ruff.toml # Configuration for Ruff linter and formatter (can be in pyproject.toml): sets rules for Python code style, ignoring certain errors if needed.
│ ├── alembic.ini # Configuration for Alembic migrations: points to SQLAlchemy URL and script location.
│ ├── alembic/ # Directory for database migrations using Alembic, integrated with SQLAlchemy.
│ │ ├── env.py # Alembic environment script: sets up the migration context with SQLAlchemy models and pgvector support.
│ │ ├── script.py.mako # Template for generating migration scripts.
│ │ └── versions/ # Auto-generated migration files: each represents a database schema change, e.g., create_tables.py.
│ ├── src/ # Source code package: keeps business logic isolated, importable as 'from src import ...'.
│ │ ├── __init__.py # Makes src a package.
│ │ ├── main.py # FastAPI app entrypoint: initializes app, includes routers, sets up middleware, connects to Gemini Live via prompts.
│ │ ├── config.py # Application settings: uses Pydantic-settings to load from .env, e.g., DB_URL, API keys for Gemini, LiveKit, Stripe (for pricing plans).
│ │ ├── api/ # API-related modules: organizes endpoints and dependencies.
│ │ │ ├── __init__.py # Package init.
│ │ │ ├── dependencies.py # Global dependencies: e.g., current_user via FastAPI Users, database session.
│ │ │ └── v1/ # Versioned API: allows future versioning without breaking changes.
│ │ │ └── routers/ # API routers: modular endpoints.
│ │ │ ├── auth.py # Handles authentication: uses FastAPI Users for JWT/OAuth, user registration/login.
│ │ │ ├── users.py # User management: progress tracking, plan subscriptions.
│ │ │ ├── lessons.py # Lesson endpoints: structured oral language lessons, progress tracking.
│ │ │ ├── chat.py # Integration with LiveKit and Gemini: handles conversational AI tutor sessions.
│ │ │ └── documents.py # Document upload and processing: endpoints for file uploads, using Docling for parsing and semantic search prep.
│ │ ├── core/ # Core utilities: shared across the app.
│ │ │ ├── __init__.py # Package init.
│ │ │ └── security.py # Security functions: hashing, JWT handling via FastAPI Users.
│ │ ├── db/ # Database layer: SQLAlchemy setup with pgvector for vector embeddings (e.g., for AI tutor memory).
│ │ │ ├── __init__.py # Package init.
│ │ │ ├── base.py # Base model class for SQLAlchemy declarative base.
│ │ │ ├── session.py # Database session management: async session maker.
│ │ │ └── models/ # SQLAlchemy models.
│ │ │ ├── __init__.py # Exports all models.
│ │ │ ├── user.py # User model: includes fields for progress, plan, proficiency.
│ │ │ ├── lesson.py # Lesson and session models: tracks user interactions, B2 exam prep.
│ │ │ └── document.py # Document chunk model: for semantic search, with text, metadata, embedding (pgvector).
│ │ ├── schemas/ # Pydantic schemas: for API validation and serialization.
│ │ │ ├── __init__.py # Exports schemas.
│ │ │ ├── user.py # User schemas: create, read, update.
│ │ │ ├── lesson.py # Lesson schemas: input/output for AI interactions.
│ │ │ └── document.py # Document schemas: for upload responses and search queries.
│ │ └── services/ # Business logic services: decoupled from API.
│ │ ├── __init__.py # Package init.
│ │ ├── llm.py # Gemini Live integration: prompt engineering for conversational tutor.
│ │ ├── payment.py # Handles pricing plans: integrates with Stripe for subscriptions (Spark, Glow, etc.).
│ │ └── document.py # Docling processing: parses files, chunks, embeds (via Gemini), stores for semantic search.
│ └── tests/ # Unit and integration tests: uses pytest, Hypothesis for property-based testing, httpx for API testing.
│ ├── __init__.py # Package init.
│ ├── conftest.py # Pytest fixtures: e.g., test database, mock Gemini.
│ └── test_users.py # Example test file: tests user endpoints.
│
├── frontend/ # Root for Next.js frontend and PWA, following Next.js app router best practices (2025 standards: improved SSR, layouts).
│ ├── Dockerfile # Builds the frontend container: installs dependencies, builds Next.js, serves with Node.
│ ├── .eslintrc.json # ESLint configuration: extends next/core-web-vitals, adds rules for code quality.
│ ├── next.config.js # Next.js config: enables PWA, images optimization, API routes if needed.
│ ├── package.json # Node dependencies: includes next, react, @livekit/client for WebRTC, axios or fetch for API calls.
│ ├── package-lock.json # Lockfile for reproducible npm installs.
│ ├── tsconfig.json # TypeScript config: targets ES2022, includes paths for components.
│ ├── app/ # App router directory: pages, layouts, loading states.
│ │ ├── globals.css # Global styles: Tailwind or CSS modules.
│ │ ├── layout.tsx # Root layout: includes providers, navigation.
│ │ ├── page.tsx # Home page: landing for avaaz.ai.
│ │ └── components/ # Reusable UI components.
│ │ ├── ChatInterface.tsx # Component for conversational tutor using LiveKit WebRTC.
│ │ └── ProgressTracker.tsx # Tracks user progress toward B2 exam.
│ ├── lib/ # Utility functions: API clients, hooks.
│ │ └── api.ts # API client: typed fetches to backend endpoints.
│ └── public/ # Static assets.
│ ├── favicon.ico # Site icon.
│ └── manifest.json # PWA manifest: for mobile app-like experience.
│
├── infra/ # Infrastructure configurations: Dockerfiles and configs for supporting services, keeping them separate for scalability.
│ ├── caddy/ # Caddy reverse proxy setup.
│ │ ├── Dockerfile # Extends official Caddy image, copies Caddyfile.
│ │ └── Caddyfile # Caddy config: routes www.avaaz.ai to frontend, api.avaaz.ai to backend, WSS to LiveKit; auto HTTPS.
│ ├── gitea/ # Gitea git server (added for customization if needed; otherwise use official image directly in Compose).
│ │ ├── Dockerfile # Optional: Extends official Gitea image, copies custom config for Actions integration.
│ │ └── app.ini # Gitea config: sets up server, database, Actions runner.
│ ├── livekit/ # LiveKit server for real-time audio/video in tutor sessions.
│ │ ├── Dockerfile # Extends official LiveKit image, copies config.
│ │ └── livekit.yaml # LiveKit config: API keys, room settings, agent integration for AI tutor.
│ └── postgres/ # PostgreSQL with pgvector.
│ ├── Dockerfile # Extends postgres image, installs pgvector extension.
│ └── init/ # Initialization scripts.
│ └── 00-pgvector.sql # SQL to create pgvector extension on db init.
│
└── docs/ # Documentation: architecture, APIs, etc.
└── architecture.md # Detailed system explanation, including the provided Mermaid diagram.
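As a usage sketch for the compose files listed above, local development and production deployment would typically be driven like this; the service contents are assumptions based on the comments in the tree, so adapt the commands to the actual files.

```bash
# Development: build and run the dev stack (frontend, backend, postgres, livekit)
# with the volume mounts for hot-reloading described in docker-compose.dev.yml.
docker compose -f docker-compose.dev.yml up --build

# Tear the dev stack down (add -v to also drop named volumes such as the dev database).
docker compose -f docker-compose.dev.yml down

# Production (on the VPS): pull prebuilt images and start detached, as in section 4.1.4.
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d
```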