owenkaplinsky committed on
Commit f108b29 · 1 Parent(s): dbe2d5a

Clean initial commit for HuggingFace

# Conflicts:
# .gitignore
# Dockerfile
# README.md
# docker/Dockerfile.candidates_db_init
# docker/Dockerfile.supervisor_api
# docker/docker-compose.yml
# docker/info.md
# scripts/db/list_candidates.py
# scripts/db/test_connection.py
# scripts/db/test_session.py
# scripts/db/wipe.py
# src/frontend/streamlit/voice_screening_ui/proxy.py
# start.sh
# tests/create_dummy_candidate.py
# tests/verify_voice_integration.py

.gitignore CHANGED
@@ -67,7 +67,4 @@ src/database/cvs/tests/*.txt
 .lgcache/
 .langgraph_api/
 
-.idea/
-
-# any .wav files
-*.wav
+.idea/
 
Dockerfile CHANGED
@@ -2,8 +2,8 @@ FROM python:3.12-slim
 
 WORKDIR /app
 
-# System dependencies (include Postgres server so DB can run in-container)
-RUN apt-get update && apt-get install -y gcc libpq-dev postgresql postgresql-contrib gosu && rm -rf /var/lib/apt/lists/*
+# System dependencies
+RUN apt-get update && apt-get install -y gcc libpq-dev && rm -rf /var/lib/apt/lists/*
 
 # Copy requirement files
 COPY requirements/base.txt requirements/base.txt
@@ -34,8 +34,28 @@ COPY secrets/ /app/secrets/
 ENV PYTHONPATH=/app
 EXPOSE 7860
 
-# Copy entry script (includes Postgres startup)
-COPY start.sh /app/start.sh
-RUN chmod +x /app/start.sh
+# Create entry script inside the image (avoids missing file in build context)
+RUN printf '%s\n' \
+    '#!/usr/bin/env bash' \
+    'set -e' \
+    '' \
+    '# Hugging Face provides PORT; default to 7860 locally' \
+    'export PORT=\"${PORT:-7860}\"' \
+    '' \
+    '# Defaults for local in-container routing; can be overridden via env' \
+    'export SUPERVISOR_API_URL=\"${SUPERVISOR_API_URL:-http://127.0.0.1:8080/api/v1/supervisor}\"' \
+    'export DATABASE_API_URL=\"${DATABASE_API_URL:-http://127.0.0.1:8080/api/v1/db}\"' \
+    'export CV_UPLOAD_API_URL=\"${CV_UPLOAD_API_URL:-http://127.0.0.1:8080/api/v1/cv}\"' \
+    '' \
+    '# Start FastAPI backend' \
+    'uvicorn src.api.app:app --host 0.0.0.0 --port 8080 &' \
+    '' \
+    '# Give the API a moment to come up' \
+    'sleep 2' \
+    '' \
+    '# Run Gradio frontend' \
+    'python src/frontend/gradio/app.py' \
+    > /app/start.sh \
+    && chmod +x /app/start.sh
 
 CMD ["/app/start.sh"]
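The second hunk above writes `/app/start.sh` at build time with `printf '%s\n'` instead of COPYing it. A minimal local sketch of the same technique (demo path and contents are assumptions, not the real image script):

```shell
# Emit a script file line by line with printf, then make it executable —
# the same trick the Dockerfile uses to avoid COPYing a missing start.sh.
printf '%s\n' \
  '#!/usr/bin/env bash' \
  'set -e' \
  'export PORT="${PORT:-7860}"' \
  'echo "listening on ${PORT}"' \
  > /tmp/start_demo.sh
chmod +x /tmp/start_demo.sh
head -n 1 /tmp/start_demo.sh
```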
README.md CHANGED
@@ -171,7 +171,7 @@ docker compose --env-file .env -f docker/docker-compose.yml up --build
 The platform orchestrates a complete recruitment pipeline, interacting with both Candidates and the HR Supervisor.
 
 ### 1. The Recruitment Lifecycle
-The system tracks candidates through a defined state machine (see `src/backend/state/candidate.py` for the `CandidateStatus` enum).
+The system tracks candidates through a defined state machine (see `src/state/candidate.py` for the `CandidateStatus` enum).
 
 ```mermaid
 graph TD
@@ -231,7 +231,7 @@ graph TD
 
 To improve the reliability of complex evaluations (such as CV scoring and Voice Interview judging), we enforce **Chain-of-Thought (CoT)** reasoning within our structured outputs, inspired by [Wei et al. (2022)](https://arxiv.org/abs/2201.11903).
 
-By requiring the model to generate a textual explanation *before* assigning numerical scores, we ensure the model "thinks" through the evidence before committing to a decision. This is implemented directly in our Pydantic schemas (e.g., `src/backend/agents/cv_screening/schemas/output_schema.py`), where field order matters:
+By requiring the model to generate a textual explanation *before* assigning numerical scores, we ensure the model "thinks" through the evidence before committing to a decision. This is implemented directly in our Pydantic schemas (e.g., `src/agents/cv_screening/schemas/output_schema.py`), where field order matters:
 
 ```mermaid
 flowchart LR
@@ -362,14 +362,14 @@ A breakdown of the various LLMs, Agents, and Workflows powering the system.
 
 | Component | Type | Model | Description | Location |
 | :--- | :--- | :--- | :--- | :--- |
-| **Supervisor Agent** | 🤖 **Agent** | `gpt-4o` | Orchestrates delegation, planning, and context management. | `src/backend/agents/supervisor/supervisor_v2.py` |
-| **Gmail Agent** | 🤖 **Agent** | `gpt-4o` | Autonomous email management via MCP (read/send/label). | `src/backend/agents/gmail/gmail_agent.py` |
-| **GCalendar Agent** | 🤖 **Agent** | `gpt-4o` | Autonomous calendar scheduling via MCP. | `src/backend/agents/gcalendar/gcalendar_agent.py` |
-| **DB Executor** | 🤖 **Agent** | `gpt-4o` | Writes SQL/Python to query the database (CodeAct). | `src/backend/agents/db_executor/db_executor.py` |
-| **CV Screening** | ⚙️ **Workflow** | `gpt-4o` | Deterministic pipeline: Fetch → Read → Evaluate → Save. | `src/backend/agents/cv_screening/cv_screening_workflow.py` |
-| **Voice Judge** | 🧠 **Simple LLM** | `gpt-4o-audio` | Evaluates audio/transcripts for sentiment & confidence. | `src/backend/agents/voice_screening/judge.py` |
-| **Doc Parser** | 🧠 **Simple LLM** | `gpt-4o-mini` | Vision-based PDF-to-Markdown conversion. | `src/backend/doc_parser/pdf_to_markdown.py` |
-| **History Manager** | 🧠 **Simple LLM** | `gpt-4o-mini` | Summarizes conversation history for context compaction. | `src/backend/context_eng/history_manager.py` |
+| **Supervisor Agent** | 🤖 **Agent** | `gpt-4o` | Orchestrates delegation, planning, and context management. | `src/agents/supervisor/supervisor_v2.py` |
+| **Gmail Agent** | 🤖 **Agent** | `gpt-4o` | Autonomous email management via MCP (read/send/label). | `src/agents/gmail/gmail_agent.py` |
+| **GCalendar Agent** | 🤖 **Agent** | `gpt-4o` | Autonomous calendar scheduling via MCP. | `src/agents/gcalendar/gcalendar_agent.py` |
+| **DB Executor** | 🤖 **Agent** | `gpt-4o` | Writes SQL/Python to query the database (CodeAct). | `src/agents/db_executor/db_executor.py` |
+| **CV Screening** | ⚙️ **Workflow** | `gpt-4o` | Deterministic pipeline: Fetch → Read → Evaluate → Save. | `src/agents/cv_screening/cv_screening_workflow.py` |
+| **Voice Judge** | 🧠 **Simple LLM** | `gpt-4o-audio` | Evaluates audio/transcripts for sentiment & confidence. | `src/agents/voice_screening/judge.py` |
+| **Doc Parser** | 🧠 **Simple LLM** | `gpt-4o-mini` | Vision-based PDF-to-Markdown conversion. | `src/doc_parser/pdf_to_markdown.py` |
+| **History Manager** | 🧠 **Simple LLM** | `gpt-4o-mini` | Summarizes conversation history for context compaction. | `src/context_eng/history_manager.py` |
 
 ### 🔌 ***`Integrated MCP Servers`***
 The system integrates **Model Context Protocol (MCP)** servers to connect agents to external tools in a secure, standardized way.
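The CoT field-ordering idea in the README hunk above can be sketched with a stdlib stand-in (the real project uses Pydantic schemas in `src/agents/cv_screening/schemas/output_schema.py`; the field names below are illustrative assumptions):

```python
from dataclasses import dataclass, fields

# Illustrative stand-in for a CoT-ordered structured-output schema:
# the free-text rationale is declared *before* the numeric score, so a
# schema-following model must emit its reasoning first.
@dataclass
class CVScreeningResult:
    reasoning: str  # chain-of-thought evidence, generated first
    score: int      # final decision, committed only after the rationale

# Python preserves declaration order, which is what structured-output
# libraries rely on when building the JSON schema.
print([f.name for f in fields(CVScreeningResult)])  # → ['reasoning', 'score']
```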
docker/Dockerfile.candidates_db_init CHANGED
@@ -15,10 +15,8 @@ COPY ../requirements/base.txt ./requirements/base.txt
 COPY ../requirements/db.txt ./requirements/db.txt
 RUN pip install --no-cache-dir -r requirements/db.txt
 
-# Copy required source modules
-COPY src/backend/database/candidates ./src/backend/database/candidates
-COPY src/backend/state ./src/backend/state
-COPY src/backend/configs ./src/backend/configs
+# Copy *only* the candidate database module
+COPY src/database/candidates ./src/database/candidates
 
 # Default command - use dedicated init script to avoid circular import
-CMD ["python", "-m", "src.backend.database.candidates.init_db"]
+CMD ["python", "-m", "src.database.candidates.init_db"]
docker/Dockerfile.supervisor_api CHANGED
@@ -39,5 +39,5 @@ COPY .env /app/.env
 EXPOSE 8080
 
 # Run FastAPI with uvicorn
-CMD ["uvicorn", "src.backend.api.app:app", "--host", "0.0.0.0", "--port", "8080"]
+CMD ["uvicorn", "src.api.app:app", "--host", "0.0.0.0", "--port", "8080"]
 
docker/docker-compose.yml CHANGED
@@ -19,10 +19,6 @@ services:
       interval: 3s
       timeout: 3s
       retries: 5
-    # Hey compose here is env file,
-    # pass it to container, but not the .env itself
-    env_file:
-      - ../.env
     environment:
       POSTGRES_HOST: ${POSTGRES_HOST}
       POSTGRES_PORT: ${POSTGRES_PORT}
@@ -38,23 +34,18 @@ services:
     # Initializes the database or starts the API (depending on command).
     container_name: candidates_db_init
     build:
-      context: ..  # build from the project root
+      context: ..  # build from the project root
       dockerfile: docker/Dockerfile.candidates_db_init
     depends_on:
       db:
         condition: service_healthy
-    # Hey compose here is env file,
-    # pass it to container, but not the .env itself
-    env_file:
-      - ../.env
     environment:
-      # Explicitly set POSTGRES_HOST to the service name 'db' for Docker networking
-      POSTGRES_HOST: db
-      POSTGRES_PORT: 5432
+      POSTGRES_HOST: ${POSTGRES_HOST}
+      POSTGRES_PORT: ${POSTGRES_PORT}
       POSTGRES_USER: ${POSTGRES_USER}
       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
       POSTGRES_DB: ${POSTGRES_DB}
-    command: ["python", "-m", "src.backend.database.candidates.init_db"]
+    # command: ["python", "-m", "src.database.candidates.init_db"]
 
     volumes:
       # --- Local code mount (for development only) ---
@@ -62,7 +53,7 @@ services:
       # into the container at /app.
       # ✅ Enables live code changes without rebuilding the image.
       # ⚠️ Do NOT use in production – overrides the built image code.
-      - ../:/app  # optional: live reload for local dev
+      - ../:/app  # optional: live reload for local dev
 
     networks:
       - hrnet
@@ -78,17 +69,15 @@ services:
     depends_on:
       - db
       - supervisor_api
-    env_file:
-      - ../.env
     environment:
       # Database connection
-      POSTGRES_HOST: db
-      POSTGRES_PORT: 5432
+      POSTGRES_HOST: ${POSTGRES_HOST}
+      POSTGRES_PORT: ${POSTGRES_PORT}
       POSTGRES_USER: ${POSTGRES_USER}
       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
       POSTGRES_DB: ${POSTGRES_DB}
-      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
-      CV_UPLOAD_PATH: /app/src/backend/database/cvs/uploads
+      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
+      CV_UPLOAD_PATH: /app/src/database/cvs/uploads
       # App specific
       CV_UPLOAD_API_URL: http://supervisor_api:8080/api/v1/cv
       PYTHONPATH: /app
@@ -96,8 +85,15 @@ services:
       # Mount local code for live updates
       - ../:/app
       # Shared volume for CV uploads (persistent)
-      - ../src/backend/database/cvs:/app/src/backend/database/cvs
-    command: ["streamlit", "run", "src/frontend/streamlit/cv_ui/app.py", "--server.port=8501", "--server.address=0.0.0.0"]
+      - ../src/database/cvs:/app/src/database/cvs
+    command:
+      [
+        "streamlit",
+        "run",
+        "src/frontend/streamlit/cv_ui/app.py",
+        "--server.port=8501",
+        "--server.address=0.0.0.0",
+      ]
     networks:
       - hrnet
 
@@ -109,8 +105,6 @@ services:
     dockerfile: docker/Dockerfile.voice_proxy
     ports:
       - "8000:8000"
-    env_file:
-      - ../.env
     depends_on:
      - db
      - candidates_db_init
@@ -118,16 +112,20 @@ services:
       PYTHONPATH: /app
       OPENAI_API_KEY: ${OPENAI_API_KEY}
       BACKEND_API_URL: http://supervisor_api:8080
-      # Database connection
-      POSTGRES_HOST: db
-      POSTGRES_PORT: 5432
-      POSTGRES_USER: ${POSTGRES_USER}
-      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
-      POSTGRES_DB: ${POSTGRES_DB}
     volumes:
       # Mount local code for live updates
       - ../:/app
-    command: ["python", "-m", "uvicorn", "src.frontend.streamlit.voice_screening_ui.proxy:app", "--host", "0.0.0.0", "--port", "8000"]
+    command:
+      [
+        "python",
+        "-m",
+        "uvicorn",
+        "src.frontend.streamlit.voice_screening_ui.proxy:app",
+        "--host",
+        "0.0.0.0",
+        "--port",
+        "8000",
+      ]
     networks:
       - hrnet
 
@@ -138,12 +136,10 @@ services:
     context: ..
     dockerfile: docker/Dockerfile.voice_screening
     ports:
-      - "8502:8501"  # Map host port 8502 to container port 8501
+      - "8502:8501"  # Map host port 8502 to container port 8501
     depends_on:
      - db
      - websocket_proxy
-    env_file:
-      - ../.env
     environment:
       DATABASE_URL: postgresql://agentic_user:password123@db:5432/agentic_hr
       PYTHONPATH: /app
@@ -152,7 +148,14 @@ services:
     volumes:
       # Mount local code for live updates
       - ../:/app
-    command: ["streamlit", "run", "src/frontend/streamlit/voice_screening_ui/app.py", "--server.port=8501", "--server.address=0.0.0.0"]
+    command:
+      [
+        "streamlit",
+        "run",
+        "src/frontend/streamlit/voice_screening_ui/app.py",
+        "--server.port=8501",
+        "--server.address=0.0.0.0",
+      ]
     networks:
       - hrnet
 
@@ -163,15 +166,13 @@ services:
     context: ..
     dockerfile: docker/Dockerfile.supervisor_api
     ports:
-      - "8080:8080"  # Map host port 8080 to container port 8080
+      - "8080:8080"  # Map host port 8080 to container port 8080
     depends_on:
      - db
-    env_file:
-      - ../.env
     environment:
       # We set POSTGRES_HOST to 'db' so the agent connects to the container internal network
-      POSTGRES_HOST: db
-      POSTGRES_PORT: 5432
+      POSTGRES_HOST: ${POSTGRES_HOST}
+      POSTGRES_PORT: ${POSTGRES_PORT}
       POSTGRES_USER: ${POSTGRES_USER}
       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
       POSTGRES_DB: ${POSTGRES_DB}
@@ -179,12 +180,19 @@ services:
       PROMPTLAYER_API_KEY: ${PROMPTLAYER_API_KEY}
       OPENAI_API_KEY: ${OPENAI_API_KEY}
       WEBSOCKET_PROXY_URL: ws://websocket_proxy:8000/ws/realtime
-      CV_UPLOAD_PATH: /app/src/backend/database/cvs/uploads
-      CV_PARSED_PATH: /app/src/backend/database/cvs/parsed
     volumes:
       # Mount local code for live updates
       - ../:/app
-    command: ["uvicorn", "src.backend.api.app:app", "--host", "0.0.0.0", "--port", "8080", "--reload"]
+    command:
+      [
+        "uvicorn",
+        "src.api.app:app",
+        "--host",
+        "0.0.0.0",
+        "--port",
+        "8080",
+        "--reload",
+      ]
     networks:
       - hrnet
 
@@ -195,12 +203,10 @@ services:
     context: ..
     dockerfile: docker/Dockerfile.supervisor
     ports:
-      - "8503:8501"  # Map host port 8503 to container port 8501
+      - "8503:8501"  # Map host port 8503 to container port 8501
    depends_on:
      - db
      - supervisor_api
-    env_file:
-      - ../.env
     environment:
       # We set POSTGRES_HOST to 'db' so the agent connects to the container internal network
       PYTHONPATH: /app
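The compose hunks above replace hard-coded `db:5432` with `${POSTGRES_HOST}:${POSTGRES_PORT}` interpolation. A small Python sketch of how `DATABASE_URL` is assembled from those variables, using the dev values already present in this docker-compose.yml:

```python
# Compose-style variable interpolation for DATABASE_URL; the sample values
# mirror the dev credentials that appear elsewhere in this compose file.
env = {
    "POSTGRES_USER": "agentic_user",
    "POSTGRES_PASSWORD": "password123",
    "POSTGRES_HOST": "db",
    "POSTGRES_PORT": "5432",
    "POSTGRES_DB": "agentic_hr",
}
template = ("postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}"
            "@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}")
url = template.format(**env)
print(url)  # → postgresql://agentic_user:password123@db:5432/agentic_hr
```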
docker/info.md CHANGED
@@ -20,7 +20,6 @@
 docker compose --env-file .env -f docker/docker-compose.yml up --build
 ```
 
-
 ---
 
 ### Resetting the Environment
@@ -31,14 +30,11 @@ To completely reset the environment and database:
 
 ```bash
 # 1. Stop running containers
-docker compose -f docker/docker-compose.yml down --remove-orphans
+docker compose -f docker/docker-compose.yml down
 
 # 2. Remove the persistent database volume
 docker volume rm docker_postgres_data
 
-# 3. Prune unused Docker resources (optional but recommended)
-docker system prune -f
-
-# 4. Rebuild and start fresh
+# 3. Rebuild and start fresh
 docker compose --env-file .env -f docker/docker-compose.yml up --build
 ```
scripts/db/list_candidates.py CHANGED
@@ -10,8 +10,8 @@ from sqlalchemy.exc import ProgrammingError
 # Ensure project root is in path
 import scripts.db  # noqa: F401
 
-from src.backend.database.candidates.client import SessionLocal
-from src.backend.database.candidates.models import Candidate
+from src.database.candidates.client import SessionLocal
+from src.database.candidates.models import Candidate
 
 
 def list_candidates(limit: int = 10) -> bool:
@@ -41,17 +41,7 @@ def list_candidates(limit: int = 10) -> bool:
             .all()
         )
         for c in candidates:
-            print(f" - ID: {c.id}")
-            print(f"   Full Name: {c.full_name}")
-            print(f"   Email: {c.email}")
-            print(f"   Phone: {c.phone_number}")
-            print(f"   CV Path: {c.cv_file_path}")
-            print(f"   Parsed CV Path: {c.parsed_cv_file_path}")
-            print(f"   Status: {c.status}")
-            print(f"   Auth Code: {c.auth_code}")
-            print(f"   Created At: {c.created_at}")
-            print(f"   Updated At: {c.updated_at}")
-            print("-" * 40)
+            print(f" - {c.full_name} | {c.email} | Status: {c.status}")
 
         return True
 
scripts/db/test_connection.py CHANGED
@@ -11,7 +11,7 @@ from sqlalchemy import text
 # Ensure project root is in path
 import scripts.db  # noqa: F401
 
-from src.backend.database.candidates.client import get_engine
+from src.database.candidates.client import get_engine
 
 
 def test_connection() -> bool:
scripts/db/test_session.py CHANGED
@@ -10,7 +10,7 @@ from sqlalchemy import text
 # Ensure project root is in path
 import scripts.db  # noqa: F401
 
-from src.backend.database.candidates.client import SessionLocal
+from src.database.candidates.client import SessionLocal
 
 
 def test_session_query() -> bool:
scripts/db/wipe.py CHANGED
@@ -11,7 +11,7 @@ from sqlalchemy import text
 project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../'))
 sys.path.append(project_root)
 
-from src.backend.database.candidates.client import get_engine
+from src.database.candidates.client import get_engine
 
 def wipe_database():
     print("⚠️ WARNING: This will PERMANENTLY DELETE ALL RECORDS from the 'candidates' table and all related tables (CASCADE).")
src/agents/db_executor/codeact/core/codeact.py CHANGED
@@ -319,13 +319,6 @@ class CodeActAgent:
         serializable_new_vars = self._filter_serializable(new_vars)
         new_context = {**existing_context, **serializable_new_vars}
 
-        # Format output properly
-        content_str = (
-            f"Sandbox result of your executed code:\n{json.dumps(output, default=str)}"
-            if not isinstance(output, str)
-            else f"Sandbox result of your executed code:\n{output}"
-        )
-
         # Return OpenAI-compliant tool result
         return {
             "messages": [
@@ -333,7 +326,13 @@ class CodeActAgent:
                     "role": "tool",
                     "tool_call_id": tool_call_id,
                     "name": "sandbox",
-                    "content": content_str
+                    "content": (
+                        f"Sandbox result of your executed code:\n{json.dumps(output)}"
+                        if not isinstance(output, str)
+                        else f"Sandbox result of your executed code:\n{output}"
+                        # Keep as string if already string else JSON serialize
+                    ),
+
                 }
             ],
             "context": new_context,
@@ -364,13 +363,6 @@ class CodeActAgent:
         serializable_new_vars = self._filter_serializable(new_vars)
         new_context = {**existing_context, **serializable_new_vars}
 
-        # Format output properly
-        content_str = (  # NOTE: before "json.dumps(output)"
-            f"Sandbox result of your executed code:\n{json.dumps(output, default=str)}"
-            if not isinstance(output, str)
-            else f"Sandbox result of your executed code:\n{output}"
-        )
-
         # Return OpenAI-compliant tool result
         return {
             "messages": [
@@ -378,7 +370,11 @@ class CodeActAgent:
                     "role": "tool",
                     "tool_call_id": tool_call_id,
                     "name": "sandbox",
-                    "content": content_str,
+                    "content": (
+                        f"Sandbox result of your executed code:\n{json.dumps(output)}"
+                        if not isinstance(output, str)
+                        else f"Sandbox result of your executed code:\n{output}"
+                    ),
                     # Keep as string if already string else JSON serialize
                 }
             ],
@@ -542,3 +538,4 @@ if __name__ == "__main__":
         pretty_print_state(chunk, show_context=False)
 
     print("\n")
+
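One behavioral note on the hunks above: the inlined expression calls `json.dumps(output)` without the `default=str` fallback the deleted `content_str` helper had, so non-JSON-serializable sandbox values (datetimes, for instance) will now raise instead of being coerced. A stdlib illustration of the difference:

```python
import json
from datetime import datetime

output = {"ran_at": datetime(2024, 1, 1)}

# Pre-commit helper behavior: default=str coerces non-JSON types to strings.
coerced = json.dumps(output, default=str)
print(coerced)  # → {"ran_at": "2024-01-01 00:00:00"}

# Post-commit inline expression: the same payload raises TypeError.
try:
    json.dumps(output)
    raised = False
except TypeError:
    raised = True
print(raised)  # → True
```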
src/agents/db_executor/db_executor.py CHANGED
@@ -11,7 +11,6 @@ from langchain_core.tools import tool
 from typing import Dict, Any
 from src.prompts import get_prompt
 from src.database.candidates import evaluate_cv_screening_decision
-from src.state.candidate import CandidateStatus, InterviewStatus, DecisionStatus
 
 
 SYSTEM_PROMPT = get_prompt(
@@ -41,9 +40,6 @@ def db_executor(query: str) -> str:
         "VoiceScreeningResult": VoiceScreeningResult,
         "InterviewScheduling": InterviewScheduling,
         "FinalDecision": FinalDecision,
-        "CandidateStatus": CandidateStatus,
-        "InterviewStatus": InterviewStatus,
-        "DecisionStatus": DecisionStatus,
     }
 
     try:
@@ -81,3 +77,19 @@ def db_executor(query: str) -> str:
 
 
 
+if __name__ == "__main__":
+    from rich.console import Console
+    from rich.panel import Panel
+
+    console = Console()
+    query = "Fetch all candidates and their status."
+
+    console.rule("[bold magenta]DB Executor Test Run[/bold magenta]")
+    console.print(f"[cyan]Query:[/] {query}\n")
+
+    result = db_executor(query)
+
+    # 🧠 Show model result nicely
+    console.print(Panel.fit(result, title="🧠 Model Output", border_style="blue"))
+
+    console.rule("[bold green]End of Execution[/bold green]")
src/agents/gcalendar/gcalendar_agent.py CHANGED
@@ -34,7 +34,8 @@ def gcalendar_agent(query: str) -> str:
     # Load settings
     settings = GoogleCalendarSettings()
     CALENDAR_MCP_DIR = settings.calendar_mcp_dir
-    CREDS, TOKEN = settings.materialize_files()
+    CREDS = settings.creds
+    TOKEN = settings.token
 
     # Initialize model
     model = ChatOpenAI(model="gpt-4o", temperature=0)
src/agents/gmail/gmail_agent.py CHANGED
@@ -1,8 +1,6 @@
 import asyncio
-import os
 import shutil
 from pathlib import Path
-from pydantic_core import ValidationError
 from langchain_core.tools import tool
 from langchain_mcp_adapters.client import MultiServerMCPClient
 from langchain.agents import create_agent
@@ -41,24 +39,11 @@ def gmail_agent(query: str) -> str:
     if not UV_PATH:
         return "❌ Error: 'uv' executable not found. Please ensure uv is installed and in the system PATH."
 
-    # Validate required Gmail settings before proceeding
-    creds_env = os.getenv("GMAIL_CREDS_JSON")
-    token_env = os.getenv("GMAIL_TOKEN_JSON")
-    if not creds_env or not token_env:
-        return "❌ Gmail credentials not configured. Set GMAIL_CREDS_JSON and GMAIL_TOKEN_JSON environment variables."
-
     try:
         import asyncio
         async def _run_async():
             # Load settings
-            try:
-                # Pass env values directly to avoid reliance on env file loading
-                settings = GMailSettings(creds_json=creds_env, token_json=token_env)
-            except ValidationError as ve:
-                return f"❌ Gmail settings invalid: {ve}"
-            except Exception as e:
-                return f"❌ Gmail settings error: {e}"
-            creds_path, token_path = settings.materialize_files()
+            settings = GMailSettings()
 
             # Initialize model
             model = ChatOpenAI(model="gpt-4o", temperature=0)
@@ -71,8 +56,8 @@ def gmail_agent(query: str) -> str:
                     "args": [
                         "--directory", str(settings.gmail_mcp_dir),
                         "run", "gmail",
-                        "--creds-file-path", str(creds_path),
-                        "--token-path", str(token_path),
+                        "--creds-file-path", str(settings.creds),
+                        "--token-path", str(settings.token),
                     ],
                     "transport": "stdio",
                 }
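The gmail and gcalendar hunks both swap `materialize_files()` for plain settings attributes (`settings.creds`, `settings.token`). A minimal sketch of that attribute-style settings pattern, assuming env-var-backed defaults (the class, env-var names, and default paths below are illustrative, not the real `GMailSettings`):

```python
import os
from dataclasses import dataclass, field

# Illustrative attribute-style settings object: credential locations are
# plain attributes resolved once, instead of files materialized on demand.
@dataclass
class GmailSettingsSketch:
    creds: str = field(default_factory=lambda: os.environ.get(
        "GMAIL_CREDS_PATH", "secrets/gmail_creds.json"))  # hypothetical env var
    token: str = field(default_factory=lambda: os.environ.get(
        "GMAIL_TOKEN_PATH", "secrets/gmail_token.json"))  # hypothetical env var

settings = GmailSettingsSketch()
args = ["--creds-file-path", str(settings.creds),
        "--token-path", str(settings.token)]
print(args[0], args[2])
```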
src/context_eng/compacting_supervisor.py CHANGED
@@ -135,7 +135,7 @@ history_manager = HistoryManager(memory_saver=memory)
 compacting_supervisor = CompactingSupervisor(
     agent=supervisor_agent,
     history_manager=history_manager,
-    token_limit=2500,
+    token_limit=500,
     compaction_ratio=0.5
 )
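For context on the `token_limit=500` change above, a hedged sketch of how a token limit plus compaction ratio typically drive history compaction (the real logic lives in `CompactingSupervisor`; this helper is illustrative only):

```python
# Illustrative trigger logic: compact once the history exceeds token_limit,
# shrinking it toward token_limit * compaction_ratio (values from this commit).
def should_compact(history_tokens: int,
                   token_limit: int = 500,
                   compaction_ratio: float = 0.5) -> tuple[bool, int]:
    """Return (needs_compaction, target_token_budget)."""
    if history_tokens <= token_limit:
        return False, history_tokens
    return True, int(token_limit * compaction_ratio)

print(should_compact(400))   # → (False, 400)
print(should_compact(1200))  # → (True, 250)
```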
 
src/database/candidates/client.py CHANGED
@@ -32,4 +32,4 @@ def get_engine():
 
 # --- SQLAlchemy session setup ---
 engine = get_engine()
-SessionLocal = sessionmaker(bind=engine, autoflush=False, autocommit=False)
+SessionLocal = sessionmaker(bind=engine, autoflush=False, autocommit=False)
src/database/candidates/init_db.py CHANGED
@@ -11,7 +11,7 @@ Usage:
11
 
12
  from src.database.candidates.client import engine
13
  from src.database.candidates.models import Base
14
- from sqlalchemy import inspect
15
 
16
  def init_db():
17
  """
@@ -19,24 +19,13 @@ def init_db():
19
  Intended for dev setup / Docker initialization.
20
  """
21
  try:
22
- print("🚀 Starting database initialization...")
23
  Base.metadata.create_all(bind=engine)
24
-
25
- # Verify tables
26
- inspector = inspect(engine)
27
- tables = inspector.get_table_names()
28
- print(f"📊 Found tables: {tables}")
29
-
30
- if "candidates" in tables:
31
- print("✅ Database initialized successfully.")
32
- return True
33
- else:
34
- print("❌ Error: 'candidates' table was not created!")
35
-
36
  except Exception as e:
37
  print(f"❌ Failed to initialize database: {e}")
38
  raise
39
 
40
 
41
  if __name__ == "__main__":
42
- init_db()
 
 
11
 
12
  from src.database.candidates.client import engine
13
  from src.database.candidates.models import Base
14
+
15
 
16
  def init_db():
17
  """
 
19
  Intended for dev setup / Docker initialization.
20
  """
21
  try:
 
22
  Base.metadata.create_all(bind=engine)
23
+ print("✅ Database initialized successfully.")
24
  except Exception as e:
25
  print(f"❌ Failed to initialize database: {e}")
26
  raise
27
 
28
 
29
  if __name__ == "__main__":
30
+ init_db()
31
+
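The table-verification step removed in this diff used `sqlalchemy.inspect`; that check can be sketched standalone (a stand-in `Base` and SQLite engine replace the project's imports):

```python
from sqlalchemy import create_engine, inspect, Column, Integer
from sqlalchemy.orm import declarative_base

# Stand-ins for the project's Base and engine so this runs anywhere
Base = declarative_base()

class Candidate(Base):
    __tablename__ = "candidates"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite:///:memory:")

# create_all is idempotent: it only creates tables that are missing
Base.metadata.create_all(bind=engine)

tables = inspect(engine).get_table_names()
print(tables)
```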
src/database/candidates/models.py CHANGED
@@ -22,11 +22,8 @@ Base = declarative_base()
22
 
23
 
24
  def generate_auth_code() -> str:
25
- """Generate a 6-digit random authentication code.
26
- """
27
- return "".join(
28
- secrets.choice(string.digits) for _ in range(6)
29
- )
30
 
31
  # --- TABLES ---
32
 
@@ -129,4 +126,4 @@ class FinalDecision(Base):
129
  human_notes = Column(Text)
130
  timestamp = Column(DateTime, default=datetime.utcnow)
131
 
132
- candidate = relationship("Candidate", back_populates="final_decision")
 
22
 
23
 
24
  def generate_auth_code() -> str:
25
+ """Generate a 6-digit random authentication code."""
26
+ return "".join(secrets.choice(string.digits) for _ in range(6))
 
 
 
27
 
28
  # --- TABLES ---
29
 
 
126
  human_notes = Column(Text)
127
  timestamp = Column(DateTime, default=datetime.utcnow)
128
 
129
+ candidate = relationship("Candidate", back_populates="final_decision")
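The `generate_auth_code` helper condensed earlier in this file is self-contained and can be exercised directly:

```python
import secrets
import string

def generate_auth_code() -> str:
    """Generate a 6-digit random authentication code."""
    # secrets (not random) gives cryptographically strong choices
    return "".join(secrets.choice(string.digits) for _ in range(6))

code = generate_auth_code()
print(code)
```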
src/database/context/__init__.py ADDED
@@ -0,0 +1,153 @@
1
+ from sqlalchemy import (
2
+ Column, String, Integer, Float, Enum, DateTime, Text, ForeignKey, JSON
3
+ )
4
+ from sqlalchemy.dialects.postgresql import UUID
5
+ from sqlalchemy.orm import declarative_base, relationship
6
+ from datetime import datetime
7
+ import enum
8
+ import uuid
9
+
10
+ Base = declarative_base()
11
+
12
+
13
+ # ==============================================================
14
+ # ENUM DEFINITIONS
15
+ # ==============================================================
16
+
17
+ class CandidateStatus(enum.Enum):
18
+ APPLIED = "applied"
19
+ CV_SCREENED = "cv_screened"
20
+ INVITED_VOICE = "invited_voice"
21
+ VOICE_DONE = "voice_done"
22
+ SCHEDULED_HR = "scheduled_hr"
23
+ DECISION_PENDING = "decision_pending"
24
+ REJECTED = "rejected"
25
+ HIRED = "hired"
26
+
27
+
28
+ class InterviewStatus(enum.Enum):
29
+ SCHEDULED = "scheduled"
30
+ COMPLETED = "completed"
31
+ CANCELLED = "cancelled"
32
+
33
+
34
+ class Decision(enum.Enum):
35
+ HIRE = "hire"
36
+ REJECT = "reject"
37
+ MAYBE = "maybe"
38
+
39
+
40
+ # ==============================================================
41
+ # MAIN TABLES
42
+ # ==============================================================
43
+
44
+ class Candidate(Base):
45
+ __tablename__ = "candidates"
46
+
47
+ id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
48
+ full_name = Column(String(255), nullable=False)
49
+ email = Column(String(255), nullable=False, unique=True)
50
+ phone_number = Column(String(50), nullable=True)
51
+ cv_file_path = Column(String(500), nullable=True)
52
+ parsed_cv_json = Column(JSON, nullable=True)
53
+ status = Column(Enum(CandidateStatus), default=CandidateStatus.APPLIED)
54
+ created_at = Column(DateTime, default=datetime.utcnow)
55
+ updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
56
+
57
+ # Relationships
58
+ cv_results = relationship("CVScreeningResult", back_populates="candidate", cascade="all, delete-orphan")
59
+ voice_results = relationship("VoiceScreeningResult", back_populates="candidate", cascade="all, delete-orphan")
60
+ interviews = relationship("InterviewScheduling", back_populates="candidate", cascade="all, delete-orphan")
61
+ decision = relationship("FinalDecision", back_populates="candidate", uselist=False, cascade="all, delete-orphan")
62
+
63
+
64
+ # ==============================================================
65
+ # CV SCREENING RESULTS
66
+ # ==============================================================
67
+
68
+ class CVScreeningResult(Base):
69
+ __tablename__ = "cv_screening_results"
70
+
71
+ id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
72
+ candidate_id = Column(UUID(as_uuid=True), ForeignKey("candidates.id"), nullable=False)
73
+ job_title = Column(String(255), nullable=True)
74
+
75
+ skills_match_score = Column(Float, nullable=True)
76
+ experience_match_score = Column(Float, nullable=True)
77
+ education_match_score = Column(Float, nullable=True)
78
+ overall_fit_score = Column(Float, nullable=True)
79
+
80
+ llm_feedback = Column(Text, nullable=True)
81
+ reasoning_trace = Column(JSON, nullable=True)
82
+
83
+ timestamp = Column(DateTime, default=datetime.utcnow)
84
+
85
+ candidate = relationship("Candidate", back_populates="cv_results")
86
+
87
+
88
+ # ==============================================================
89
+ # VOICE SCREENING RESULTS
90
+ # ==============================================================
91
+
92
+ class VoiceScreeningResult(Base):
93
+ __tablename__ = "voice_screening_results"
94
+
95
+ id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
96
+ candidate_id = Column(UUID(as_uuid=True), ForeignKey("candidates.id"), nullable=False)
97
+
98
+ call_sid = Column(String(255), nullable=True)
99
+ transcript_text = Column(Text, nullable=True)
100
+
101
+ sentiment_score = Column(Float, nullable=True)
102
+ confidence_score = Column(Float, nullable=True)
103
+ communication_score = Column(Float, nullable=True)
104
+
105
+ llm_summary = Column(Text, nullable=True)
106
+ llm_judgment_json = Column(JSON, nullable=True)
107
+ audio_url = Column(String(500), nullable=True)
108
+
109
+ timestamp = Column(DateTime, default=datetime.utcnow)
110
+
111
+ candidate = relationship("Candidate", back_populates="voice_results")
112
+
113
+
114
+ # ==============================================================
115
+ # INTERVIEW SCHEDULING
116
+ # ==============================================================
117
+
118
+ class InterviewScheduling(Base):
119
+ __tablename__ = "interview_scheduling"
120
+
121
+ id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
122
+ candidate_id = Column(UUID(as_uuid=True), ForeignKey("candidates.id"), nullable=False)
123
+
124
+ calendar_event_id = Column(String(255), nullable=True)
125
+ event_summary = Column(String(255), nullable=True)
126
+
127
+ start_time = Column(DateTime, nullable=True)
128
+ end_time = Column(DateTime, nullable=True)
129
+ status = Column(Enum(InterviewStatus), default=InterviewStatus.SCHEDULED)
130
+
131
+ timestamp = Column(DateTime, default=datetime.utcnow)
132
+
133
+ candidate = relationship("Candidate", back_populates="interviews")
134
+
135
+
136
+ # ==============================================================
137
+ # FINAL DECISION
138
+ # ==============================================================
139
+
140
+ class FinalDecision(Base):
141
+ __tablename__ = "final_decision"
142
+
143
+ id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
144
+ candidate_id = Column(UUID(as_uuid=True), ForeignKey("candidates.id"), nullable=False, unique=True)
145
+
146
+ overall_score = Column(Float, nullable=True)
147
+ decision = Column(Enum(Decision), default=Decision.MAYBE)
148
+ llm_rationale = Column(Text, nullable=True)
149
+ human_notes = Column(Text, nullable=True)
150
+
151
+ timestamp = Column(DateTime, default=datetime.utcnow)
152
+
153
+ candidate = relationship("Candidate", back_populates="decision")
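A hedged sketch of how models like these are typically exercised, trimmed to one table; `String` stands in for the Postgres `UUID` column type so the example runs on SQLite:

```python
import uuid
from datetime import datetime
from sqlalchemy import create_engine, Column, String, DateTime
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Candidate(Base):
    __tablename__ = "candidates"
    # String(36) stands in for the Postgres UUID type used in the real module
    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    full_name = Column(String(255), nullable=False)
    email = Column(String(255), nullable=False, unique=True)
    created_at = Column(DateTime, default=datetime.utcnow)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(bind=engine)
SessionLocal = sessionmaker(bind=engine)

with SessionLocal() as session:
    session.add(Candidate(full_name="Ada Lovelace", email="ada@example.com"))
    session.commit()
    found = session.query(Candidate).filter_by(email="ada@example.com").one()
    print(found.full_name)
```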
src/frontend/streamlit/voice_screening_ui/proxy.py CHANGED
@@ -22,8 +22,8 @@ from dotenv import load_dotenv
22
  from sqlalchemy import select
23
 
24
  # Import database client and models
25
- from src.backend.database.candidates.client import SessionLocal
26
- from src.backend.database.candidates.models import Candidate
27
 
28
  load_dotenv()
29
 
 
22
  from sqlalchemy import select
23
 
24
  # Import database client and models
25
+ from src.database.candidates.client import SessionLocal
26
+ from src.database.candidates.models import Candidate
27
 
28
  load_dotenv()
29
 
src/prompts/templates/cv_screener/v1.txt CHANGED
@@ -1,8 +1,10 @@
1
  You are an HR assistant evaluating how well a candidate's CV matches a given
2
  job description. Generate a concise assessment summary first to ground your
3
  reasoning. Then assign calibrated match scores between 0 and 1.
 
4
  The scores should be based on the following criteria:
5
  1. Skills Match Score: How well the candidate's skills match the job description.
6
  2. Experience Match Score: How well the candidate's experience matches the job description.
7
  3. Education Match Score: How well the candidate's education matches the job description.
8
  4. Overall Fit Score: How well the candidate's CV fits the job description.
 
 
1
  You are an HR assistant evaluating how well a candidate's CV matches a given
2
  job description. Generate a concise assessment summary first to ground your
3
  reasoning. Then assign calibrated match scores between 0 and 1.
4
+
5
  The scores should be based on the following criteria:
6
  1. Skills Match Score: How well the candidate's skills match the job description.
7
  2. Experience Match Score: How well the candidate's experience matches the job description.
8
  3. Education Match Score: How well the candidate's education matches the job description.
9
  4. Overall Fit Score: How well the candidate's CV fits the job description.
10
+
src/prompts/templates/db_executor/v1.txt CHANGED
@@ -1,46 +1,56 @@
1
- You are the **Database Executor Agent**, responsible for generating and executing **SQLAlchemy ORM-style** Python code on behalf of the HR Supervisor Agent.
2
- Your job: perform safe and deterministic **read/write/update operations** in the HR recruitment database, based on clear natural-language requests.
 
 
 
 
3
  ---
4
- # Rules
5
- 1. Use SQLAlchemy ORM not raw SQL.
6
- 2. **MANDATORY**: Use the pre-provided `session` object from the global context.
7
- - NEVER do `create_engine` or `sessionmaker`.
8
- - NEVER import `sqlite3` or `psycopg2` directly.
9
- - ✅ ALWAYS use `session.query(...)`.
10
- 3. Return clean Python dict or list results, no ORM objects.
11
  4. Commit only when needed (`session.commit()`).
12
  5. Never alter schema, connection, or delete/drop tables.
13
  6. Validate record existence before updating or inserting.
14
  7. Briefly explain what was done in plain English.
 
15
  ---
16
- # Database Overview (ORM Models)
17
- **Note**: All these models AND Enums (`CandidateStatus`, `InterviewStatus`, `DecisionStatus`) are already imported and available in the global context.
18
- **DO NOT** try to import them again. Use them directly (e.g. `session.query(Candidate)...` or `status=CandidateStatus.hired`).
19
- **DATABASE TYPE**: PostgreSQL (managed by the system). DO NOT assume SQLite.
 
20
  **Candidate**
21
  - id (UUID, PK)
22
  - full_name, email (unique), phone_number
23
- - cv_file_path, parsed_cv_file_path, created_at, updated_at, auth_code
24
- - status (Enum `CandidateStatus`: `applied`, `cv_screened`, `cv_passed`, `cv_rejected`, `voice_invitation_sent`, `voice_done`, `voice_passed`, `voice_rejected`, `interview_scheduled`, `interview_passed`, `interview_rejected`, `decision_made`, `hired`, `rejected`)
25
  - Relationships → `cv_screening_results`, `voice_screening_results`, `interview_scheduling`, `final_decision`
 
26
  **CVScreeningResult**
27
  - candidate_id → Candidate.id
28
- - job_title, skills_match_score, experience_match_score, education_match_score, overall_fit_score
29
  - llm_feedback, reasoning_trace (JSON), timestamp
 
30
  **VoiceScreeningResult**
31
  - candidate_id → Candidate.id
32
- - call_sid, transcript_text, sentiment_score, communication_score, confidence_score
33
  - llm_summary, llm_judgment_json, audio_url, timestamp
 
34
  **InterviewScheduling**
35
  - candidate_id → Candidate.id
36
- - calendar_event_id, event_summary, start_time, end_time
37
- - status (Enum `InterviewStatus`: `scheduled`, `completed`, `cancelled`, `passed`, `rejected`)
 
38
  **FinalDecision**
39
  - candidate_id → Candidate.id
40
- - overall_score, decision (Enum `DecisionStatus`: `hired`, `rejected`, `pending`)
41
  - llm_rationale, human_notes, timestamp
 
42
  ---
43
- Expected Execution Pattern
 
44
  When asked to perform a task, you must:
45
  1. Construct ORM-based Python code using session and the given models.
46
  2. Store final results in a variable named result.
@@ -50,10 +60,15 @@ import json
50
  print(json.dumps(result, indent=2, default=str))
51
  ```
52
  4. Optionally, include a short explanatory comment after the code.
53
- # Output Format
 
54
  1. **Execution:** Your Python code must `print()` the results so they are visible in the tool output.
55
- 2. **Final Response:** After the code runs, provide a **clear, natural language summary** of what you found or did. It should be clear enough that a random person would understand.
56
- # Error Handling
 
 
 
57
  If you encounter errors:
58
  1. **Self-Correction:** Attempt to fix the code and retry within the reasoning loop.
59
- 2. **Terminal Failure:** If you cannot resolve the issue, explain the problem clearly in plain English. Provide verbatim snippets of the error.
 
 
1
+ You are the **Database Executor Agent**, responsible for generating
2
+ and executing **SQLAlchemy ORM-style** Python code on behalf of the HR Supervisor Agent.
3
+
4
+ Your job: perform safe and deterministic **read/write/update operations**
5
+ in the HR recruitment database, based on clear natural-language requests.
6
+
7
  ---
8
+
9
+ ### Rules
10
+ 1. Use SQLAlchemy ORM, not raw SQL.
11
+ 2. Use `session` (provided) for all queries.
12
+ 3. Return clean Python dict or list results — no ORM objects.
 
 
13
  4. Commit only when needed (`session.commit()`).
14
  5. Never alter schema, connection, or delete/drop tables.
15
  6. Validate record existence before updating or inserting.
16
  7. Briefly explain what was done in plain English.
17
+
18
  ---
19
+
20
+ ### 🧩 Database Overview (ORM Models)
21
+ **Note**: All these models are already imported and available in the global context.
22
+ **DO NOT** try to import them again. Use them directly (e.g. `session.query(Candidate)...`).
23
+
24
  **Candidate**
25
  - id (UUID, PK)
26
  - full_name, email (unique), phone_number
27
+ - cv_file_path, parsed_cv_file_path
28
+ - status (Enum: `applied`, `cv_screened`, `cv_passed`, `cv_rejected`, `voice_passed`, `voice_rejected`, `interview_scheduled`, `decision_made`)
29
  - Relationships → `cv_screening_results`, `voice_screening_results`, `interview_scheduling`, `final_decision`
30
+
31
  **CVScreeningResult**
32
  - candidate_id → Candidate.id
33
+ - skills_match_score, experience_match_score, education_match_score, overall_fit_score
34
  - llm_feedback, reasoning_trace (JSON), timestamp
35
+
36
  **VoiceScreeningResult**
37
  - candidate_id → Candidate.id
38
+ - transcript_text, sentiment_score, communication_score, confidence_score
39
  - llm_summary, llm_judgment_json, audio_url, timestamp
40
+
41
  **InterviewScheduling**
42
  - candidate_id → Candidate.id
43
+ - calendar_event_id, start_time, end_time
44
+ - status (Enum: `scheduled`, `completed`, `cancelled`)
45
+
46
  **FinalDecision**
47
  - candidate_id → Candidate.id
48
+ - overall_score, decision (Enum: `hire`, `reject`, `maybe`)
49
  - llm_rationale, human_notes, timestamp
50
+
51
  ---
52
+
53
+ ### 🧾 Expected Execution Pattern
54
  When asked to perform a task, you must:
55
  1. Construct ORM-based Python code using session and the given models.
56
  2. Store final results in a variable named result.
 
60
  print(json.dumps(result, indent=2, default=str))
61
  ```
62
  4. Optionally, include a short explanatory comment after the code.
63
+
64
+ ### 🧾 Output Format
65
  1. **Execution:** Your Python code must `print()` the results so they are visible in the tool output.
66
+ 2. **Final Response:** After the code runs, provide a **clear, natural language summary** of what you found or did.
67
+ - *Example:* "I successfully updated the status for Sebastian Wefers to 'scheduled'."
68
+ - *Example:* "I retrieved 3 candidates: John, Jane, and Bob."
69
+
70
+ ### 🚨 Error Handling
71
  If you encounter errors:
72
  1. **Self-Correction:** Attempt to fix the code and retry within the reasoning loop.
73
+ 2. **Terminal Failure:** If you cannot resolve the issue, explain the problem clearly to the user in plain English.
74
+ - *Example:* "I tried to update the record, but I could not find a candidate with that email address."
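The execution pattern this prompt describes can be sketched as follows; the in-memory session and `Candidate` model here are stand-ins for the pre-provided globals the agent would actually receive:

```python
import json
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

# Stand-ins for the injected session and Candidate model
Base = declarative_base()

class Candidate(Base):
    __tablename__ = "candidates"
    id = Column(Integer, primary_key=True)
    full_name = Column(String)
    email = Column(String, unique=True)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Candidate(full_name="Jane Doe", email="jane@example.com"))
session.commit()

# The mandated pattern: query via ORM, store plain dicts in `result`, print JSON
rows = session.query(Candidate).all()
result = [{"full_name": c.full_name, "email": c.email} for c in rows]
print(json.dumps(result, indent=2, default=str))
```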
src/prompts/templates/gcalendar/v1.txt CHANGED
@@ -1,6 +1,10 @@
1
- You are a scheduling assistant authorized to use Google Calendar MCP tools. You can for instance list, create, and analyze events.
 
 
2
  IMPORTANT:
3
  - For any requests regarding "my calendar", "my availability", or general scheduling without specific attendees, assume the "primary" calendar.
4
  - You do NOT need to ask for a calendar ID for the user; the system defaults to their primary calendar.
5
  - Only ask for calendar IDs if the user asks about a specific third party whose email/ID is not known.
6
- Always confirm the action taken and if an error occurs report it back verbatim.
 
 
 
1
+ You are a scheduling assistant authorized to use Google Calendar MCP tools.
2
+ For instance, you can list, create, and analyze events.
3
+
4
  IMPORTANT:
5
  - For any requests regarding "my calendar", "my availability", or general scheduling without specific attendees, assume the "primary" calendar.
6
  - You do NOT need to ask for a calendar ID for the user; the system defaults to their primary calendar.
7
  - Only ask for calendar IDs if the user asks about a specific third party whose email/ID is not known.
8
+
9
+ Always confirm the action taken, and if an error occurs, report it back
10
+ for transparency and troubleshooting.
src/prompts/templates/supervisor/v1.txt CHANGED
@@ -1,64 +1,50 @@
1
- You are the **Supervisor Agent** overseeing the entire recruitment workflow. You act on behalf of the HR manager **Casey Jordan** (`hr.cjordan.agent.hack.winter25@gmail.com`), who is the only person talking to you.
 
 
 
2
  Understand the candidate lifecycle status flow:
3
- 1. `applied` (Application received)
4
- 2. `cv_screened` (CV Analyzed)
5
- 3. `cv_passed` or `cv_rejected` (Outcome of CV Screening)
6
- 4. `voice_invitation_sent` (If CV Passed)
7
- 5. `voice_done` (Candidate completed AI Voice Interview)
8
- 6. `voice_passed` or `voice_rejected` (Outcome of Voice Analysis)
9
- 7. `interview_scheduled` (Final Human Interview)
10
- 8. `decision_made` (Final Offer or Rejection)
11
- 9. `hired` (If the decision is positive and confirmed)
12
- 10. `rejected` (If the decision is negative)
13
  ---
14
- # Your Role
 
15
  You coordinate and supervise the hiring process from CV submission to final decision.
16
  You have access to specialized sub-agents that handle:
17
  - Database operations (querying, updating, reporting)
18
  - CV screening and evaluation
19
- - Voice screening and analysis
20
  - Email communication (for candidates and Casey)
21
  - Calendar scheduling (for HR meetings and interviews)
22
- You do **not** perform these actions yourself: instead, you **delegate** to sub-agents when needed.
 
23
  ---
24
- # Recruitment Process Overview
 
25
  1. **Application submitted** → Candidate starts with status `applied`.
26
  2. **CV screening** →
27
  - Run `cv_screening_workflow` (updates status to `cv_screened` automatically).
28
  - Ask `db_executor` to "evaluate screening results" (updates status to `cv_passed` or `cv_rejected`).
29
  Here you can optionally specify a minimum passing score (default is 7.0).
30
- 3. **Voice Screening Invitation** →
31
  - If `cv_rejected`, send a polite rejection email.
32
- - If `cv_passed`, fetch `auth_code` from DB and email the candidate the voice screening invitation including this code for login.
33
- - Update status to `voice_invitation_sent` via `db_executor`.
34
- 4. **Voice Screening**
35
- - Candidates complete the AI voice interview.
36
- - The system updates status to `voice_done` automatically.
37
- - Ask `voice_judge` to "evaluate voice screening results" (this automatically updates status to `voice_passed` or `voice_rejected`).
38
- 5. **Interview Invitation (Person-to-Person)** →
39
- - If `voice_rejected`, send a polite rejection email.
40
- - If `voice_passed`:
41
- - Use the calendar agent to check **HR availability** for this and next week (`primary` calendar).
42
- - Send a success email to the candidate suggesting these available time slots and asking for their preference.
43
- 6. **Scheduling** →
44
- - Once the candidate replies with a preferred time, use the calendar agent to schedule the interview.
45
- - Update status to `interview_scheduled`.
46
- 7. **Final Decision** →
47
- - Once interviews are complete, record the final decision in the database.
48
- - Update candidate status to `decision_made`.
49
- - Create or update the `FinalDecision` record with the decision (`hired`, `rejected`, or `pending`).
50
- - If `hired`, update candidate status to `hired`.
51
- - If `rejected`, update candidate status to `rejected`.
52
- - Communicate the result to the candidate via email.
53
  Always notify Casey what a candidate's status was updated to.
54
  ---
55
- # Reasoning & Planning Strategy
 
56
  Before calling tools, **THINK**:
57
  1. **Sequential Dependencies (Action A → Action B):** If Action B requires data (like an email address), perform Action A (fetch data) first.
58
  - **Example:** Before asking `gmail_agent` to send an email, you **must always** ask `db_executor` to retrieve the candidate's email address first.
59
- 2. **Robust DB Instructions:** ALWAYS ask the `db_executor` to "**create or update** the record" when changing status. NEVER just ask to "Update", as the record might not exist yet.
60
- # Your Behavior
 
 
61
  - Use the available sub-agents for all database queries, screenings, email sends, and calendar operations.
62
- - Respond clearly, professionally and comprehensively to the user's requests.
63
- - Always share with the user what actions you have taken and what results were produced.
64
- - If you or any sub-agent encounter an error, **notify the user immediately**.
 
1
+ You are the **Supervisor Agent** overseeing the entire recruitment workflow.
2
+ You act on behalf of the HR manager **Casey Jordan** (`hr.cjordan.agent.hack.winter25@gmail.com`),
3
+ who is the only person talking to you.
4
+
5
  Understand the candidate lifecycle status flow:
6
+ `applied` → `cv_screened` → `interview_scheduled` → `decision_made`.
7
+
 
 
 
 
 
 
 
 
8
  ---
9
+
10
+ ### 🎯 Your Role
11
  You coordinate and supervise the hiring process from CV submission to final decision.
12
  You have access to specialized sub-agents that handle:
13
  - Database operations (querying, updating, reporting)
14
  - CV screening and evaluation
 
15
  - Email communication (for candidates and Casey)
16
  - Calendar scheduling (for HR meetings and interviews)
17
+
18
+ You do **not** perform these actions yourself — instead, you **delegate** to sub-agents when needed.
19
  ---
20
+
21
+ ### ⚙️ Recruitment Process Overview
22
  1. **Application submitted** → Candidate starts with status `applied`.
23
  2. **CV screening** →
24
  - Run `cv_screening_workflow` (updates status to `cv_screened` automatically).
25
  - Ask `db_executor` to "evaluate screening results" (updates status to `cv_passed` or `cv_rejected`).
26
  Here you can optionally specify a minimum passing score (default is 7.0).
27
+ 3. **Notification** →
28
  - If `cv_rejected`, send a polite rejection email.
29
+ - If `cv_passed`, send an email requesting available time slots for a voice or in-person interview.
30
+ 4. **Scheduling** →
31
+ - Use the calendar agent to check **our (HR)** availability (`primary` calendar).
32
+ - You CANNOT check the candidate's calendar. You must **ask** the candidate for their preferred times via email.
33
+ - Once a time is agreed upon, use the calendar agent to schedule the interview.
34
+ 5. **Decision** → Once interviews are complete, record and communicate the final decision.
35
+
36
  Always notify Casey what a candidate's status was updated to.
37
  ---
38
+
39
+ ### 🧠 Reasoning & Planning Strategy
40
  Before calling tools, **THINK**:
41
  1. **Sequential Dependencies (Action A → Action B):** If Action B requires data (like an email address), perform Action A (fetch data) first.
42
  - **Example:** Before asking `gmail_agent` to send an email, you **must always** ask `db_executor` to retrieve the candidate's email address first.
43
+ 2. **Robust DB Instructions:** ALWAYS ask the `db_executor` to "**Create or update** the record" when changing status. NEVER just ask to "Update", as the record might not exist yet.
44
+
45
+
46
+ ### 🧠 Your Behavior
47
  - Use the available sub-agents for all database queries, screenings, email sends, and calendar operations.
48
+ - Respond clearly, professionally, and comprehensively to Casey’s requests.
49
+ - Always share with Casey what actions you have taken and what results were produced.
50
+ - If you or any sub-agent encounters an error, **notify Casey immediately** for troubleshooting.
src/state/candidate.py CHANGED
@@ -16,19 +16,9 @@ class CandidateStatus(str, enum.Enum):
16
  -> "voice_invitation_sent"
17
 
18
  4) Voice Screening
19
- -> "voice_done"
20
- -> "voice_passed"
21
- -> "voice_rejected"
22
-
23
- 5) Interview Scheduling
24
- -> "interview_scheduled"
25
- -> "interview_passed"
26
- -> "interview_rejected"
27
-
28
- 6) Final Decision
29
- -> "decision_made"
30
- -> "hired"
31
- -> "rejected"
32
  """
33
  applied = "applied"
34
  cv_screened = "cv_screened"
@@ -39,11 +29,7 @@ class CandidateStatus(str, enum.Enum):
39
  voice_passed = "voice_passed"
40
  voice_rejected = "voice_rejected"
41
  interview_scheduled = "interview_scheduled"
42
- interview_passed = "interview_passed"
43
- interview_rejected = "interview_rejected"
44
  decision_made = "decision_made"
45
- hired = "hired"
46
- rejected = "rejected"
47
 
48
 
49
  class InterviewStatus(str, enum.Enum):
 
16
  -> "voice_invitation_sent"
17
 
18
  4) Voice Screening
19
+ -> "voice_screened"
20
+ -> "voice_passed"
21
+ OR "voice_rejected"
 
 
 
 
 
 
 
 
 
 
22
  """
23
  applied = "applied"
24
  cv_screened = "cv_screened"
 
29
  voice_passed = "voice_passed"
30
  voice_rejected = "voice_rejected"
31
  interview_scheduled = "interview_scheduled"
 
 
32
  decision_made = "decision_made"
 
 
33
 
34
 
35
  class InterviewStatus(str, enum.Enum):
start.sh CHANGED
@@ -4,74 +4,6 @@ set -e
4
  # Hugging Face provides PORT; default to 7860 locally
5
  export PORT="${PORT:-7860}"
6
 
7
- # Locate PostgreSQL binaries (initdb/pg_ctl) even on versioned paths
8
- PG_BIN=$(dirname $(find /usr/lib/postgresql -name initdb | head -n 1 2>/dev/null))
9
- if [ -n "$PG_BIN" ]; then
10
- export PATH="$PG_BIN:$PATH"
11
- fi
12
-
13
- # Normalize DB host: if set to 'db' (compose style), force localhost for single-container run
14
- if [ "${POSTGRES_HOST}" = "db" ] || [ "${POSTGRES_HOST}" = "\"db\"" ] || [ -z "${POSTGRES_HOST}" ]; then
15
- export POSTGRES_HOST="127.0.0.1"
16
- fi
17
- export POSTGRES_PORT="${POSTGRES_PORT:-5432}"
18
- export POSTGRES_USER="${POSTGRES_USER:-agentic_user}"
19
- export POSTGRES_PASSWORD="${POSTGRES_PASSWORD:-password123}"
20
- export POSTGRES_DB="${POSTGRES_DB:-agentic_hr}"
21
-
22
- echo "[start.sh] PORT=${PORT}"
23
- echo "[start.sh] POSTGRES_HOST=${POSTGRES_HOST}"
24
- echo "[start.sh] POSTGRES_PORT=${POSTGRES_PORT}"
25
- echo "[start.sh] POSTGRES_USER=${POSTGRES_USER}"
26
- echo "[start.sh] POSTGRES_DB=${POSTGRES_DB}"
27
-
28
- # Start local Postgres if not already running
29
- export PGDATA=/var/lib/postgresql/data
30
- mkdir -p "$PGDATA"
31
- chown -R postgres:postgres "$PGDATA"
32
- mkdir -p /var/run/postgresql
33
- chown postgres:postgres /var/run/postgresql
34
-
35
- if [ ! -s "$PGDATA/PG_VERSION" ]; then
36
- echo "[start.sh] Initializing postgres data dir..."
37
- gosu postgres initdb -D "$PGDATA"
38
- fi
39
-
40
- echo "[start.sh] Starting postgres on port ${POSTGRES_PORT}..."
41
- if ! gosu postgres pg_ctl -D "$PGDATA" -o "-p ${POSTGRES_PORT} -k /var/run/postgresql" -w start >> /var/log/postgres.log 2>&1; then
42
- echo "[start.sh] Postgres failed to start. Last log lines:"
43
- tail -n 100 /var/log/postgres.log || true
44
- exit 1
45
- fi
46
- echo "[start.sh] Postgres started."
47
- echo "[start.sh] Postgres last log lines:"
48
- tail -n 50 /var/log/postgres.log || true
49
-
50
- # Ensure user/db exist
51
- gosu postgres psql -h 127.0.0.1 -p "${POSTGRES_PORT}" -v ON_ERROR_STOP=1 --command "DO \$\$
52
- BEGIN
53
- IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '${POSTGRES_USER}') THEN
54
- CREATE ROLE ${POSTGRES_USER} LOGIN PASSWORD '${POSTGRES_PASSWORD}';
55
- END IF;
56
- END
57
- \$\$;" || true
58
- gosu postgres psql -h 127.0.0.1 -p "${POSTGRES_PORT}" -v ON_ERROR_STOP=1 --command "CREATE DATABASE ${POSTGRES_DB} OWNER ${POSTGRES_USER}" || true
59
- echo "[start.sh] Postgres user/db ensured."
60
-
61
- # Create tables if they don't exist
62
- echo "[start.sh] Ensuring database tables exist..."
63
- python - <<'PY'
64
- import os
65
- from src.database.candidates.models import Base
66
- from src.database.candidates.client import engine
67
-
68
- try:
69
- Base.metadata.create_all(bind=engine)
70
- print("[db-init] Tables ensured.")
71
- except Exception as e:
72
- print(f"[db-init] Failed to create tables: {e}")
73
- PY
74
-
75
  # Defaults for local in-container routing; can be overridden via env
76
  export SUPERVISOR_API_URL="${SUPERVISOR_API_URL:-http://127.0.0.1:8080/api/v1/supervisor}"
77
  export DATABASE_API_URL="${DATABASE_API_URL:-http://127.0.0.1:8080/api/v1/db}"
@@ -84,4 +16,4 @@ uvicorn src.api.app:app --host 0.0.0.0 --port 8080 &
84
  sleep 2
85
 
86
  # Run Gradio frontend
87
- python src/frontend/gradio/app.py
 
4
  # Hugging Face provides PORT; default to 7860 locally
5
  export PORT="${PORT:-7860}"
6
 
7
  # Defaults for local in-container routing; can be overridden via env
8
  export SUPERVISOR_API_URL="${SUPERVISOR_API_URL:-http://127.0.0.1:8080/api/v1/supervisor}"
9
  export DATABASE_API_URL="${DATABASE_API_URL:-http://127.0.0.1:8080/api/v1/db}"
 
16
  sleep 2
17
 
18
  # Run Gradio frontend
19
+ python src/frontend/gradio/app.py
tests/create_dummy_candidate.py CHANGED
@@ -1,8 +1,8 @@
1
  import uuid
2
  from datetime import datetime
3
- from src.backend.database.candidates.client import SessionLocal
4
- from src.backend.database.candidates.models import Candidate, CVScreeningResult
5
- from src.backend.state.candidate import CandidateStatus
6
 
7
  def create_dummy_candidate():
8
  with SessionLocal() as db:
 
1
  import uuid
2
  from datetime import datetime
3
+ from src.database.candidates.client import SessionLocal
4
+ from src.database.candidates.models import Candidate, CVScreeningResult
5
+ from src.state.candidate import CandidateStatus
6
 
7
  def create_dummy_candidate():
8
  with SessionLocal() as db:
tests/verify_voice_integration.py CHANGED
@@ -10,9 +10,9 @@ load_dotenv()
10
  # Add src to path
11
  sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
12
 
13
- from src.backend.database.candidates.models import Candidate, CVScreeningResult, Base
14
- from src.backend.database.candidates.client import SessionLocal, engine
15
- from src.backend.agents.voice_screening.utils.questions import get_screening_questions
16
 
17
  def verify_integration():
18
  print("Verifying integration...")
 
10
  # Add src to path
11
  sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
12
 
13
+ from src.database.candidates.models import Candidate, CVScreeningResult, Base
14
+ from src.database.candidates.client import SessionLocal, engine
15
+ from src.agents.voice_screening.utils.questions import get_screening_questions
16
 
17
  def verify_integration():
18
  print("Verifying integration...")