backend

commit d7666f7b2c

44 changed files with 2246 additions and 0 deletions
6  backend/.env.example  Normal file
@@ -0,0 +1,6 @@
DATABASE_URL="postgresql://user:password@host:port/allonsy_db" # Replace 'host', 'port', 'user', 'password' with your remote DB details
SECRET_KEY="your_super_secret_key_for_jwt_and_hashing" # Generate a long, strong key
ALGORITHM="HS256"
GEMINI_API_KEY="your_gemini_api_key_here" # Google Gemini API key
MISTRAL_API_KEY="your_mistral_api_key_here" # Mistral AI API key
FILES_UPLOAD_PATH="./uploads" # Local path for storing uploaded files
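A strong `SECRET_KEY` can be produced with Python's standard library; this is one reasonable way (our suggestion, not something the commit prescribes — any sufficiently long random string works):

```python
# Generate a value suitable for SECRET_KEY in .env.
import secrets

key = secrets.token_urlsafe(64)  # ~86 URL-safe characters of randomness
print(key)
```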
71  backend/README.md  Normal file
@@ -0,0 +1,71 @@
# Backend - Allons-y API

This directory contains the source code of the RESTful API for the "Allons-y - AI Application Assistant" app. It is written in Python with the FastAPI framework.

## Technologies

* **Framework:** FastAPI
* **Database ORM:** SQLAlchemy
* **Database:** PostgreSQL (connection to a remote database)
* **Authentication:** JWT with `python-jose` and password hashing with `passlib[bcrypt]`
* **File parsing:** `pypdf` for PDFs, `python-docx` for DOCX files
* **AI APIs:** `google-generativeai` for Gemini, `mistralai` for Mistral
* **Environment variables:** `python-dotenv`

## Running the Backend (Local Development)

### Prerequisites

* Python 3.9+
* pip (Python package manager)
* Access to your remote PostgreSQL database.

### Steps

1. **Change into the backend directory:**
   ```bash
   cd backend
   ```
2. **Create and activate a virtual environment (recommended):**
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Linux/macOS
   # venv\Scripts\activate   # On Windows
   ```
3. **Install the dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
4. **Configure the environment variables:**
   * Copy `.env.example` to `.env`:
     ```bash
     cp .env.example .env
     ```
   * **Edit the `.env` file** and fill in the exact values for `DATABASE_URL` (with the `host`, `port`, `user`, and `password` of your remote database), `SECRET_KEY`, `GEMINI_API_KEY`, `MISTRAL_API_KEY`, and `FILES_UPLOAD_PATH`.
   * **Important:** `SECRET_KEY` must be a long, random string to keep your JWTs secure.
   * For `FILES_UPLOAD_PATH`, make sure the `./uploads` directory exists (or will be created) and is writable.
5. **Start the FastAPI application:**
   ```bash
   uvicorn main:app --reload --host 0.0.0.0 --port 8000
   ```
   The API will be reachable at `http://localhost:8000`. The Swagger UI documentation is available at `http://localhost:8000/docs`.

## Code Structure (upcoming)

* `main.py`: Entry point of the FastAPI application.
* `routers/`: API routes (authentication, files, AI).
* `models/`: SQLAlchemy database models.
* `schemas/`: Pydantic schemas for data validation.
* `crud/`: CRUD operations for the database.
* `utils/`: Utility functions (security, text extraction, etc.).
* `core/`: Configuration and dependencies.

## Main Endpoints (MVP)

* `POST /auth/register`: Register a new user.
* `POST /auth/login`: Log a user in.
* `GET /users/me`: Fetch the authenticated user's profile.
* `POST /files/upload_cv`: Upload a CV.
* `POST /ia/analyze_offer`: Analyze a job offer (scoring).

---
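Since the login route is wired to FastAPI's OAuth2 password flow elsewhere in this commit (`OAuth2PasswordBearer(tokenUrl="auth/login")`), `POST /auth/login` expects a form-encoded body with `username` and `password` fields, not JSON. A stdlib sketch of building such a request (URL and credentials are placeholders; the request is constructed but not sent):

```python
# Sketch: a form-encoded login request for POST /auth/login.
# Credentials and URL below are placeholders, not real values.
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({"username": "user@example.com", "password": "s3cret"}).encode()
req = Request(
    "http://localhost:8000/auth/login",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
print(req.get_method(), req.full_url)
```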
0  backend/__init__.py  Normal file
143  backend/alembic.ini  Normal file
@@ -0,0 +1,143 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts.
# this is typically a path given in POSIX (e.g. forward slashes)
# format, relative to the token %(here)s which refers to the location of this
# ini file
script_location = %(here)s/alembic

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory. for multiple paths, the path separator
# is defined by "path_separator" below.
prepend_sys_path = .


# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires python>=3.9 or the backports.zoneinfo library and the tzdata library.
# Any required deps can be installed by adding `alembic[tz]` to the pip requirements
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =

# max length of characters to apply to the "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to <script_location>/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "path_separator"
# below.
# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions

# path_separator; This indicates what character is used to split lists of file
# paths, including version_locations and prepend_sys_path within configparser
# files such as alembic.ini.
# The default rendered in new alembic.ini files is "os", which uses os.pathsep
# to provide os-dependent path splitting.
#
# Note that in order to support legacy alembic.ini files, this default does NOT
# take place if path_separator is not present in alembic.ini. If this
# option is omitted entirely, fallback logic is as follows:
#
# 1. Parsing of the version_locations option falls back to using the legacy
#    "version_path_separator" key, which if absent then falls back to the legacy
#    behavior of splitting on spaces and/or commas.
# 2. Parsing of the prepend_sys_path option falls back to the legacy
#    behavior of splitting on spaces, commas, or colons.
#
# Valid values for path_separator are:
#
# path_separator = :
# path_separator = ;
# path_separator = space
# path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
path_separator = os

# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

# Put your database connection string here,
# e.g. 'postgresql://user:password@host:port/dbname'.
# Note: the ${DATABASE_URL} placeholder below is not expanded by Alembic itself;
# env.py overrides sqlalchemy.url from the DATABASE_URL environment variable.
sqlalchemy.url = ${DATABASE_URL}


[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME

# Logging configuration. This is also consumed by the user-maintained
# env.py script only.
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARNING
handlers = console
qualname =

[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
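A detail worth noting about the `sqlalchemy.url = ${DATABASE_URL}` line: Alembic reads the ini file with configparser-style `%`-interpolation, so a `${...}` placeholder is left as a literal string; it is env.py that substitutes the real URL from the `DATABASE_URL` environment variable. A quick stdlib sketch of that parsing behavior:

```python
# configparser leaves ${...} untouched (only %(...)s is interpolated),
# which is why env.py must override sqlalchemy.url from the environment.
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[alembic]\nsqlalchemy.url = ${DATABASE_URL}\n")
print(cfg.get("alembic", "sqlalchemy.url"))  # the literal string ${DATABASE_URL}
```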
1  backend/alembic/README  Normal file
@@ -0,0 +1 @@
Generic single-database configuration.
90  backend/alembic/env.py  Normal file
@@ -0,0 +1,90 @@
import os
import sys
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

# This adds the 'backend' directory (where 'alembic.ini' and 'main.py' live)
# to the Python search path, so that project modules can be imported.
sys.path.append(os.path.abspath("."))

# Import the Base object from core.database
from core.database import Base

# Import every SQLAlchemy model here so that Alembic can detect them.
from models import user
from models import document
from models import ai_interaction  # needed so autogenerate sees the ai_interactions table

# this is the Alembic Config object, which provides
# access to values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired a number of ways.
# in this example, we want to override the sqlalchemy.url from the ini file
# if a DATABASE_URL environment variable is present.
# Note: config.get_main_option() reads from alembic.ini, which we updated.
url = os.environ.get("DATABASE_URL") or config.get_main_option("sqlalchemy.url")
if url:
    config.set_main_option("sqlalchemy.url", url)


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an actual DBAPI connection. By doing this,
    migrations can be run without a database present.
    Calls to context.execute() here emit the given
    string to the script output.
    """
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode.

    In this scenario we need to create a connection
    to the database before configuring Alembic.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
28  backend/alembic/script.py.mako  Normal file
@@ -0,0 +1,28 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
    """Upgrade schema."""
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    """Downgrade schema."""
    ${downgrades if downgrades else "pass"}
@@ -0,0 +1,32 @@
"""Initial database setup with users and documents tables

Revision ID: 1eb03e5de010
Revises:
Create Date: 2025-06-20 23:49:35.265344

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = '1eb03e5de010'
down_revision: Union[str, Sequence[str], None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###


def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###
0  backend/core/__init__.py  Normal file
35  backend/core/config.py  Normal file
@@ -0,0 +1,35 @@
import os
from pydantic_settings import BaseSettings, SettingsConfigDict
from typing import Optional

class Settings(BaseSettings):
    # Absolute path to the uploads directory.
    # Defaults to an 'uploads' folder inside the 'backend' directory.
    UPLOADS_DIR: str = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "uploads")
    # Secret key for the JWTs (generate a strong value in production).
    # Loaded from the environment; must match the key used in security.py.
    SECRET_KEY: str = os.getenv("SECRET_KEY")
    ALGORITHM: str = "HS256"
    ACCESS_TOKEN_EXPIRE_MINUTES: int = 30
    MISTRAL_API_KEY: Optional[str] = None
    GEMINI_API_KEY: Optional[str] = None
    LLM_PROVIDER: str = "gemini"  # Default provider

    # Default model names
    GEMINI_MODEL_NAME: Optional[str] = "gemini-1.5-flash"  # Or whichever Gemini model you use
    MISTRAL_MODEL_NAME: Optional[str] = "mistral-tiny"  # Or your default Mistral model

    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    # --- Variables for the France Travail API ---
    FRANCE_TRAVAIL_CLIENT_ID: str
    FRANCE_TRAVAIL_CLIENT_SECRET: str
    FRANCE_TRAVAIL_TOKEN_URL: str = "https://francetravail.io/connexion/oauth2/access_token?realm=%2Fpartenaire"
    FRANCE_TRAVAIL_API_BASE_URL: str = "https://api.francetravail.io/partenaire/offresdemploi"
    FRANCE_TRAVAIL_API_SCOPE: str = "o2dsoffre api_offresdemploiv2"  # Scopes required by the API

settings = Settings()
# Temporary debug output; avoid printing the secret itself.
print(f"DEBUG: FRANCE_TRAVAIL_CLIENT_ID loaded: {settings.FRANCE_TRAVAIL_CLIENT_ID}")
print(f"DEBUG: FRANCE_TRAVAIL_CLIENT_SECRET set: {bool(settings.FRANCE_TRAVAIL_CLIENT_SECRET)}")
# Create the uploads directory if it does not exist
os.makedirs(settings.UPLOADS_DIR, exist_ok=True)
34  backend/core/database.py  Normal file
@@ -0,0 +1,34 @@
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()

DATABASE_URL = os.getenv("DATABASE_URL")

if not DATABASE_URL:
    raise ValueError("DATABASE_URL is not set in the environment variables.")

# Database engine configuration.
# Note: connect_args={"check_same_thread": False} would only be needed for SQLite;
# it is omitted here since we target PostgreSQL.
engine = create_engine(DATABASE_URL)

# Database session factory
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# Declarative base for the SQLAlchemy models
Base = declarative_base()

# Utility that yields a database session (FastAPI dependency)
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
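FastAPI runs generator dependencies like `get_db` so that the code after `yield` executes once the request finishes, even when the handler raises; the session is therefore always closed. A stdlib-only sketch of that pattern (our `FakeSession` stub stands in for `SessionLocal()`):

```python
# Sketch of the yield-based dependency pattern used by get_db().
# FakeSession is a stand-in for a SQLAlchemy session, for demonstration only.
class FakeSession:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def get_db():
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()  # always runs, even if the request handler raised


# Simulate what FastAPI does: open the dependency, use it, then finalize it.
gen = get_db()
session = next(gen)
print("closed before cleanup:", session.closed)  # False
gen.close()  # triggers the finally block
print("closed after cleanup:", session.closed)   # True
```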
11  backend/core/hashing.py  Normal file
@@ -0,0 +1,11 @@
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def verify_password(plain_password: str, hashed_password: str) -> bool:
    """Check whether a plain password matches a hashed one."""
    return pwd_context.verify(plain_password, hashed_password)

def get_password_hash(password: str) -> str:
    """Hash a plain password."""
    return pwd_context.hash(password)
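passlib's bcrypt handles salting and the work factor internally; the underlying idea (a per-password random salt, a slow one-way derivation, a constant-time comparison) can be illustrated with stdlib PBKDF2. This is an illustration only, not the project's implementation, and the function names below are ours:

```python
# Illustration of salted one-way password hashing, sketched with stdlib PBKDF2.
# Not the project's API: hashing.py uses passlib's bcrypt instead.
import hashlib
import hmac
import secrets


def hash_password(password: str, iterations: int = 100_000) -> str:
    salt = secrets.token_hex(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt), iterations)
    return f"{salt}${iterations}${digest.hex()}"


def verify_password_pbkdf2(password: str, stored: str) -> bool:
    salt, iterations, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), int(iterations)
    )
    return hmac.compare_digest(candidate.hex(), digest_hex)  # constant-time compare


stored = hash_password("hunter2")
print(verify_password_pbkdf2("hunter2", stored))  # True
print(verify_password_pbkdf2("wrong", stored))    # False
```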
55  backend/core/security.py  Normal file
@@ -0,0 +1,55 @@
# backend/core/security.py
from datetime import datetime, timedelta, timezone
from typing import Optional

from jose import JWTError, jwt

# Imports for get_current_user
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from sqlalchemy.orm import Session
from schemas.token import TokenData
from crud import user as crud_user
from core.database import get_db

# Absolute import
from core.config import settings

# Hashing helpers, re-exported for convenience
from core.hashing import verify_password, get_password_hash

# OAuth2 scheme
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="auth/login")

def create_access_token(data: dict, expires_delta: Optional[timedelta] = None):
    to_encode = data.copy()
    if expires_delta:
        expire = datetime.now(timezone.utc) + expires_delta
    else:
        expire = datetime.now(timezone.utc) + timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
    to_encode.update({"exp": expire})
    encoded_jwt = jwt.encode(to_encode, settings.SECRET_KEY, algorithm=settings.ALGORITHM)
    return encoded_jwt

# get_current_user dependency
async def get_current_user(token: str = Depends(oauth2_scheme), db: Session = Depends(get_db)):
    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": "Bearer"},
    )
    try:
        payload = jwt.decode(token, settings.SECRET_KEY, algorithms=[settings.ALGORITHM])
        username: str = payload.get("sub")
        if username is None:
            raise credentials_exception
        token_data = TokenData(email=username)
    except JWTError:
        raise credentials_exception
    user = crud_user.get_user_by_email(db, email=token_data.email)
    if user is None:
        raise credentials_exception
    return user

# Module-level debug line
print(f"DEBUG_SECURITY: names defined in core.security.py: {dir()}")
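`create_access_token` delegates the actual signing to python-jose, but the HS256 mechanics it relies on are simple: base64url-encode the header and payload, then HMAC-SHA256 the pair with the secret. A stdlib sketch of those mechanics, for illustration only (production code should keep using a JWT library):

```python
# What HS256 JWT signing boils down to:
# base64url(header) + "." + base64url(payload), HMAC-SHA256-signed with the secret.
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_hs256(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"


token = sign_hs256({"sub": "user@example.com", "exp": 1750000000}, "demo-secret")
print(token.count("."))  # a compact JWT has exactly two dots: 2
```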
0  backend/crud/__init__py  Normal file
29  backend/crud/ai_interaction.py  Normal file
@@ -0,0 +1,29 @@
from sqlalchemy.orm import Session
from models import ai_interaction as models_ai_interaction
from schemas import ai_interaction as schemas_ai_interaction

def create_ai_interaction(db: Session, ai_interaction: schemas_ai_interaction.AiInteractionCreate):
    """Create a new AI interaction in the database."""
    db_ai_interaction = models_ai_interaction.AiInteraction(
        user_id=ai_interaction.user_id,
        document_id=ai_interaction.document_id,
        job_offer_text=ai_interaction.job_offer_text,
        cv_text_used=ai_interaction.cv_text_used,
        ai_request=ai_interaction.ai_request,
        ai_response=ai_interaction.ai_response,
        score=ai_interaction.score,
        analysis_results=ai_interaction.analysis_results,
        interaction_type=ai_interaction.interaction_type
    )
    db.add(db_ai_interaction)
    db.commit()
    db.refresh(db_ai_interaction)
    return db_ai_interaction

def get_ai_interactions_by_user(db: Session, user_id: int):
    """Fetch all AI interactions of a user."""
    return db.query(models_ai_interaction.AiInteraction).filter(models_ai_interaction.AiInteraction.user_id == user_id).all()

def get_ai_interaction_by_id(db: Session, interaction_id: int):
    """Fetch an AI interaction by its ID."""
    return db.query(models_ai_interaction.AiInteraction).filter(models_ai_interaction.AiInteraction.id == interaction_id).first()
38  backend/crud/document.py  Normal file
@@ -0,0 +1,38 @@
# backend/crud/document.py
from sqlalchemy.orm import Session
# Absolute imports
from models import document as models_document
from schemas import document as schemas_document
from typing import Optional

def create_document(db: Session, document: schemas_document.DocumentCreate, filepath: str, owner_id: int):
    db_document = models_document.Document(
        filename=document.filename,
        filepath=filepath,
        owner_id=owner_id
    )
    db.add(db_document)
    db.commit()
    db.refresh(db_document)
    return db_document

def get_documents_by_owner(db: Session, owner_id: int):
    return db.query(models_document.Document).filter(models_document.Document.owner_id == owner_id).all()

def get_document_by_id(db: Session, document_id: int, owner_id: int) -> Optional[models_document.Document]:
    """
    Fetch a document by its ID and its owner's ID.
    This guarantees that a user can only access their own documents.
    """
    return db.query(models_document.Document).filter(
        models_document.Document.id == document_id,
        models_document.Document.owner_id == owner_id
    ).first()

def delete_document(db: Session, document_id: int):
    db_document = db.query(models_document.Document).filter(models_document.Document.id == document_id).first()
    if db_document:
        db.delete(db_document)
        db.commit()
    return db_document
20  backend/crud/user.py  Normal file
@@ -0,0 +1,20 @@
from sqlalchemy.orm import Session
# Absolute imports
from models import user as models_user
from schemas import user as schemas_user
from core.hashing import get_password_hash

def get_user_by_email(db: Session, email: str):
    return db.query(models_user.User).filter(models_user.User.email == email).first()

def create_user(db: Session, user: schemas_user.UserCreate):
    hashed_password = get_password_hash(user.password)
    db_user = models_user.User(
        email=user.email,
        hashed_password=hashed_password,
        name=user.name
    )
    db.add(db_user)
    db.commit()
    db.refresh(db_user)
    return db_user
35  backend/dependencies.py  Normal file
@@ -0,0 +1,35 @@
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt
from sqlalchemy.orm import Session

from core.config import settings
from core.database import get_db
from crud import user as crud_user
from schemas import user as schemas_user  # For validating the user response model

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="auth/login")  # URL where the client can obtain a token

async def get_current_user(
    token: str = Depends(oauth2_scheme),
    db: Session = Depends(get_db)
):
    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": "Bearer"},
    )
    try:
        payload = jwt.decode(token, settings.SECRET_KEY, algorithms=[settings.ALGORITHM])
        username: str = payload.get("sub")
        if username is None:
            raise credentials_exception
    except JWTError:
        raise credentials_exception

    user = crud_user.get_user_by_email(db, email=username)
    if user is None:
        raise credentials_exception

    # Return the user as a Pydantic model for the response
    return schemas_user.UserResponse.model_validate(user)
55  backend/main.py  Normal file
@@ -0,0 +1,55 @@
# backend/main.py
import sys
import os

# Add this file's directory ('backend/') to the PYTHONPATH so that modules
# such as 'services', 'routers', etc. can be imported with absolute imports.
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))

# TEMPORARY debug line
print(f"DEBUG: sys.path = {sys.path}")

# Remaining imports
from fastapi import FastAPI
from contextlib import asynccontextmanager
from core.database import Base, engine
from models import user
from models import document
from models import ai_interaction
from routers import auth
from routers import document as document_router
from routers import ai as ai_router
from routers import france_travail_offers

# Called at application startup and shutdown
@asynccontextmanager
async def lifespan(app: FastAPI):
    print("Application starting. DB migrations are handled by Alembic.")
    yield
    print("Application shutting down.")

app = FastAPI(
    title="Allons-y API",
    description="API for the AI-powered job application assistant.",
    version="0.1.0",
    lifespan=lifespan,
    openapi_tags=[
        {"name": "Authentication", "description": "User authentication operations."},
        {"name": "Documents", "description": "Management of the user's CVs and other documents."},
        {"name": "Offers (France Travail)", "description": "Search and browse job offers via the France Travail API."},
        {"name": "AI Analysis", "description": "Endpoints for AI analysis of CVs and job offers."},
    ]
)

# Register the routers
app.include_router(auth.router)
app.include_router(document_router.router)
app.include_router(ai_router.router)
app.include_router(france_travail_offers.router, prefix="/france-travail/offers", tags=["Offers (France Travail)"])

@app.get("/")
async def read_root():
    return {"message": "Welcome to the Allons-y Alonzo API!"}
0  backend/models/__init__.py  Normal file
26  backend/models/ai_interaction.py  Normal file
@@ -0,0 +1,26 @@
from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey, Float
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from core.database import Base

class AiInteraction(Base):
    __tablename__ = "ai_interactions"

    id = Column(Integer, primary_key=True, index=True)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=True)  # User who performed the interaction (NULL for anonymous)
    document_id = Column(Integer, ForeignKey("documents.id"), nullable=True)  # Document used for the interaction, if any
    job_offer_text = Column(Text, nullable=False)  # Text of the analyzed job offer
    cv_text_used = Column(Text, nullable=True)  # CV text used for the analysis (kept for history)
    ai_request = Column(Text, nullable=False)  # Prompt sent to the AI
    ai_response = Column(Text, nullable=False)  # Raw AI response
    score = Column(Float, nullable=True)  # Relevance score computed by the AI or the backend
    analysis_results = Column(Text, nullable=True)  # Analysis details (e.g. strengths/weaknesses)
    interaction_type = Column(String, nullable=False, default="scoring")  # Interaction type (e.g. 'scoring', 'paragraph_gen')
    created_at = Column(DateTime, default=func.now())

    # Optional relationships
    user = relationship("User", back_populates="ai_interactions")
    document = relationship("Document")  # No back_populates: Document has no "ai_interactions" relationship

    def __repr__(self):
        return f"<AiInteraction(id={self.id}, user_id={self.user_id}, type='{self.interaction_type}')>"
19
backend/models/document.py
Normal file
@@ -0,0 +1,19 @@
from sqlalchemy import Column, Integer, String, DateTime, ForeignKey
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship
from core.database import Base

class Document(Base):
    __tablename__ = "documents"

    id = Column(Integer, primary_key=True, index=True)
    filename = Column(String, nullable=False)
    filepath = Column(String, unique=True, nullable=False)  # Unique storage path
    owner_id = Column(Integer, ForeignKey("users.id"))  # Foreign key to the owning user
    uploaded_at = Column(DateTime, default=func.now())

    # Relationship to the owning user
    owner = relationship("User", back_populates="documents")

    def __repr__(self):
        return f"<Document(filename='{self.filename}', owner_id={self.owner_id})>"
22
backend/models/user.py
Normal file
@@ -0,0 +1,22 @@
from sqlalchemy import Column, Integer, String, Boolean, DateTime
from sqlalchemy.sql import func
from sqlalchemy.orm import relationship  # <-- NEW IMPORT
from core.database import Base

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    email = Column(String, unique=True, index=True, nullable=False)
    hashed_password = Column(String, nullable=False)
    name = Column(String, nullable=True)
    is_active = Column(Boolean, default=True)
    created_at = Column(DateTime, default=func.now())
    updated_at = Column(DateTime, default=func.now(), onupdate=func.now())

    # Relationships to the user's documents
    documents = relationship("Document", back_populates="owner")  # <-- NEW LINE
    ai_interactions = relationship("AiInteraction", back_populates="user")  # <-- NEW LINE

    def __repr__(self):
        return f"<User(email='{self.email}', id={self.id})>"
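The `back_populates` pairing between `User.documents` and `Document.owner` keeps both sides of the relationship in sync. A minimal sketch, using an in-memory SQLite database and simplified stand-in models instead of the project's `core.database.Base`:

```python
# Sketch of the bidirectional relationship declared in the models above,
# against in-memory SQLite (User/Document here are simplified stand-ins).
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    documents = relationship("Document", back_populates="owner")

class Document(Base):
    __tablename__ = "documents"
    id = Column(Integer, primary_key=True)
    filename = Column(String, nullable=False)
    owner_id = Column(Integer, ForeignKey("users.id"))
    owner = relationship("User", back_populates="documents")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
user = User(email="jane@example.com")
# Appending to one side populates the other side automatically
user.documents.append(Document(filename="cv.pdf"))
session.add(user)
session.commit()
```

Appending to `user.documents` is enough: the document's `owner` and `owner_id` are filled in by the relationship machinery at flush time.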
30
backend/repositories/document_repository.py
Normal file
@@ -0,0 +1,30 @@
# backend/repositories/document_repository.py
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.future import select
from models.document import Document
from typing import Optional, List

class DocumentRepository:
    def __init__(self, db: AsyncSession):
        self.db = db

    async def get_document_by_id(self, document_id: int, owner_id: int) -> Optional[Document]:
        """
        Fetch a document by its ID and its owner's ID.
        This guarantees a user can only access their own documents.
        """
        result = await self.db.execute(
            select(Document).where(Document.id == document_id, Document.owner_id == owner_id)
        )
        return result.scalars().first()

    async def get_all_documents_by_owner_id(self, owner_id: int) -> List[Document]:
        """
        Fetch all documents for a given owner.
        """
        result = await self.db.execute(
            select(Document).where(Document.owner_id == owner_id)
        )
        return result.scalars().all()

    # Other methods such as create_document, delete_document, etc. could be added here.
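The repository filters on both the document ID and the owner ID in one `where()`, so a valid document ID belonging to another user simply returns `None`. A sketch of that ownership-scoped query, run synchronously against in-memory SQLite (the project uses `AsyncSession`, and this `Document` is a simplified stand-in):

```python
# Ownership-scoped select, as in DocumentRepository.get_document_by_id.
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Document(Base):
    __tablename__ = "documents"
    id = Column(Integer, primary_key=True)
    filename = Column(String, nullable=False)
    owner_id = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([
    Document(id=1, filename="cv_alice.pdf", owner_id=1),
    Document(id=2, filename="cv_bob.pdf", owner_id=2),
])
session.commit()

# Owner 1 asking for document 2 (owned by user 2) gets nothing back:
stmt = select(Document).where(Document.id == 2, Document.owner_id == 1)
leaked = session.execute(stmt).scalars().first()
assert leaked is None
```

Because the filter is part of the query itself, there is no window where the handler holds another user's row before an authorization check.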
12
backend/requirements.txt
Normal file
@@ -0,0 +1,12 @@
fastapi
uvicorn[standard]
sqlalchemy
psycopg2-binary
python-jose[cryptography]
passlib[bcrypt]
python-dotenv
aiofiles
pypdf
python-docx
google-generativeai
mistralai
0
backend/routers/__init__.py
Normal file
213
backend/routers/ai.py
Normal file
@@ -0,0 +1,213 @@
# backend/routers/ai.py (updated with text extraction)
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel, Field, model_validator
from services.ai_service import ai_service
from core.security import get_current_user
from models.user import User
from typing import Optional

# NEW IMPORT for the France Travail service
from services.france_travail_offer_service import france_travail_offer_service

# NEW IMPORTS for documents and the database
from crud import document as crud_document
from models.document import Document
from core.database import get_db
from sqlalchemy.orm import Session

# NEW IMPORTS for text extraction
import os
import pypdf  # For PDF files
import docx  # For DOCX files (pip install python-docx)
import logging

logger = logging.getLogger(__name__)

router = APIRouter()

# Request model for offer analysis
class AnalyzeRequest(BaseModel):
    cv_id: Optional[int] = Field(None, description="ID of an already stored user CV. If provided, cv_text is ignored.")
    cv_text: Optional[str] = Field(None, description="Raw CV text to analyze. Used when cv_id is not provided (e.g. for anonymous analysis).")

    job_offer_text: Optional[str] = Field(None, description="Full text of the job offer to analyze (if no offer_id).")
    france_travail_offer_id: Optional[str] = Field(None, description="ID of the France Travail offer to analyze (if no job_offer_text).")

    @model_validator(mode='after')
    def check_inputs_provided(self) -> 'AnalyzeRequest':
        if not (self.cv_id or self.cv_text):
            raise ValueError("Please provide either a 'cv_id' or a 'cv_text'.")

        if not (self.job_offer_text or self.france_travail_offer_id):
            raise ValueError("At least 'job_offer_text' or 'france_travail_offer_id' must be provided for the job offer.")
        return self
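The `model_validator(mode='after')` above rejects invalid input combinations at parse time, before the route body runs. A self-contained sketch of that cross-field validation, assuming pydantic v2 (the `AnalyzeRequest` here is a stand-in mirroring only the validator logic):

```python
# Cross-field validation sketch: either a stored CV id or raw CV text,
# and either raw offer text or a France Travail offer id, must be present.
from typing import Optional
from pydantic import BaseModel, model_validator

class AnalyzeRequest(BaseModel):
    cv_id: Optional[int] = None
    cv_text: Optional[str] = None
    job_offer_text: Optional[str] = None
    france_travail_offer_id: Optional[str] = None

    @model_validator(mode="after")
    def check_inputs_provided(self) -> "AnalyzeRequest":
        if not (self.cv_id or self.cv_text):
            raise ValueError("Provide either 'cv_id' or 'cv_text'.")
        if not (self.job_offer_text or self.france_travail_offer_id):
            raise ValueError("Provide 'job_offer_text' or 'france_travail_offer_id'.")
        return self

# A request with a CV but no job offer is rejected during parsing
# (pydantic wraps the ValueError in a ValidationError).
try:
    AnalyzeRequest(cv_text="some cv text")
    rejected = False
except ValueError:
    rejected = True
assert rejected
```

In FastAPI this surfaces as a 422 response, so the route handlers never see a request missing both CV inputs or both offer inputs.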
# Utility function to extract text from a file
def extract_text_from_file(filepath: str) -> str:
    file_extension = os.path.splitext(filepath)[1].lower()
    text_content = ""

    if not os.path.exists(filepath):
        raise FileNotFoundError(f"File does not exist: {filepath}")

    if file_extension == ".pdf":
        try:
            with open(filepath, 'rb') as f:
                reader = pypdf.PdfReader(f)
                for page in reader.pages:
                    text_content += page.extract_text() or ""
            if not text_content.strip():  # Check whether the extracted text is empty or whitespace only
                logger.warning(f"PDF file {filepath} was read but no meaningful text was extracted.")
        except Exception as e:
            logger.error(f"Error extracting text from PDF {filepath}: {e}")
            raise ValueError(f"Unable to extract text from the PDF file. Error: {e}")
    elif file_extension == ".docx":
        try:
            document = docx.Document(filepath)
            for paragraph in document.paragraphs:
                text_content += paragraph.text + "\n"
            if not text_content.strip():
                logger.warning(f"DOCX file {filepath} was read but no meaningful text was extracted.")
        except Exception as e:
            logger.error(f"Error extracting text from DOCX {filepath}: {e}")
            raise ValueError(f"Unable to extract text from the DOCX file. Error: {e}")
    else:  # Try to read as a plain-text file
        try:
            with open(filepath, 'r', encoding='utf-8') as f:
                text_content = f.read()
        except UnicodeDecodeError:
            # If UTF-8 fails, try Latin-1
            try:
                with open(filepath, 'r', encoding='latin-1') as f:
                    text_content = f.read()
            except Exception as e:
                logger.error(f"Error reading text file {filepath} with UTF-8 and Latin-1: {e}")
                raise ValueError(f"Unable to read the text file (encoding problem). Error: {e}")
        except Exception as e:
            logger.error(f"Unexpected error reading text file {filepath}: {e}")
            raise ValueError(f"Unable to read the text file. Error: {e}")

    return text_content
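The plain-text branch of `extract_text_from_file` retries with Latin-1 when UTF-8 decoding fails, which covers legacy French text files. A minimal sketch of just that fallback, using a temporary file:

```python
# Mirror of the encoding-fallback branch above: UTF-8 first, then Latin-1.
import os
import tempfile

def read_text_with_fallback(filepath: str) -> str:
    try:
        with open(filepath, "r", encoding="utf-8") as f:
            return f.read()
    except UnicodeDecodeError:
        with open(filepath, "r", encoding="latin-1") as f:
            return f.read()

# "é" encoded as Latin-1 (0xE9) is not valid UTF-8, so the fallback kicks in.
with tempfile.NamedTemporaryFile(mode="wb", suffix=".txt", delete=False) as tmp:
    tmp.write("résumé".encode("latin-1"))
    path = tmp.name

assert read_text_with_fallback(path) == "résumé"
os.unlink(path)
```

Latin-1 maps every byte to a character, so the second attempt cannot raise `UnicodeDecodeError`; at worst it mis-renders text in some other encoding, which is why the real function still wraps it in a `try`.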
@router.post("/analyze-job-offer-and-cv", summary="Analyze how well a CV matches a job offer", response_model=dict)
async def analyze_job_offer_and_cv_route(
    request: AnalyzeRequest,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """
    Analyze the relevance of a CV against a job offer using AI.
    Accepts either raw texts or document IDs as input.
    """
    cv_text_to_analyze: Optional[str] = request.cv_text

    if request.cv_id:
        cv_document: Optional[Document] = crud_document.get_document_by_id(db, request.cv_id, current_user.id)

        if not cv_document:
            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="CV not found or not accessible by this user.")

        try:
            # Use the new text-extraction function
            cv_text_to_analyze = extract_text_from_file(cv_document.filepath)
            if not cv_text_to_analyze.strip():  # Check after extraction whether the content is empty
                raise ValueError("The CV file is empty or text extraction failed.")
        except FileNotFoundError as e:
            logger.error(f"CV file not found: {e}")
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"CV file not found on the server: {e}")
        except ValueError as e:
            logger.error(f"Error extracting/reading CV {cv_document.filepath}: {e}")
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Error while reading or extracting the CV: {e}")
        except Exception as e:
            logger.error(f"Unexpected error while processing CV {cv_document.filepath}: {e}")
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Internal error while processing the CV: {e}")

    # The rest of the job-offer handling is unchanged
    job_offer_text_to_analyze: Optional[str] = request.job_offer_text
    if request.france_travail_offer_id:
        try:
            offer_details = await france_travail_offer_service.get_offer_details(request.france_travail_offer_id)
            job_offer_text_to_analyze = offer_details.description
            if not job_offer_text_to_analyze:
                raise ValueError("The France Travail offer description is empty.")
        except RuntimeError as e:
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error fetching the France Travail offer: {e}"
            )

    if not job_offer_text_to_analyze:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Unable to obtain the job offer text for analysis."
        )

    if not cv_text_to_analyze:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="The CV text could not be obtained.")

    try:
        analysis_result = await ai_service.analyze_job_offer_and_cv(
            job_offer_text=job_offer_text_to_analyze,
            cv_text=cv_text_to_analyze
        )
        return analysis_result
    except ValueError as e:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))


# The /score-offer-anonymous endpoint
@router.post("/score-offer-anonymous", summary="Analyze how well a CV matches a job offer (anonymous)", response_model=dict)
async def score_offer_anonymous(
    request: AnalyzeRequest,
    db: Session = Depends(get_db)
):
    """
    Analyze the relevance of a CV against a job offer without requiring authentication.
    Takes only raw job-offer text as input.
    """
    if not request.job_offer_text and not request.france_travail_offer_id:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="At least 'job_offer_text' or 'france_travail_offer_id' must be provided for the job offer."
        )

    if request.cv_id:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="'cv_id' is not allowed for anonymous analyses."
        )

    job_offer_text_to_analyze: Optional[str] = request.job_offer_text
    if request.france_travail_offer_id:
        try:
            offer_details = await france_travail_offer_service.get_offer_details(request.france_travail_offer_id)
            job_offer_text_to_analyze = offer_details.description
            if not job_offer_text_to_analyze:
                raise ValueError("The France Travail offer description is empty.")
        except RuntimeError as e:
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error fetching the France Travail offer: {e}"
            )

    if not job_offer_text_to_analyze:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Unable to obtain the job offer text for analysis."
        )

    if not request.cv_text:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="'cv_text' is required for anonymous analysis when the CV is not stored."
        )

    try:
        analysis_result = await ai_service.analyze_job_offer_and_cv(
            job_offer_text=job_offer_text_to_analyze,
            cv_text=request.cv_text
        )
        return analysis_result
    except ValueError as e:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
45
backend/routers/auth.py
Normal file
@@ -0,0 +1,45 @@
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session
from fastapi.security import OAuth2PasswordRequestForm
from datetime import timedelta

# ABSOLUTE imports
from core.database import get_db
from core.security import create_access_token
from crud import user as crud_user  # Was already correct for "crud"; kept for consistency
from schemas import user as schemas_user
from core.config import settings
from core.hashing import verify_password  # Single source for verify_password (previously also imported from core.security)

router = APIRouter(
    prefix="/auth",
    tags=["Authentication"],
    responses={404: {"description": "Not found"}},
)

ACCESS_TOKEN_EXPIRE_MINUTES = settings.ACCESS_TOKEN_EXPIRE_MINUTES

@router.post("/register", response_model=schemas_user.UserResponse, status_code=status.HTTP_201_CREATED)
def register_user(user: schemas_user.UserCreate, db: Session = Depends(get_db)):
    db_user = crud_user.get_user_by_email(db, email=user.email)
    if db_user:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Email already registered.")

    new_user = crud_user.create_user(db=db, user=user)
    return new_user

@router.post("/login", response_model=dict)
def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends(), db: Session = Depends(get_db)):
    user = crud_user.get_user_by_email(db, email=form_data.username)
    if not user or not verify_password(form_data.password, user.hashed_password):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid credentials",
            headers={"WWW-Authenticate": "Bearer"},
        )

    access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    access_token = create_access_token(
        data={"sub": user.email}, expires_delta=access_token_expires
    )
    return {"access_token": access_token, "token_type": "bearer"}
119
backend/routers/document.py
Normal file
@@ -0,0 +1,119 @@
from fastapi import APIRouter, Depends, HTTPException, status, UploadFile, File
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session
import os
import uuid  # To generate unique file names

from core.database import get_db
from core.security import create_access_token  # Not used directly here but potentially in other routers
from core.config import settings  # For access to the upload path
from crud import document as crud_document
from crud import user as crud_user  # To fetch the current user
from schemas import document as schemas_document
from schemas import user as schemas_user  # For the UserInDBBase or UserResponse model
from dependencies import get_current_user  # For route protection

router = APIRouter(
    prefix="/documents",
    tags=["Documents"],
    responses={404: {"description": "Not found"}},
)

@router.post("/upload-cv", response_model=schemas_document.DocumentResponse, status_code=status.HTTP_201_CREATED)
async def upload_cv(
    file: UploadFile = File(...),
    db: Session = Depends(get_db),
    current_user: schemas_user.UserResponse = Depends(get_current_user)
):
    """
    Lets an authenticated user upload a CV.
    The file is stored on the server and its metadata is saved to the database.
    """
    if not file.filename.lower().endswith(('.pdf', '.doc', '.docx')):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Only PDF, DOC and DOCX files are allowed."
        )

    # Create a unique file name to avoid collisions and security issues
    unique_filename = f"{uuid.uuid4()}_{file.filename}"
    file_path = os.path.join(settings.UPLOADS_DIR, unique_filename)

    # Make sure the uploads directory exists
    os.makedirs(settings.UPLOADS_DIR, exist_ok=True)

    try:
        with open(file_path, "wb") as buffer:
            # Write the file in chunks to handle large files
            while content := await file.read(1024 * 1024):  # Read in 1 MB blocks
                buffer.write(content)
    except Exception as e:
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail=f"Error saving the file: {e}"
        )
    finally:
        await file.close()

    # Save the document's metadata to the database
    document_data = schemas_document.DocumentCreate(filename=file.filename)
    db_document = crud_document.create_document(db, document_data, file_path, current_user.id)

    return db_document
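The upload loop above reads fixed-size blocks until EOF, so a large CV never has to fit in memory at once. The same walrus-operator pattern, sketched synchronously against in-memory buffers (the route awaits `UploadFile.read` instead):

```python
# Chunked copy, mirroring the 1 MB read loop in upload_cv.
import io

CHUNK = 1024 * 1024  # 1 MB, as in the route

def copy_in_chunks(src: io.BufferedIOBase, dst: io.BufferedIOBase) -> int:
    total = 0
    # read() returns b"" at EOF, which is falsy and ends the loop
    while chunk := src.read(CHUNK):
        dst.write(chunk)
        total += len(chunk)
    return total

src = io.BytesIO(b"x" * (2 * CHUNK + 123))
dst = io.BytesIO()
assert copy_in_chunks(src, dst) == 2 * CHUNK + 123
assert dst.getvalue() == b"x" * (2 * CHUNK + 123)
```

Peak memory stays near one chunk regardless of file size, which is the point of not calling `await file.read()` with no size argument.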
@router.get("/", response_model=list[schemas_document.DocumentResponse])
def get_user_documents(
    db: Session = Depends(get_db),
    current_user: schemas_user.UserResponse = Depends(get_current_user)
):
    """
    Fetch all documents uploaded by the authenticated user.
    """
    documents = crud_document.get_documents_by_owner(db, current_user.id)
    return documents

@router.get("/{document_id}", response_model=schemas_document.DocumentResponse)
def get_document_details(
    document_id: int,
    db: Session = Depends(get_db),
    current_user: schemas_user.UserResponse = Depends(get_current_user)
):
    """
    Fetch the details of a specific document belonging to the authenticated user.
    """
    document = crud_document.get_document_by_id(db, document_id)
    if not document:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Document not found.")
    if document.owner_id != current_user.id:
        raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="You do not have access to this document.")
    return document

@router.delete("/{document_id}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_document(
    document_id: int,
    db: Session = Depends(get_db),
    current_user: schemas_user.UserResponse = Depends(get_current_user)
):
    """
    Delete a specific document belonging to the authenticated user,
    both from the database and from the file system.
    """
    db_document = crud_document.get_document_by_id(db, document_id)
    if not db_document:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Document not found.")
    if db_document.owner_id != current_user.id:
        raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="You do not have permission to delete this document.")

    # Delete the file from the file system
    if os.path.exists(db_document.filepath):
        try:
            os.remove(db_document.filepath)
        except OSError as e:
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error deleting the file on the server: {e}"
            )

    # Delete the database record
    crud_document.delete_document(db, document_id)
    # A 204 No Content response must not carry a body, so return nothing here
    return
97
backend/routers/france_travail_offers.py
Normal file
@@ -0,0 +1,97 @@
# backend/routers/france_travail_offers.py
from typing import List, Optional
from fastapi import APIRouter, Depends, HTTPException, status, Query

from services.france_travail_offer_service import france_travail_offer_service
from core.security import get_current_user
from models.user import User
from schemas.france_travail import FranceTravailSearchResponse, OffreDetail, Offre

import logging

router = APIRouter()
logger = logging.getLogger(__name__)

@router.get("/search", response_model=FranceTravailSearchResponse)
async def search_france_travail_offers(
    motsCles: Optional[str] = Query(None, description="Search keywords (e.g. 'développeur full stack')"),
    commune_nom_ou_code: Optional[str] = Query(None, alias="commune", description="Name, postal code or INSEE code of the commune"),
    distance: Optional[int] = Query(10, description="Maximum distance in km around the commune"),
    page: int = Query(0, description="Result page number (starts at 0)"),
    limit: int = Query(15, description="Number of offers per page (max 100 for the France Travail API)"),  # 100 is a common per-request cap for the FT API
    contrat: Optional[str] = Query(None, description="Contract type (e.g. 'CDI', 'CDD', 'MIS')"),
    experience: Optional[str] = Query(None, description="Experience level (e.g. '1' for beginner, '2' for 1-3 years, '3' for >3 years)"),
    current_user: User = Depends(get_current_user)
):
    """
    Search job offers through the France Travail API.
    Converts a city name to an INSEE code when needed and handles pagination.
    Requires authentication.
    """
    if limit > 100:  # The France Travail 'range' cap is often 150 or 100 items per request.
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="The per-page result limit cannot exceed 100 for a single API request."
        )

    commune_param_for_api = None

    if commune_nom_ou_code:
        if commune_nom_ou_code.isdigit() and len(commune_nom_ou_code) == 5:
            commune_param_for_api = commune_nom_ou_code
            logger.info(f"Searching by postal code: {commune_nom_ou_code}")
        else:
            logger.info(f"Trying to resolve the INSEE code for city: {commune_nom_ou_code}")
            insee_code = await france_travail_offer_service.get_insee_code_for_commune(commune_nom_ou_code)
            if not insee_code:
                raise HTTPException(
                    status_code=status.HTTP_404_NOT_FOUND,
                    detail=f"INSEE code not found for city '{commune_nom_ou_code}'. Please check the spelling or use a postal code."
                )
            commune_param_for_api = insee_code
            logger.info(f"INSEE code '{insee_code}' found for '{commune_nom_ou_code}'.")

    if (commune_param_for_api is not None) and (distance is None):
        distance = 10

    # Compute the 'range' parameter for the France Travail API
    start_index = page * limit
    end_index = start_index + limit - 1
    api_range_param = f"{start_index}-{end_index}"
    logger.info(f"'range' parameter computed for the France Travail API: {api_range_param}")

    try:
        response = await france_travail_offer_service.search_offers(
            motsCles=motsCles,
            commune=commune_param_for_api,
            distance=distance,
            range=api_range_param,  # Pass the computed 'range'
            typeContrat=contrat,
            # experience=experience  # Check whether the France Travail API handles this parameter or whether it must be mapped
        )
        return response
    except RuntimeError as e:
        logger.error(f"Error searching France Travail offers: {e}")
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail=f"Unable to fetch offers from France Travail: {e}"
        )

@router.get("/{offer_id}", response_model=OffreDetail)
async def get_france_travail_offer_details(
    offer_id: str,
    current_user: User = Depends(get_current_user)
):
    """
    Fetch the details of a specific France Travail job offer by its ID.
    Requires authentication.
    """
    try:
        details = await france_travail_offer_service.get_offer_details(offer_id)
        return details
    except RuntimeError as e:
        logger.error(f"Error fetching details for offer {offer_id} from France Travail: {e}")
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail=f"Unable to fetch the offer details: {e}"
        )
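The pagination in `search_france_travail_offers` converts a 0-based `page` and a `limit` into the inclusive `"start-end"` range string the France Travail API expects. The arithmetic, isolated:

```python
# page/limit -> 'range' conversion, as computed in search_france_travail_offers.
def to_api_range(page: int, limit: int) -> str:
    start_index = page * limit
    end_index = start_index + limit - 1  # inclusive upper bound
    return f"{start_index}-{end_index}"

# Page 0 with 15 results per page covers items 0 through 14:
assert to_api_range(0, 15) == "0-14"
# Page 2 with 15 per page starts where page 1 ended:
assert to_api_range(2, 15) == "30-44"
```

The `- 1` matters because both ends of the range are inclusive; `"0-15"` would request 16 items.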
0
backend/schemas/__init__.py
Normal file
23
backend/schemas/ai_interaction.py
Normal file
@@ -0,0 +1,23 @@
from pydantic import BaseModel, Field
from datetime import datetime
from typing import Optional

class AiInteractionBase(BaseModel):
    job_offer_text: str
    cv_text_used: Optional[str] = None
    interaction_type: str = "scoring"  # Default value

class AiInteractionCreate(AiInteractionBase):
    ai_request: str
    ai_response: str
    score: Optional[float] = None
    analysis_results: Optional[str] = None
    user_id: Optional[int] = None
    document_id: Optional[int] = None

class AiInteractionResponse(AiInteractionCreate):
    id: int
    created_at: datetime

    class Config:
        from_attributes = True
23
backend/schemas/document.py
Normal file
@@ -0,0 +1,23 @@
from pydantic import BaseModel, Field
from datetime import datetime

class DocumentBase(BaseModel):
    filename: str

class DocumentCreate(DocumentBase):
    # No filepath here; it is generated by the backend
    pass

class DocumentResponse(DocumentBase):
    id: int
    filepath: str
    owner_id: int
    uploaded_at: datetime

    class Config:
        from_attributes = True

class DocumentDeleteResponse(BaseModel):
    detail: str
    filename: str
    id: int
110
backend/schemas/france_travail.py
Normal file
110
backend/schemas/france_travail.py
Normal file
|
@ -0,0 +1,110 @@
|
|||
# backend/schemas/france_travail.py
|
||||
from datetime import datetime
|
||||
from typing import List, Optional, Dict, Any, Union
|
||||
from pydantic import BaseModel, Field, field_validator, computed_field
|
||||
|
||||
# Modèles de données pour les structures communes (Lieu, Entreprise, etc.)
|
||||
class LieuTravail(BaseModel):
|
||||
libelle: Optional[str] = Field(None, example="Paris")
|
||||
codePostal: Optional[str] = Field(None, example="75001")
|
||||
commune: Optional[str] = Field(None, example="Paris")
|
||||
|
||||
class TypeContrat(BaseModel):
|
||||
code: Optional[str] = Field(None, example="CDI")
|
||||
libelle: Optional[str] = Field(None, example="Contrat à durée indéterminée")
|
||||
|
||||
class Appellation(BaseModel):
|
||||
code: Optional[str] = Field(None, example="10034")
|
||||
libelle: Optional[str] = Field(None, example="Développeur informatique")
|
||||
|
||||
class OrigineOffre(BaseModel):
|
||||
url: Optional[str] = Field(None, example="https://candidat.francetravail.fr/candidature/offre/1234567")
|
||||
typeOrigine: Optional[str] = Field(None, example="ONLINE")
|
||||
|
||||
class Entreprise(BaseModel):
|
||||
nom: Optional[str] = Field(None, example="Ma Super Entreprise")
|
||||
description: Optional[str] = None
|
||||
url: Optional[str] = None
|
||||
id: Optional[str] = None
|
||||
|
||||
class Salaire(BaseModel):
|
||||
libelle: Optional[str] = Field(None, example="2500 EUR brut/mois")
|
||||
    commentaire: Optional[str] = None
    typeForfait: Optional[str] = None
    periode: Optional[str] = None
    min: Optional[float] = None
    max: Optional[float] = None


class Competence(BaseModel):
    code: Optional[str] = None
    libelle: Optional[str] = None
    description: Optional[str] = None
    exigence: Optional[str] = None


class Experience(BaseModel):
    libelle: Optional[str] = Field(None, example="Débutant accepté")
    code: Optional[str] = None


class Formation(BaseModel):
    domaineLibelle: Optional[str] = None
    niveaulibelle: Optional[str] = None
    codeFormation: Optional[str] = None


class Permis(BaseModel):
    libelle: Optional[str] = None
    code: Optional[str] = None


# Model for a single job offer
class Offre(BaseModel):
    id: str = Field(..., example="1234567")
    intitule: str = Field(..., example="Développeur Full Stack")
    description: Optional[str] = None
    dateCreation: datetime
    dateActualisation: datetime
    lieuTravail: Optional[LieuTravail] = None
    typeContrat: Optional[Union[TypeContrat, str]] = None
    romeCode: Optional[str] = None
    romeLibelle: Optional[str] = None
    appellationLibelle: Optional[str] = None
    entreprise: Optional[Entreprise] = None
    origineOffre: Optional[OrigineOffre] = None
    nbPostes: Optional[int] = None
    nbResultats: Optional[int] = None

    @field_validator('typeContrat', mode='before')
    @classmethod
    def validate_type_contrat(cls, v: Any) -> Any:
        # The API sometimes returns the contract type as a bare code string;
        # wrap it in a TypeContrat so the field stays consistently typed.
        if isinstance(v, str):
            return TypeContrat(code=v, libelle=None)
        return v

    class Config:
        from_attributes = True

    # Computed property: canonical France Travail URL for the offer
    @computed_field
    def url_francetravail(self) -> str:
        """Builds the offer's URL on candidat.francetravail.fr."""
        return f"https://candidat.francetravail.fr/offres/recherche/detail/{self.id}"


# Model for the full details of an offer
class OffreDetail(Offre):
    # OffreDetail inherits from Offre, so it automatically gets url_francetravail
    description: str = Field(..., example="Description détaillée du poste...")
    complementExercice: Optional[str] = None
    urlDossierCandidature: Optional[str] = None  # Comes straight from the API when provided
    qualification: Optional[str] = None
    appellations: Optional[List[Appellation]] = None
    competences: Optional[List[Competence]] = None
    entreprise: Optional[Entreprise] = None
    formations: Optional[List[Formation]] = None
    langues: Optional[List[Dict[str, Any]]] = None
    permis: Optional[List[Permis]] = None

    class Config:
        from_attributes = True


class FranceTravailSearchResponse(BaseModel):
    resultats: List[Offre] = Field(default_factory=list)
    totalResults: Optional[int] = Field(None, description="Total number of offers matching the criteria")
    range: Optional[str] = Field(None, description="Range of the current results, e.g. '0-14/100'")
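The `range` field above carries the API's pagination string (e.g. `'0-14/100'`: first index, last index, total). As a standalone illustration — not part of this repository, with hypothetical names `ResultRange` and `parse_result_range` — a small stdlib parser for that shape could look like this:

```python
import re
from typing import NamedTuple

class ResultRange(NamedTuple):
    start: int   # index of the first result in the current page
    end: int     # index of the last result in the current page
    total: int   # total number of matching offers

def parse_result_range(value: str) -> ResultRange:
    """Parse a France Travail result-range string such as '0-14/100'."""
    m = re.fullmatch(r"(\d+)-(\d+)/(\d+)", value.strip())
    if m is None:
        raise ValueError(f"Unrecognized range format: {value!r}")
    start, end, total = (int(g) for g in m.groups())
    return ResultRange(start, end, total)
```

A caller could use the `total` to decide whether to request the next page (e.g. `"15-29"`).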
10
backend/schemas/token.py
Normal file
@@ -0,0 +1,10 @@
# backend/schemas/token.py
from pydantic import BaseModel
from typing import Optional


class Token(BaseModel):
    access_token: str
    token_type: str


class TokenData(BaseModel):
    email: Optional[str] = None
23
backend/schemas/user.py
Normal file
@@ -0,0 +1,23 @@
from pydantic import BaseModel, EmailStr
from typing import Optional
from datetime import datetime


class UserBase(BaseModel):
    email: EmailStr


class UserCreate(UserBase):
    password: str
    name: Optional[str] = None  # Per the PRD: first/last name are optional


class UserLogin(UserBase):
    password: str


class UserResponse(UserBase):
    id: int
    is_active: bool
    created_at: datetime
    updated_at: datetime
    name: Optional[str] = None

    class Config:
        from_attributes = True  # Replaces orm_mode = True in Pydantic v2+
0
backend/services/__init__.py
Normal file
184
backend/services/ai_service.py
Normal file
@@ -0,0 +1,184 @@
import json
import logging
import sys
import functools
from typing import Optional, Dict, Any

from google import genai
from google.genai import types
import mistralai
from mistralai.client import MistralClient

from fastapi import HTTPException, status
import anyio  # Runs synchronous SDK calls from async code without blocking the event loop

from core.config import settings

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# --- Debug logging: record where the mistralai package was loaded from ---
try:
    logger.info(f"Loaded mistralai package from: {mistralai.__file__}")
    logger.info(f"mistralai package version: {mistralai.__version__}")
    if hasattr(MistralClient, '__module__'):
        logger.info(f"MistralClient class module: {MistralClient.__module__}")
        client_module = sys.modules.get(MistralClient.__module__)
        if client_module and hasattr(client_module, '__file__'):
            logger.info(f"MistralClient class file: {client_module.__file__}")
except Exception as e:
    logger.error(f"Error during mistralai debug info collection: {e}")


class AIService:
    def __init__(self):
        self.provider = settings.LLM_PROVIDER
        self.model_name = settings.GEMINI_MODEL_NAME if self.provider == "gemini" else settings.MISTRAL_MODEL_NAME

        self.raw_safety_settings = [
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
        ]

        self.raw_generation_config = {
            "temperature": 0.7,
            "top_p": 1,
            "top_k": 1,
        }

        if self.provider == "gemini":
            try:
                self.client = genai.Client(api_key=settings.GEMINI_API_KEY)

                self.gemini_config = types.GenerateContentConfig(
                    temperature=self.raw_generation_config["temperature"],
                    top_p=self.raw_generation_config["top_p"],
                    top_k=self.raw_generation_config["top_k"],
                    safety_settings=[
                        types.SafetySetting(category=s["category"], threshold=s["threshold"])
                        for s in self.raw_safety_settings
                    ]
                )

            except Exception as e:
                logger.error(f"Erreur d'initialisation du client Gemini: {e}")
                raise ValueError(f"Impossible d'initialiser le client Gemini. Vérifiez votre GEMINI_API_KEY. Erreur: {e}")

        elif self.provider == "mistral":
            if not settings.MISTRAL_API_KEY:
                raise ValueError("MISTRAL_API_KEY n'est pas configurée dans les paramètres.")
            self.client = MistralClient(api_key=settings.MISTRAL_API_KEY)
        else:
            raise ValueError(f"Fournisseur LLM non supporté: {self.provider}")

        logger.info(f"AI Service initialized with Provider: {self.provider}, Model: {self.model_name}")

    async def analyze_job_offer_and_cv(self, job_offer_text: str, cv_text: str) -> Dict[str, Any]:
        prompt = f"""
En tant qu'assistant spécialisé dans la rédaction de CV et de lettres de motivation, votre tâche est d'analyser une offre d'emploi et un CV fournis, puis de :
1. Calculer un score de pertinence entre 0 et 100 indiquant à quel point le CV correspond à l'offre.
2. Identifier les 3 à 5 points forts du CV en relation avec l'offre.
3. Suggérer 3 à 5 améliorations clés pour le CV afin de mieux correspondre à l'offre.
4. Proposer une brève phrase d'accroche pour une lettre de motivation, personnalisée pour cette offre et ce CV.
5. Identifier 3 à 5 mots-clés ou phrases importants de l'offre d'emploi que l'on devrait retrouver dans le CV.

L'offre d'emploi est la suivante :
---
{job_offer_text}
---

Le CV est le suivant :
---
{cv_text}
---

Veuillez retourner votre analyse au format JSON, en respectant la structure suivante :
{{
    "score_pertinence": int,
    "points_forts": ["string", "string", ...],
    "ameliorations_cv": ["string", "string", ...],
    "phrase_accroche_lm": "string",
    "mots_cles_offre": ["string", "string", ...]
}}
"""

        response_content = ""
        if self.provider == "gemini":
            try:
                contents = [
                    {"role": "user", "parts": [{"text": prompt}]}
                ]

                # generate_content is synchronous: run it in a worker thread.
                # anyio.to_thread.run_sync only forwards positional arguments,
                # so keyword arguments must be bound with functools.partial.
                response = await anyio.to_thread.run_sync(
                    functools.partial(
                        self.client.models.generate_content,
                        model=self.model_name,
                        contents=contents,
                        config=self.gemini_config,
                    )
                )
                response_content = response.text

                # Strip surrounding Markdown code fences, if any
                if response_content.startswith("```json") and response_content.endswith("```"):
                    response_content = response_content[len("```json"): -len("```")].strip()
                elif response_content.startswith("```") and response_content.endswith("```"):
                    response_content = response_content[len("```"): -len("```")].strip()

            except Exception as e:
                logger.error(f"Erreur lors de l'appel à Gemini: {e}")
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Erreur lors de l'appel à l'API Gemini: {e}"
                )
        elif self.provider == "mistral":
            if not settings.MISTRAL_API_KEY:
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail="La clé API Mistral n'est pas configurée."
                )
            try:
                # MistralClient.chat is synchronous as well (the client has no
                # chat_async method), so offload it to a worker thread too.
                response = await anyio.to_thread.run_sync(
                    functools.partial(
                        self.client.chat,
                        model=self.model_name,
                        messages=[{"role": "user", "content": prompt}],
                        temperature=0.7,
                        max_tokens=1000,
                    )
                )
                response_content = response.choices[0].message.content

            except Exception as e:
                logger.error(f"Erreur lors de l'appel à Mistral: {e}")
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Erreur lors de l'appel à l'API Mistral: {e}"
                )
        else:
            raise ValueError(f"Fournisseur LLM non supporté: {self.provider}")

        logger.info(f"Réponse brute de l'IA (après nettoyage si nécessaire) ({self.provider}): {response_content}")

        try:
            parsed_response = json.loads(response_content)
            return parsed_response
        except json.JSONDecodeError as e:
            logger.error(f"Erreur de décodage JSON de la réponse IA ({self.provider}): {e}")
            logger.error(f"Contenu non-JSON reçu (après nettoyage): {response_content}")
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail="La réponse de l'IA n'était pas au format JSON attendu."
            )


# Single shared instance of the AI service
ai_service = AIService()
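For reference, the fence-stripping and JSON-parsing steps in `analyze_job_offer_and_cv` can be factored into a standalone helper. This is a simplified sketch, not code from the repository (`strip_code_fences` is a hypothetical name), and it adds a leading `strip()` that the inline version omits:

```python
import json

def strip_code_fences(text: str) -> str:
    """Remove a surrounding ```json ... ``` or ``` ... ``` block, if present."""
    text = text.strip()
    if text.startswith("```json") and text.endswith("```"):
        return text[len("```json"):-len("```")].strip()
    if text.startswith("```") and text.endswith("```"):
        return text[len("```"):-len("```")].strip()
    return text

# LLMs often wrap JSON answers in Markdown fences; clean before json.loads
raw = '```json\n{"score_pertinence": 85, "points_forts": ["Python"]}\n```'
parsed = json.loads(strip_code_fences(raw))
```

Keeping the cleanup in one function makes it easy to unit-test against the fenced and unfenced shapes each provider may return.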
68
backend/services/france_travail_auth_service.py
Normal file
@@ -0,0 +1,68 @@
# backend/services/france_travail_auth_service.py
import time
import httpx
import logging
from core.config import settings

logger = logging.getLogger(__name__)


class FranceTravailAuthService:
    _instance = None
    _token_cache = {}  # Cache holding the current token

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(FranceTravailAuthService, cls).__new__(cls)
        return cls._instance

    async def get_access_token(self):
        # Return the cached token while it is still valid
        # (time.time() rather than the private, nonexistent httpx._compat.current_time)
        if self._token_cache and self._token_cache.get("expires_at", 0) > time.time():
            logger.info("Utilisation du token France Travail depuis le cache.")
            return self._token_cache["access_token"]

        logger.info("Obtention d'un nouveau token France Travail...")
        token_url = settings.FRANCE_TRAVAIL_TOKEN_URL
        client_id = settings.FRANCE_TRAVAIL_CLIENT_ID
        client_secret = settings.FRANCE_TRAVAIL_CLIENT_SECRET
        scope = "o2dsoffre api_offresdemploiv2"  # These scopes must be enabled for your application

        data = {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope
        }

        headers = {
            "Content-Type": "application/x-www-form-urlencoded"  # Required by the token endpoint
        }

        try:
            async with httpx.AsyncClient() as client:
                response = await client.post(token_url, data=data, headers=headers)
                response.raise_for_status()  # Raises on HTTP error status codes

                token_data = response.json()
                access_token = token_data.get("access_token")
                expires_in = token_data.get("expires_in")  # Validity period in seconds

                if not access_token:
                    raise ValueError("Le token d'accès n'a pas été trouvé dans la réponse de France Travail.")

                # Update the cache, keeping a 60-second safety margin
                self._token_cache = {
                    "access_token": access_token,
                    "expires_at": time.time() + expires_in - 60
                }
                logger.info("Nouveau token France Travail obtenu et mis en cache.")
                return access_token

        except httpx.HTTPStatusError as e:
            logger.error(f"Erreur HTTP lors de l'obtention du token France Travail: {e.response.status_code} - {e.response.text}")
            # Re-raise as RuntimeError so the calling service can handle it
            raise RuntimeError(f"Erreur d'authentification France Travail: {e.response.text}")
        except Exception as e:
            logger.error(f"Erreur inattendue lors de l'obtention du token France Travail: {e}")
            raise RuntimeError(f"Erreur inattendue lors de l'obtention du token France Travail: {e}")


france_travail_auth_service = FranceTravailAuthService()
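The cache-then-refresh policy above can be sketched in isolation. Here is a minimal stand-in (a hypothetical `TokenCache` class, not part of the repository) that keeps the same 60-second safety margin:

```python
import time

class TokenCache:
    """Minimal in-memory token cache with an expiry safety margin."""

    def __init__(self, margin_seconds: int = 60):
        self._cache = {}
        self._margin = margin_seconds

    def get(self):
        # Return the token only while it is still considered valid
        if self._cache.get("expires_at", 0) > time.time():
            return self._cache["access_token"]
        return None

    def put(self, access_token: str, expires_in: int):
        # Record the expiry, shortened by the safety margin
        self._cache = {
            "access_token": access_token,
            "expires_at": time.time() + expires_in - self._margin,
        }

cache = TokenCache()
assert cache.get() is None           # empty cache: a fresh token must be requested
cache.put("abc123", expires_in=1500)
assert cache.get() == "abc123"       # still inside the validity window
cache.put("late", expires_in=30)     # 30 s minus the 60 s margin: treated as expired
assert cache.get() is None
```

The margin ensures a token is never handed to a request that would outlive it by a few seconds of network latency.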
197
backend/services/france_travail_offer_service.py
Normal file
@@ -0,0 +1,197 @@
# backend/services/france_travail_offer_service.py
import httpx
import logging
from datetime import datetime, timedelta
from typing import List, Optional, Dict, Any, Union
from core.config import settings
from schemas.france_travail import FranceTravailSearchResponse, OffreDetail, Offre, TypeContrat

logger = logging.getLogger(__name__)


class FranceTravailOfferService:
    def __init__(self):
        self.client_id = settings.FRANCE_TRAVAIL_CLIENT_ID
        self.client_secret = settings.FRANCE_TRAVAIL_CLIENT_SECRET
        self.token_url = settings.FRANCE_TRAVAIL_TOKEN_URL
        self.api_base_url = settings.FRANCE_TRAVAIL_API_BASE_URL
        self.api_scope = settings.FRANCE_TRAVAIL_API_SCOPE
        self.access_token = None
        self.token_expires_at = None

    async def _get_access_token(self):
        if self.access_token and self.token_expires_at and datetime.now() < self.token_expires_at:
            logger.info("Réutilisation du token France Travail existant.")
            return self.access_token

        logger.info("Obtention d'un nouveau token d'accès France Travail...")
        headers = {
            "Content-Type": "application/x-www-form-urlencoded"
        }
        data = {
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "scope": self.api_scope
        }

        async with httpx.AsyncClient() as client:
            try:
                response = await client.post(self.token_url, headers=headers, data=data)
                response.raise_for_status()
                token_data = response.json()
                self.access_token = token_data["access_token"]
                expires_in = token_data.get("expires_in", 1500)
                # Refresh one minute before the token actually expires
                self.token_expires_at = datetime.now() + timedelta(seconds=expires_in - 60)

                logger.info("Token France Travail obtenu avec succès.")
                return self.access_token
            except httpx.HTTPStatusError as e:
                logger.error(f"Erreur HTTP lors de l'obtention du token France Travail: {e.response.status_code} - {e.response.text}")
                raise RuntimeError(f"Échec de l'obtention du token France Travail: {e.response.text}")
            except Exception as e:
                logger.error(f"Erreur inattendue lors de l'obtention du token France Travail: {e}")
                raise RuntimeError(f"Échec inattendu lors de l'obtention du token France Travail: {e}")

    async def get_insee_code_for_commune(self, commune_name: str) -> Optional[str]:
        """
        Resolve a commune name to its INSEE code.

        Looks for an exact label match, with a special case for Paris.
        """
        token = await self._get_access_token()
        headers = {
            "Accept": "application/json",
            "Authorization": f"Bearer {token}"
        }
        params = {
            "q": commune_name
        }

        async with httpx.AsyncClient() as client:
            try:
                response = await client.get(
                    f"{self.api_base_url}/v2/referentiel/communes",
                    headers=headers,
                    params=params
                )
                response.raise_for_status()
                communes_data = response.json()

                found_code = None
                normalized_input_name = commune_name.upper().strip()

                if communes_data and isinstance(communes_data, list):
                    for commune_info in communes_data:
                        if commune_info and "code" in commune_info and "libelle" in commune_info:
                            normalized_libelle = commune_info["libelle"].upper().strip()

                            # Priority 1: "PARIS" with its known global INSEE code
                            if normalized_input_name == "PARIS" and commune_info["code"] == "75056":
                                found_code = commune_info["code"]
                                break
                            # Priority 2: exact label match
                            elif normalized_libelle == normalized_input_name:
                                found_code = commune_info["code"]
                                break
                            # Priority 3: the API may return arrondissement entries such as
                            # "Paris 01" (codes 75101-75120). France Travail generally expects
                            # the global code 75056 for Paris, so prefer it and keep an
                            # arrondissement code only as a fallback.
                            elif normalized_input_name == "PARIS" and commune_info["code"] in ["75056", "75101", "75102", "75103", "75104", "75105", "75106", "75107", "75108", "75109", "75110", "75111", "75112", "75113", "75114", "75115", "75116", "75117", "75118", "75119", "75120"]:
                                if commune_info["code"] == "75056":
                                    found_code = commune_info["code"]
                                    break
                                elif found_code is None:
                                    found_code = commune_info["code"]

                if found_code:
                    logger.info(f"Code INSEE pour '{commune_name}' trouvé : {found_code}")
                    return found_code

                logger.warning(f"Aucun code INSEE exact trouvé pour la commune '{commune_name}' parmi les résultats de l'API. Vérifiez l'orthographe.")
                return None
            except httpx.HTTPStatusError as e:
                logger.error(f"Erreur HTTP lors de la récupération du code INSEE pour '{commune_name}': {e.response.status_code} - {e.response.text}")
                return None
            except Exception as e:
                logger.error(f"Erreur inattendue lors de la récupération du code INSEE pour '{commune_name}': {e}")
                return None

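The three-level matching priority in `get_insee_code_for_commune` can be condensed into a standalone sketch. `pick_insee_code` is a hypothetical helper, not repository code; it assumes the same `{'code': ..., 'libelle': ...}` dicts the commune referential endpoint returns:

```python
def pick_insee_code(commune_name, communes):
    """Pick the best INSEE code among referential results (simplified sketch)."""
    name = commune_name.upper().strip()
    # Global Paris code plus the twenty arrondissement codes 75101-75120
    paris_codes = {"75056"} | {f"751{i:02d}" for i in range(1, 21)}
    fallback = None
    for info in communes:
        libelle = info["libelle"].upper().strip()
        if name == "PARIS" and info["code"] == "75056":
            return info["code"]        # priority 1: global Paris code
        if libelle == name:
            return info["code"]        # priority 2: exact label match
        if name == "PARIS" and info["code"] in paris_codes and fallback is None:
            fallback = info["code"]    # priority 3: keep an arrondissement as fallback
    return fallback

communes = [
    {"code": "75101", "libelle": "Paris 01"},
    {"code": "75056", "libelle": "Paris"},
]
assert pick_insee_code("Paris", communes) == "75056"
```

Extracting the selection logic this way lets it be tested without a token or any HTTP traffic.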
    async def search_offers(self,
                            motsCles: Optional[str] = None,
                            typeContrat: Optional[str] = None,
                            codePostal: Optional[str] = None,
                            commune: Optional[str] = None,
                            distance: Optional[int] = None,
                            alternance: Optional[bool] = None,
                            offresManagerees: Optional[bool] = None,
                            range: str = "0-14") -> FranceTravailSearchResponse:
        token = await self._get_access_token()
        headers = {
            "Accept": "application/json",
            "Authorization": f"Bearer {token}"
        }

        params = {
            "range": range,
        }

        if motsCles:
            params["motsCles"] = motsCles
        if typeContrat:
            params["typeContrat"] = typeContrat
        if alternance is not None:
            params["alternance"] = str(alternance).lower()
        if offresManagerees is not None:
            params["offresManagerees"] = str(offresManagerees).lower()

        # codePostal takes precedence over commune; both default to a 10 km radius
        if codePostal:
            params["codePostal"] = codePostal
            params["distance"] = distance if distance is not None else 10
        elif commune:
            params["commune"] = commune
            params["distance"] = distance if distance is not None else 10

        logger.info(f"Paramètres de recherche France Travail: {params}")

        async with httpx.AsyncClient() as client:
            try:
                response = await client.get(f"{self.api_base_url}/v2/offres/search", headers=headers, params=params)
                response.raise_for_status()
                return FranceTravailSearchResponse(**response.json())
            except httpx.HTTPStatusError as e:
                logger.error(f"Erreur HTTP lors de la recherche d'offres France Travail: {e.response.status_code} - {e.response.text}")
                raise RuntimeError(f"Échec de la recherche d'offres France Travail: {e.response.text}")
            except Exception as e:
                logger.error(f"Erreur inattendue lors de la recherche d'offres France Travail: {e}")
                raise RuntimeError(f"Échec inattendu lors de la recherche d'offres France Travail: {e}")

    async def get_offer_details(self, offer_id: str) -> OffreDetail:
        token = await self._get_access_token()
        headers = {
            "Accept": "application/json",
            "Authorization": f"Bearer {token}"
        }

        async with httpx.AsyncClient() as client:
            try:
                response = await client.get(f"{self.api_base_url}/v2/offres/{offer_id}", headers=headers)
                response.raise_for_status()
                return OffreDetail(**response.json())
            except httpx.HTTPStatusError as e:
                logger.error(f"Erreur HTTP lors de la récupération des détails de l'offre {offer_id}: {e.response.status_code} - {e.response.text}")
                raise RuntimeError(f"Échec de la récupération des détails de l'offre {offer_id}: {e.response.text}")
            except Exception as e:
                logger.error(f"Erreur inattendue lors de la récupération des détails de l'offre {offer_id}: {e}")
                raise RuntimeError(f"Échec inattendu lors de la récupération des détails de l'offre {offer_id}: {e}")


france_travail_offer_service = FranceTravailOfferService()
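The query-string assembly in `search_offers` can be exercised without any HTTP call. A minimal sketch (`build_search_params` is a hypothetical name mirroring the branches above, not repository code):

```python
def build_search_params(motsCles=None, typeContrat=None, codePostal=None,
                        commune=None, distance=None, alternance=None,
                        offresManagerees=None, range="0-14"):
    """Assemble France Travail search parameters (simplified sketch)."""
    params = {"range": range}
    if motsCles:
        params["motsCles"] = motsCles
    if typeContrat:
        params["typeContrat"] = typeContrat
    if alternance is not None:
        params["alternance"] = str(alternance).lower()       # API expects 'true'/'false'
    if offresManagerees is not None:
        params["offresManagerees"] = str(offresManagerees).lower()
    # codePostal takes precedence over commune; both default to a 10 km radius
    if codePostal:
        params["codePostal"] = codePostal
        params["distance"] = distance if distance is not None else 10
    elif commune:
        params["commune"] = commune
        params["distance"] = distance if distance is not None else 10
    return params

params = build_search_params(motsCles="développeur", codePostal="75001", alternance=False)
```

Separating the parameter logic from the `httpx` call makes the precedence rules (postal code over commune, default radius) directly testable.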