diff --git a/.gitignore b/.gitignore
index bef93e3..8b2054a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,4 +8,5 @@ __pycache__
 **/static/2022-articles/*.json
 heritrix/jobs/*
 .vscode
-backups/**
\ No newline at end of file
+backups/**
+**/.env
\ No newline at end of file
diff --git a/chat/.dockerignore b/chat/.dockerignore
new file mode 100644
index 0000000..6d016b9
--- /dev/null
+++ b/chat/.dockerignore
@@ -0,0 +1,3 @@
+.git
+node_modules
+frontend/node_modules
\ No newline at end of file
diff --git a/chat/README.md b/chat/README.md
new file mode 100644
index 0000000..361fd96
--- /dev/null
+++ b/chat/README.md
@@ -0,0 +1,171 @@
+# Bron Chat
+
+Bron Chat is a tool for journalists and researchers. Our mission: to make public government information easy and fast to search through an AI chat. We work every day on expanding and improving Bron Chat. The tool is currently in beta.
+
+## Development
+
+This code was developed by [linksmith](https://github.com/linksmith) during the SVDJ Incubator 2024-2025 programme of the SVDJ, in collaboration with Open State Foundation.
+
+## 🌟 Features
+
+- **AI-powered insights**: Discover connections and patterns in government data
+- **Extensive document database**: Search 3.5 million government documents in one central place
+- **Source references**: Direct links to original documents and downloadable PDFs
+- **Collaboration**: Easily share your search results with colleagues via shareable links
+- **Transparent data**: Made accessible by Open State Foundation, an independent non-profit foundation
+- **Built for journalists**: A reliable, flexible tool developed in the SVDJ Incubator
+
+## 🚀 Project structure
+
+The project consists of two main components:
+
+### Backend (FastAPI)
+
+- Built with FastAPI, a modern Python web framework
+- Integrates with LLM services (Cohere and LiteLLM)
+- Uses Qdrant for vector search
+- MySQL database for storing sessions and messages
+- Implements streaming responses for real-time chat
+
+### Frontend (SvelteKit)
+
+- Built with SvelteKit, a modern JavaScript framework
+- Responsive UI with Tailwind CSS
+- Real-time chat interface
+- Document viewing and sharing options
+
+## 🛠️ Technology stack
+
+### Backend
+- Python 3.x
+- FastAPI
+- SQLAlchemy
+- Qdrant (vector database)
+- Cohere/LiteLLM (LLM services)
+- MySQL
+- Alembic (database migrations)
+
+### Frontend
+- SvelteKit
+- Tailwind CSS
+- TypeScript/JavaScript
+- Markdown rendering
+
+### Infrastructure
+- Docker & Docker Compose
+- Traefik (reverse proxy)
+- Sentry (error tracking)
+- Phoenix (observability)
+
+## 🏗️ Development environment
+
+### Requirements
+- Docker and Docker Compose
+- Node.js (for frontend development)
+- Python 3.x (for backend development)
+
+### Environment variables
+Create a `.env` file in the project root with the following variables:
+
+```
+# General
+ENVIRONMENT=development
+PUBLIC_API_URL=http://localhost:8000/api
+
+# Database
+MYSQL_ROOT_PASSWORD=your_root_password
+MYSQL_DATABASE=bron_chat
+MYSQL_USER=bron_user
+MYSQL_PASSWORD=your_password
+
+# Qdrant
+QDRANT_HOST=qdrant
+QDRANT_PORT=6333
+
+# LLM services
+COHERE_API_KEY=your_cohere_api_key
+LLM_SERVICE=cohere # or litellm
+
+# Allowed origins
+ALLOWED_ORIGINS=http://localhost:5173,http://localhost:8000
+```
+
+### Running the application
+
+#### Development mode
+```bash
+# Start the application in development mode with hot reloading
+docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
+```
+
+#### Production mode
+```bash
+# Start the application in production mode as a background process
+docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
+```
+
+#### Staging mode
+```bash
+# Start the application in staging mode as a background process
+docker-compose -f docker-compose.yml -f docker-compose.stag.yml up -d
+```
+
+## 📝 API documentation
+
+While the application is running, the API documentation is available at:
+- Development: http://localhost:8000/docs
+- Production/Staging: https://your-domain.com/docs
+
+## 👥 Team
+
+Bron Chat was developed by an SVDJ Incubator team consisting of:
+- Jeremy Crowlesmith [linksmith](https://github.com/linksmith)
+- Henri Bouwmeester
+- Joost van de Loo
+
+The project uses data from Bron, a product of Open State Foundation.
+
+## 🔮 Vision
+
+Bron Chat democratises research: searching public government data is now flexible and easy for all kinds of journalists and researchers, both national and regional. This lets journalism achieve more with fewer resources.
+
+Our vision for Bron Chat, and for any services that may be added in the future, is that every journalist in the Netherlands should be able to do research at a high level.
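The `.env` values listed under "Environment variables" above are read by the backend at startup via `os.getenv` in its `Settings` class (`chat/backend/app/config.py` in this diff): every value arrives as a string, so numbers are cast explicitly and `ALLOWED_ORIGINS` is split on commas. A minimal stdlib-only sketch of that pattern, using the variable names from the example `.env` (the `setdefault` calls merely simulate the container environment):

```python
import os

# Simulate the .env values from the example above; in the real app these
# come from the container environment, not from code.
os.environ.setdefault("QDRANT_HOST", "qdrant")
os.environ.setdefault("QDRANT_PORT", "6333")
os.environ.setdefault("ALLOWED_ORIGINS", "http://localhost:5173,http://localhost:8000")

# os.getenv returns a string (or the fallback), so cast and split explicitly,
# mirroring what the backend's Settings class does.
qdrant_host = os.getenv("QDRANT_HOST", "host.docker.internal")
qdrant_port = int(os.getenv("QDRANT_PORT", "6333"))
allowed_origins = os.getenv("ALLOWED_ORIGINS", "").split(",")

print(qdrant_host, qdrant_port, len(allowed_origins))  # → qdrant 6333 2
```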
+
+## 📄 License
+
+This project is licensed under the MIT License:
+
+```
+MIT License
+
+Copyright (c) 2025 Bron
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+```
+
+
+---
+
+## 🤝 SVDJ Incubator 2024-2025
+
+Bron Chat was created during the SVDJ Incubator 2024-2025. The SVDJ Incubator is a grant and mentoring programme of the Stimuleringsfonds voor de Journalistiek (SVDJ), aimed at finding solutions to shared challenges within the journalism sector.
+
+Through knowledge sharing, mentoring and grants, the SVDJ fosters an independent, diverse and future-proof journalism infrastructure in the Netherlands.
+
+For questions about the SVDJ Incubator or the solutions that have emerged from it, visit www.svdj.nl/incubator.
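Once the stack from the "Running the application" section is up, the `/health` endpoint that `chat/backend/app/main.py` adds in this diff returns `{"status": "healthy"}` and makes a convenient smoke test. A minimal stdlib-only sketch, assuming the development setup with the backend on `localhost:8000`:

```python
import json
import urllib.request

HEALTH_PATH = "/health"  # endpoint defined in chat/backend/app/main.py


def is_healthy(raw_body: bytes) -> bool:
    """Interpret the /health response body ({"status": "healthy"})."""
    return json.loads(raw_body).get("status") == "healthy"


def check_health(base_url: str = "http://localhost:8000") -> bool:
    """Fetch the health endpoint and report whether the backend is up."""
    with urllib.request.urlopen(base_url + HEALTH_PATH, timeout=5) as resp:
        return is_healthy(resp.read())
```

Keeping the body-parsing step (`is_healthy`) separate from the network call makes the interpretation testable without a running server.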
\ No newline at end of file diff --git a/chat/backend/Dockerfile.dev b/chat/backend/Dockerfile.dev new file mode 100644 index 0000000..623bd75 --- /dev/null +++ b/chat/backend/Dockerfile.dev @@ -0,0 +1,26 @@ +FROM python:3.9 + +# Install locales package and other necessary tools +RUN apt-get update && apt-get install -y locales locales-all + +# Generate and set the nl_NL.UTF-8 locale +RUN sed -i '/nl_NL.UTF-8/s/^# //g' /etc/locale.gen && \ + locale-gen nl_NL.UTF-8 && \ + update-locale LANG=nl_NL.UTF-8 LC_ALL=nl_NL.UTF-8 + +# Set the locale environment variables +ENV LANG nl_NL.UTF-8 +ENV LANGUAGE nl_NL:nl +ENV LC_ALL nl_NL.UTF-8 +ENV PYTHONPATH=/app/backend + +WORKDIR /app/backend + +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +COPY ./app ./app +COPY entrypoint.sh /entrypoint.sh +RUN chmod +x /entrypoint.sh + +ENTRYPOINT ["/entrypoint.sh"] \ No newline at end of file diff --git a/chat/backend/Dockerfile.prod b/chat/backend/Dockerfile.prod new file mode 100644 index 0000000..623bd75 --- /dev/null +++ b/chat/backend/Dockerfile.prod @@ -0,0 +1,26 @@ +FROM python:3.9 + +# Install locales package and other necessary tools +RUN apt-get update && apt-get install -y locales locales-all + +# Generate and set the nl_NL.UTF-8 locale +RUN sed -i '/nl_NL.UTF-8/s/^# //g' /etc/locale.gen && \ + locale-gen nl_NL.UTF-8 && \ + update-locale LANG=nl_NL.UTF-8 LC_ALL=nl_NL.UTF-8 + +# Set the locale environment variables +ENV LANG nl_NL.UTF-8 +ENV LANGUAGE nl_NL:nl +ENV LC_ALL nl_NL.UTF-8 +ENV PYTHONPATH=/app/backend + +WORKDIR /app/backend + +COPY requirements.txt . 
+RUN pip install --no-cache-dir -r requirements.txt + +COPY ./app ./app +COPY entrypoint.sh /entrypoint.sh +RUN chmod +x /entrypoint.sh + +ENTRYPOINT ["/entrypoint.sh"] \ No newline at end of file diff --git a/chat/backend/alembic.ini b/chat/backend/alembic.ini new file mode 100644 index 0000000..19d4c90 --- /dev/null +++ b/chat/backend/alembic.ini @@ -0,0 +1,37 @@ +[alembic] +script_location = app/migrations +#sqlalchemy.url = mysql://%(MYSQL_USER)s:%(MYSQL_PASSWORD)s@%(MYSQL_HOST)s:%(MYSQL_PORT)s/%(MYSQL_DATABASE)s + +[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S \ No newline at end of file diff --git a/chat/backend/app/__init__.py b/chat/backend/app/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/chat/backend/app/config.py b/chat/backend/app/config.py new file mode 100644 index 0000000..b3785cf --- /dev/null +++ b/chat/backend/app/config.py @@ -0,0 +1,39 @@ +import os +from pydantic_settings import BaseSettings + +class Settings(BaseSettings): + QDRANT_HOST: str = os.getenv("QDRANT_HOST", "host.docker.internal") + QDRANT_PORT: int = int(os.getenv("QDRANT_PORT", 6333)) + DATABASE_URL: str = f"mysql://{os.getenv('MYSQL_USER')}:{os.getenv('MYSQL_PASSWORD')}@{os.getenv('MYSQL_HOST', 'mysql')}/{os.getenv('MYSQL_DATABASE')}" + COHERE_API_KEY: str = os.getenv("COHERE_API_KEY") + COHERE_EMBED_MODEL: str = os.getenv("COHERE_EMBED_MODEL") + COHERE_RERANK_MODEL: str = os.getenv("COHERE_RERANK_MODEL") + SPARSE_EMBED_MODEL: str = os.getenv("SPARSE_EMBED_MODEL") + QDRANT_HYBRID_SEARCH_TIMEOUT: int = 
int(os.getenv("QDRANT_HYBRID_SEARCH_TIMEOUT"))
+    EMBEDDING_QUANTIZATION: str = os.getenv("EMBEDDING_QUANTIZATION")
+    QDRANT_COLLECTION: str = os.getenv("QDRANT_COLLECTION")
+
+    QDRANT_SPARSE_RETRIEVE_LIMIT: int = int(os.getenv("QDRANT_SPARSE_RETRIEVE_LIMIT"))
+    QDRANT_DENSE_RETRIEVE_LIMIT: int = int(os.getenv("QDRANT_DENSE_RETRIEVE_LIMIT"))
+    QDRANT_HYBRID_RETRIEVE_LIMIT: int = int(os.getenv("QDRANT_HYBRID_RETRIEVE_LIMIT"))
+    RERANK_DOC_RETRIEVE_LIMIT: int = int(os.getenv("RERANK_DOC_RETRIEVE_LIMIT"))
+    MMR_DOC_RETRIEVE_LIMIT: int = int(os.getenv("MMR_DOC_RETRIEVE_LIMIT"))
+    RERANK_RELEVANCE_THRESHOLD: float = float(os.getenv("RERANK_RELEVANCE_THRESHOLD"))
+    MMR_DOC_LAMBDA_PARAM: float = float(os.getenv("MMR_DOC_LAMBDA_PARAM"))
+
+    ALLOWED_ORIGINS: str = os.getenv("ALLOWED_ORIGINS", "").split(",")
+    ENVIRONMENT: str = os.getenv("ENVIRONMENT")
+    # Qdrant settings
+    QDRANT_POOL_SIZE: int = int(os.getenv("QDRANT_POOL_SIZE"))
+    QDRANT_POOL_TIMEOUT: int = int(os.getenv("QDRANT_POOL_TIMEOUT"))
+    QDRANT_TIMEOUT: int = int(os.getenv("QDRANT_TIMEOUT"))
+    SENTRY_DSN: str = os.getenv("SENTRY_DSN")
+
+    OTEL_EXPORTER_OTLP_HEADER: str = os.getenv("OTEL_EXPORTER_OTLP_HEADER")
+    PHOENIX_CLIENT_HEADERS: str = os.getenv("PHOENIX_CLIENT_HEADERS")
+    PHOENIX_COLLECTOR_ENDPOINT: str = os.getenv("PHOENIX_COLLECTOR_ENDPOINT")
+    PHOENIX_TRACER_ENDPOINT: str = os.getenv("PHOENIX_TRACER_ENDPOINT")
+    PHOENIX_PROJECT_NAME: str = os.getenv("PHOENIX_PROJECT_NAME")
+    LLM_SERVICE: str = os.getenv("LLM_SERVICE", "cohere")
+
+settings = Settings()
diff --git a/chat/backend/app/database.py b/chat/backend/app/database.py
new file mode 100644
index 0000000..e58b72d
--- /dev/null
+++ b/chat/backend/app/database.py
@@ -0,0 +1,70 @@
+from sqlalchemy import create_engine
+from sqlalchemy.orm import sessionmaker
+from sqlalchemy.orm import declarative_base
+from .config import settings
+import logging
+
+logger = logging.getLogger(__name__)
+
+# Configure the connection pool
+engine =
create_engine(
+    settings.DATABASE_URL,
+    pool_size=10,        # Increase from default of 5
+    max_overflow=20,     # Increase from default of 10
+    pool_timeout=60,     # Increase timeout
+    pool_pre_ping=True,  # Enable connection health checks
+    pool_recycle=28000   # Recycle connections after ~7.8 hours, just under MySQL's default 8-hour wait_timeout
+)
+
+SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+
+Base = declarative_base()
+
+def init_db():
+    # Import all models explicitly to ensure they're registered with SQLAlchemy
+    from .models import (
+        Session,
+        SessionFeedback,
+        Message,
+        Document,
+        MessageFeedback,
+        DocumentFeedback,
+        MessageDocument
+
+    )
+
+    def table_exists(table_name):
+        try:
+            with engine.connect() as conn:
+                return engine.dialect.has_table(conn, table_name)
+        except Exception:
+            return False
+
+    try:
+        # Create all tables that don't exist
+        Base.metadata.create_all(bind=engine)
+        logger.info("Database tables initialized")
+
+        # Log which tables exist for debugging
+        existing_tables = list(Base.metadata.tables.keys())
+        logger.info(f"Existing tables: {existing_tables}")
+
+    except Exception as e:
+        logger.error(f"Error during database initialization: {str(e)}", exc_info=True)
+
+def get_db():
+    db = SessionLocal()
+    try:
+        yield db
+    except Exception:
+        try:
+            db.rollback()
+        except Exception:
+            pass  # Ignore rollback errors
+        raise
+    finally:
+        try:
+            db.close()
+        except Exception:
+            pass  # Ignore close errors
+
diff --git a/chat/backend/app/index_cli.py b/chat/backend/app/index_cli.py
new file mode 100644
index 0000000..6126b53
--- /dev/null
+++ b/chat/backend/app/index_cli.py
@@ -0,0 +1,85 @@
+import argparse
+from qdrant_client import QdrantClient, models
+from app.config import settings
+
+def create_payload_index(field_name: str, field_type: str):
+    """Create a payload index in Qdrant for the specified collection and field."""
+    client = QdrantClient(
+        url=f"http://{settings.QDRANT_HOST}:{settings.QDRANT_PORT}",
+        timeout=settings.QDRANT_TIMEOUT * 2  #
Increased timeout + ) + + try: + if field_type == 'KEYWORD': + print(f"Creating payload index for field '{field_name}' of type KEYWORD in collection '{settings.QDRANT_COLLECTION}'") + result = client.create_payload_index( + collection_name=settings.QDRANT_COLLECTION, + field_name=field_name, + field_schema=models.PayloadSchemaType.KEYWORD, + wait=False + ) + print(f"Result: {result}") + elif field_type == 'DATETIME': + print(f"Creating payload index for field '{field_name}' of type DATETIME in collection '{settings.QDRANT_COLLECTION}'") + result = client.create_payload_index( + collection_name=settings.QDRANT_COLLECTION, + field_name=field_name, + field_schema=models.PayloadSchemaType.DATETIME, + wait=False + ) + print(f"Result: {result}") + except Exception as e: + print(f"Error creating payload index: {str(e)}") + +def delete_payload_index(field_name: str): + """Delete a payload index in Qdrant for the specified collection and field.""" + client = QdrantClient( + url=f"http://{settings.QDRANT_HOST}:{settings.QDRANT_PORT}", + timeout=settings.QDRANT_TIMEOUT * 2 # Increased timeout + ) + + try: + result = client.delete_payload_index( + collection_name=settings.QDRANT_COLLECTION, + field_name=field_name, + wait=False + ) + print(f"Result: {result}") + except Exception as e: + print(f"Error deleting payload index: {str(e)}") + + +def main(): + parser = argparse.ArgumentParser(description='Create or delete a payload index in Qdrant') + + parser.add_argument( + '--field', + '-f', + type=str, + required=True, + help='Name of the field to index or delete' + ) + parser.add_argument( + '--type', + '-t', + type=str, + choices=['KEYWORD', 'DATETIME'], + default='KEYWORD', + help='Type of the field to index (KEYWORD or DATETIME)' + ) + parser.add_argument( + '--delete', + '-d', + action='store_true', + help='Delete the specified payload index instead of creating it' + ) + + args = parser.parse_args() + + if args.delete: + delete_payload_index(args.field) # Call delete 
function + else: + create_payload_index(args.field, args.type) # Call create function + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/chat/backend/app/main.py b/chat/backend/app/main.py new file mode 100644 index 0000000..c85c4c5 --- /dev/null +++ b/chat/backend/app/main.py @@ -0,0 +1,70 @@ +import os +from fastapi import FastAPI, Depends +from fastapi.middleware.cors import CORSMiddleware +from .routers import chat, sessions, feedback, data +from .config import settings +from .database import init_db +import asyncio +import sentry_sdk +from phoenix.otel import register +from openinference.instrumentation.litellm import LiteLLMInstrumentor + +sentry_sdk.init( + dsn=settings.SENTRY_DSN, + # Set traces_sample_rate to 1.0 to capture 100% + # of transactions for tracing. + traces_sample_rate=1.0, + _experiments={ + # Set continuous_profiling_auto_start to True + # to automatically start the profiler on when + # possible. + "continuous_profiling_auto_start": True, + }, +) + +# tracer_provider = register( +# project_name=settings.PHOENIX_PROJECT_NAME, +# endpoint=settings.PHOENIX_TRACER_ENDPOINT +# ) + +# LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider) + +app = FastAPI() + +# Configure CORS +app.add_middleware( + CORSMiddleware, + allow_origins=settings.ALLOWED_ORIGINS, + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + +# Include routers +app.include_router(data.router) +app.include_router(chat.router) +app.include_router(sessions.router) +app.include_router(feedback.router) + + +base_api_url = "/" +if settings.ENVIRONMENT == "development": + base_api_url = "/api/" + + +@app.on_event("startup") +async def startup_event(): + # await asyncio.sleep(10) + init_db() + +@app.get("/") +async def root(): + return {"message": "Welcome to the API"} + +@app.get("/health") +async def health_check(): + return {"status": "healthy"} + +@app.get(base_api_url + "sentry-debug") +async def trigger_error(): + 
division_by_zero = 1 / 0 \ No newline at end of file diff --git a/chat/backend/app/migrations.py b/chat/backend/app/migrations.py new file mode 100644 index 0000000..e792e52 --- /dev/null +++ b/chat/backend/app/migrations.py @@ -0,0 +1,368 @@ +from sqlalchemy import create_engine, text +from .config import settings +from .database import SessionLocal +from .models import Base, Session, Message, Document, MessageDocument +import json +import logging + +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +def drop_new_tables(db): + """Drop the new tables if they exist""" + try: + # Disable foreign key checks before dropping tables + db.execute(text("SET FOREIGN_KEY_CHECKS=0")) + db.execute(text("DROP TABLE IF EXISTS message_documents")) + db.execute(text("DROP TABLE IF EXISTS messages")) + db.execute(text("DROP TABLE IF EXISTS messages_feedback")) + db.execute(text("DROP TABLE IF EXISTS documents")) + db.execute(text("DROP TABLE IF EXISTS documents_feedback")) + db.execute(text("DROP TABLE IF EXISTS sessions_feedback")) + db.execute(text("SET FOREIGN_KEY_CHECKS=1")) + db.commit() + logger.info("Dropped existing tables") + except Exception as e: + logger.error(f"Error dropping tables: {str(e)}") + db.rollback() + +def migrate_up(): + db = SessionLocal() + try: + # Drop existing tables first + drop_new_tables(db) + + # Create new tables with updated schema + engine = create_engine(settings.DATABASE_URL) + Base.metadata.create_all(bind=engine) + db.commit() + logger.info("Created new tables") + # Add message_type column if it doesn't exist + result = db.execute(text(""" + SELECT COUNT(*) + FROM information_schema.columns + WHERE table_name = 'messages' + AND column_name = 'message_type' + AND table_schema = DATABASE() + """)) + + if result.scalar() == 0: + db.execute(text(""" + ALTER TABLE messages + ADD COLUMN message_type VARCHAR(50) NULL + """)) + db.commit() + logger.info("Added message_type column to messages table") + + + # Get all 
existing sessions with filter conditions + result = db.execute(text(""" + SELECT id, name, messages, documents + FROM sessions + WHERE (messages IS NOT NULL AND messages != '[]' AND messages != '') + AND created_at >= '2023-11-18' + AND messages IS NOT NULL + """)) + sessions_data = [(row.id, row.name, row.messages, row.documents) for row in result] + db.commit() + + logger.info(f"Found {len(sessions_data)} valid sessions to migrate") + + # Process each session + for session_id, name, messages_json, documents_json in sessions_data: + try: + messages = json.loads(messages_json) if messages_json else [] + documents = json.loads(documents_json) if documents_json else [] + + logger.info(f"Processing session {session_id} with {len(messages)} messages and {len(documents)} documents") + + # Process documents first + session_docs = {} # Keep track of processed documents + for doc in documents: + try: + chunk_id = doc.get("id") + if not chunk_id: + logger.warning(f"Skipping document without ID in session {session_id}") + continue + + result = db.execute(text(""" + SELECT id FROM documents WHERE chunk_id = :chunk_id + """), {"chunk_id": chunk_id}) + existing_id = result.scalar() + + if existing_id: + doc_id = existing_id + else: + # Execute insert + db.execute(text(""" + INSERT INTO documents ( + chunk_id, content, meta, score, title, url + ) VALUES ( + :chunk_id, :content, :meta, :score, :title, :url + ) ON DUPLICATE KEY UPDATE + content = VALUES(content), + meta = VALUES(meta), + score = VALUES(score), + title = VALUES(title), + url = VALUES(url) + """), { + "chunk_id": chunk_id, + "content": doc.get("content", ""), + "meta": json.dumps(doc.get("metadata", {})), + "score": float(doc.get("score", 0.0)), + "title": doc.get("title", ""), + "url": doc.get("url", "") + }) + + result = db.execute(text("SELECT LAST_INSERT_ID()")) + doc_id = result.scalar() + + db.commit() + session_docs[chunk_id] = doc_id # Store mapping of chunk_id to new numeric id + logger.info(f"Processed 
document {chunk_id} with new ID {doc_id}") + except Exception as e: + logger.error(f"Error processing document {doc.get('id', 'unknown')}: {str(e)}") + db.rollback() + + # Process messages + for idx, msg in enumerate(messages): + try: + # Insert message with auto-incrementing ID + result = db.execute(text(""" + INSERT INTO messages ( + session_id, sequence, role, content, formatted_content + ) VALUES ( + :session_id, :sequence, :role, :content, :formatted_content + ) + """), { + "session_id": session_id, + "sequence": idx, + "role": msg.get("role"), + "content": msg.get("content", ""), + "formatted_content": msg.get("formatted_content", "") + }) + db.commit() + + # Get the auto-generated message ID + message_id = result.lastrowid + logger.info(f"Processed message {message_id}") + + # Link documents to message + if msg.get("role") == "assistant" and session_docs: + for chunk_id, doc_id in session_docs.items(): + try: + db.execute(text(""" + INSERT IGNORE INTO message_documents + (message_id, document_id) + VALUES (:message_id, :document_id) + """), { + "message_id": message_id, + "document_id": doc_id # Using the new numeric ID + }) + db.commit() + except Exception as e: + logger.error(f"Error linking document {doc_id} to message {message_id}: {str(e)}") + db.rollback() + except Exception as e: + logger.error(f"Error processing message {idx} for session {session_id}: {str(e)}") + db.rollback() + + logger.info(f"Successfully processed session {session_id}") + + except Exception as e: + logger.error(f"Error processing session {session_id}: {str(e)}") + db.rollback() + + # Drop old columns + try: + # Check if columns exist before dropping + result = db.execute(text(""" + SELECT COUNT(*) + FROM information_schema.columns + WHERE table_name = 'sessions' + AND column_name IN ('messages', 'documents') + AND table_schema = DATABASE() + """)) + + if result.scalar() > 0: + db.execute(text("ALTER TABLE sessions DROP COLUMN messages")) + db.execute(text("ALTER TABLE sessions 
DROP COLUMN documents")) + db.commit() + logger.info("Successfully dropped old columns") + except Exception as e: + logger.error(f"Error dropping old columns: {str(e)}") + db.rollback() + + # Migrate feedback data + result = db.execute(text(""" + SELECT id, session_id, question, name, email, created_at + FROM feedback + WHERE id IS NOT NULL + """)) + feedback_data = [(row.id, row.session_id, row.question, row.name, row.email, row.created_at) + for row in result] + + # Drop and recreate feedback table with auto-incrementing ID + db.execute(text("DROP TABLE IF EXISTS sessions_feedback")) + db.execute(text(""" + CREATE TABLE sessions_feedback ( + id INT AUTO_INCREMENT PRIMARY KEY, + session_id VARCHAR(36) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + question VARCHAR(2048) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + name VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + email VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + created_at DATETIME, + CONSTRAINT sessions_feedback_ibfk_1 FOREIGN KEY (session_id) REFERENCES sessions (id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci + """)) + + # Reinsert feedback data without the old IDs + for _, session_id, question, name, email, created_at in feedback_data: + db.execute(text(""" + INSERT INTO sessions_feedback (session_id, question, name, email, created_at) + VALUES (:session_id, :question, :name, :email, :created_at) + """), { + "session_id": session_id, + "question": question, + "name": name, + "email": email, + "created_at": created_at + }) + + db.execute(text("DROP TABLE IF EXISTS feedback")) + db.commit() + logger.info(f"Migrated {len(feedback_data)} feedback entries") + + except Exception as e: + logger.error(f"Migration failed: {str(e)}") + db.rollback() + raise e + finally: + db.close() + +def migrate_down(): + db = SessionLocal() + try: + # Remove message_type column if it exists + result = db.execute(text(""" + SELECT COUNT(*) + FROM 
information_schema.columns + WHERE table_name = 'messages' + AND column_name = 'message_type' + AND table_schema = DATABASE() + """)) + + if result.scalar() > 0: + db.execute(text("ALTER TABLE messages DROP COLUMN message_type")) + db.commit() + logger.info("Removed message_type column from messages table") + + # Check if rollback is needed + result = db.execute(text(""" + SELECT EXISTS ( + SELECT 1 + FROM information_schema.columns + WHERE table_name = 'sessions' + AND column_name = 'messages' + ); + """)) + if result.scalar(): + logger.info("Tables already in old format, skipping rollback...") + return + + # Convert back to old format + sessions = db.execute(text("SELECT id FROM sessions")).fetchall() + + # Add back the columns + db.execute(text("ALTER TABLE sessions ADD COLUMN messages JSON")) + db.execute(text("ALTER TABLE sessions ADD COLUMN documents JSON")) + + for session_row in sessions: + session_id = session_row[0] + + # Get all messages for this session + messages = db.execute(text(""" + SELECT * FROM messages + WHERE session_id = :session_id + ORDER BY sequence + """), {"session_id": session_id}).fetchall() + + # Get all documents for this session using new schema + documents = db.execute(text(""" + SELECT DISTINCT d.* + FROM documents d + JOIN message_documents md ON md.document_id = d.id + JOIN messages m ON m.id = md.message_id + WHERE m.session_id = :session_id + """), {"session_id": session_id}).fetchall() + + # Convert to old format + messages_json = [] + for msg in messages: + messages_json.append({ + "role": msg.role, + "content": msg.content, + "formatted_content": msg.formatted_content + }) + + documents_json = [] + for doc in documents: + documents_json.append({ + "id": doc.chunk_id, # Use chunk_id instead of id + "content": doc.content, + "metadata": json.loads(doc.meta) if doc.meta else {} + }) + + # Update session + db.execute(text(""" + UPDATE sessions + SET messages = :messages, + documents = :documents + WHERE id = :id + """), { + 
"id": session_id, + "messages": json.dumps(messages_json), + "documents": json.dumps(documents_json) + }) + + # Drop new tables + db.execute(text("DROP TABLE IF EXISTS message_documents")) + db.execute(text("DROP TABLE IF EXISTS documents")) + db.execute(text("DROP TABLE IF EXISTS messages")) + + # For feedback table, we can't restore the original string IDs + # Just ensure the table exists with string ID format + db.execute(text("DROP TABLE IF EXISTS sessions_feedback")) + db.execute(text(""" + CREATE TABLE sessions_feedback ( + id VARCHAR(36) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, + session_id VARCHAR(36) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + question VARCHAR(2048) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + name VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + email VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, + created_at DATETIME, + PRIMARY KEY (id), + KEY session_id (session_id), + CONSTRAINT sessions_feedback_ibfk_1 FOREIGN KEY (session_id) REFERENCES sessions (id) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci + """)) + + db.commit() + except Exception as e: + logger.error(f"Migration failed: {str(e)}") + db.rollback() + raise e + finally: + db.close() + +if __name__ == "__main__": + import sys + if len(sys.argv) > 1: + if sys.argv[1] == "down": + migrate_down() + elif sys.argv[1] == "force": + logger.info("Forcing migration...") + drop_new_tables(SessionLocal()) + migrate_up() + else: + migrate_up() \ No newline at end of file diff --git a/chat/backend/app/migrations/__init__.py b/chat/backend/app/migrations/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/chat/backend/app/migrations/env.py b/chat/backend/app/migrations/env.py new file mode 100644 index 0000000..87f6fdf --- /dev/null +++ b/chat/backend/app/migrations/env.py @@ -0,0 +1,61 @@ +import os +from logging.config import fileConfig + +from sqlalchemy import engine_from_config +from 
sqlalchemy import pool + +from alembic import context + +from app.models import Base +from app.config import settings + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +if config.config_file_name is not None: + fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +target_metadata = Base.metadata + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode.""" + + context.configure( + url=settings.DATABASE_URL, + target_metadata=target_metadata, + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode.""" + configuration = config.get_section(config.config_ini_section) + configuration["sqlalchemy.url"] = settings.DATABASE_URL + connectable = engine_from_config( + configuration, + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + with connectable.connect() as connection: + context.configure( + connection=connection, target_metadata=target_metadata + ) + + with context.begin_transaction(): + context.run_migrations() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() \ No newline at end of file diff --git a/chat/backend/app/migrations/script.py.mako b/chat/backend/app/migrations/script.py.mako new file mode 100644 index 0000000..46200be --- /dev/null +++ b/chat/backend/app/migrations/script.py.mako @@ -0,0 +1,26 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. 
+revision: str = ${repr(up_revision)} +down_revision: Union[str, None] = ${repr(down_revision)} +branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} +depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} + + +def upgrade() -> None: + ${upgrades if upgrades else "pass"} + + +def downgrade() -> None: + ${downgrades if downgrades else "pass"} \ No newline at end of file diff --git a/chat/backend/app/migrations/versions/8c9f2cc9c395_0001_add_search_filter_to_messages.py b/chat/backend/app/migrations/versions/8c9f2cc9c395_0001_add_search_filter_to_messages.py new file mode 100644 index 0000000..4a4f26d --- /dev/null +++ b/chat/backend/app/migrations/versions/8c9f2cc9c395_0001_add_search_filter_to_messages.py @@ -0,0 +1,31 @@ +"""0001_add_search_filter_to_messages + +Revision ID: 8c9f2cc9c395 +Revises: +Create Date: 2024-12-11 13:41:49.304106 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa + + +# revision identifiers, used by Alembic. +revision: str = '8c9f2cc9c395' +down_revision: Union[str, None] = None +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + # ### commands auto generated by Alembic - please adjust! ### + op.add_column('messages', sa.Column('search_filters', sa.JSON(), nullable=True)) + # ### end Alembic commands ### + op.execute("ALTER TABLE messages MODIFY search_filters JSON AFTER formatted_content;") + + +def downgrade() -> None: + # ### commands auto generated by Alembic - please adjust! 
### + op.drop_column('messages', 'search_filters') + # ### end Alembic commands ### \ No newline at end of file diff --git a/chat/backend/app/migrations/versions/9d8cd23216bd_0002_add_message_augmentations.py b/chat/backend/app/migrations/versions/9d8cd23216bd_0002_add_message_augmentations.py new file mode 100644 index 0000000..8275046 --- /dev/null +++ b/chat/backend/app/migrations/versions/9d8cd23216bd_0002_add_message_augmentations.py @@ -0,0 +1,42 @@ +"""0002_add_message_augmentations + +Revision ID: 9d8cd23216bd +Revises: 8c9f2cc9c395 +Create Date: 2024-12-11 19:48:45.886314 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa + + +# revision identifiers, used by Alembic. +revision: str = '9d8cd23216bd' +down_revision: Union[str, None] = '8c9f2cc9c395' +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + # ### commands auto generated by Alembic - please adjust! ### + op.add_column('messages', sa.Column('user_query', sa.Text(), nullable=True)) + + op.execute("ALTER TABLE messages MODIFY user_query VARCHAR(255) AFTER search_filters;") + + op.add_column('messages', sa.Column('rewritten_query_for_vector_base', sa.Text(), nullable=True)) + + op.execute("ALTER TABLE messages MODIFY rewritten_query_for_vector_base VARCHAR(255) AFTER user_query;") + + op.add_column('messages', sa.Column('rewritten_query_for_llm', sa.Text(), nullable=True)) + + op.execute("ALTER TABLE messages MODIFY rewritten_query_for_llm VARCHAR(255) AFTER rewritten_query_for_vector_base;") + # ### end Alembic commands ### + + +def downgrade() -> None: + # ### commands auto generated by Alembic - please adjust! 
### + op.drop_column('messages', 'rewritten_query_for_llm') + op.drop_column('messages', 'rewritten_query_for_vector_base') + op.drop_column('messages', 'user_query') + # ### end Alembic commands ### \ No newline at end of file diff --git a/chat/backend/app/migrations/versions/f0f56bbf8ad6_0003_add_document_rerank_score.py b/chat/backend/app/migrations/versions/f0f56bbf8ad6_0003_add_document_rerank_score.py new file mode 100644 index 0000000..5509095 --- /dev/null +++ b/chat/backend/app/migrations/versions/f0f56bbf8ad6_0003_add_document_rerank_score.py @@ -0,0 +1,57 @@ +"""0003_add_document_rerank_score + +Revision ID: f0f56bbf8ad6 +Revises: 9d8cd23216bd +Create Date: 2024-12-12 22:02:49.445285 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +from sqlalchemy.dialects import mysql + +# revision identifiers, used by Alembic. +revision: str = 'f0f56bbf8ad6' +down_revision: Union[str, None] = '9d8cd23216bd' +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + # ### commands auto generated by Alembic - please adjust! ### + op.add_column('documents', sa.Column('rerank_score', sa.Float(), nullable=True)) + + op.execute("ALTER TABLE documents MODIFY rerank_score FLOAT AFTER score;") + + op.alter_column('messages', 'user_query', + existing_type=mysql.VARCHAR(collation='utf8mb4_unicode_ci', length=255), + type_=sa.Text(), + existing_nullable=True) + op.alter_column('messages', 'rewritten_query_for_vector_base', + existing_type=mysql.VARCHAR(collation='utf8mb4_unicode_ci', length=255), + type_=sa.Text(), + existing_nullable=True) + op.alter_column('messages', 'rewritten_query_for_llm', + existing_type=mysql.VARCHAR(collation='utf8mb4_unicode_ci', length=255), + type_=sa.Text(), + existing_nullable=True) + # ### end Alembic commands ### + + +def downgrade() -> None: + # ### commands auto generated by Alembic - please adjust! 
### + op.alter_column('messages', 'rewritten_query_for_llm', + existing_type=sa.Text(), + type_=mysql.VARCHAR(collation='utf8mb4_unicode_ci', length=255), + existing_nullable=True) + op.alter_column('messages', 'rewritten_query_for_vector_base', + existing_type=sa.Text(), + type_=mysql.VARCHAR(collation='utf8mb4_unicode_ci', length=255), + existing_nullable=True) + op.alter_column('messages', 'user_query', + existing_type=sa.Text(), + type_=mysql.VARCHAR(collation='utf8mb4_unicode_ci', length=255), + existing_nullable=True) + op.drop_column('documents', 'rerank_score') + # ### end Alembic commands ### \ No newline at end of file diff --git a/chat/backend/app/models.py b/chat/backend/app/models.py new file mode 100644 index 0000000..4f2ef9f --- /dev/null +++ b/chat/backend/app/models.py @@ -0,0 +1,114 @@ +from sqlalchemy import Column, String, JSON, DateTime, ForeignKey, Integer, Text, Float +from sqlalchemy.sql import func +from .database import Base +from sqlalchemy.orm import relationship +import uuid +from datetime import datetime +from enum import Enum + +class FeedbackType(Enum): + POSITIVE = "positive" + NEGATIVE = "negative" + RELEVANT = "relevant" + IRRELEVANT = "irrelevant" + + +class MessageFeedback(Base): + __tablename__ = "messages_feedback" + + id = Column(Integer, primary_key=True, autoincrement=True) + message_id = Column(Integer, ForeignKey("messages.id", ondelete="CASCADE"), nullable=False, unique=True) + feedback_type = Column(String(10), nullable=True, default=None) + notes = Column(String(2048), nullable=True) + created_at = Column(DateTime(timezone=True), server_default=func.now()) + updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now()) + + message = relationship("Message", back_populates="feedback") + +class SessionFeedback(Base): + __tablename__ = "sessions_feedback" + + id = Column(Integer, primary_key=True, autoincrement=True) + session_id = Column(String(36), ForeignKey('sessions.id'), 
nullable=True) + question = Column(String(2048)) + name = Column(String(255), nullable=True) + email = Column(String(255), nullable=True) + created_at = Column(DateTime, default=datetime.utcnow) + + session = relationship("Session", back_populates="feedback", uselist=False) + +class Session(Base): + __tablename__ = "sessions" + + id = Column(String(36), primary_key=True) + name = Column(String(255)) + created_at = Column(DateTime(timezone=True), server_default=func.now()) + updated_at = Column(DateTime(timezone=True), onupdate=func.now()) + + messages = relationship("Message", back_populates="session", order_by="Message.sequence") + feedback = relationship("SessionFeedback", back_populates="session") + +class Message(Base): + __tablename__ = "messages" + + id = Column(Integer, primary_key=True, autoincrement=True) + session_id = Column(String(36), ForeignKey('sessions.id')) + sequence = Column(Integer) + role = Column(String(50)) + content = Column(Text) + user_query = Column(Text, nullable=True) + rewritten_query_for_vector_base = Column(Text, nullable=True) + rewritten_query_for_llm = Column(Text, nullable=True) + formatted_content = Column(Text, nullable=True) + message_type = Column(String(50), nullable=True) + created_at = Column(DateTime(timezone=True), server_default=func.now()) + search_filters = Column(JSON, nullable=True) + + session = relationship("Session", back_populates="messages") + documents = relationship( + "Document", + secondary="message_documents", + back_populates="messages", + overlaps="documents,messages" + ) + feedback = relationship("MessageFeedback", back_populates="message", uselist=False) + +class Document(Base): + __tablename__ = "documents" + + id = Column(Integer, primary_key=True, autoincrement=True) + chunk_id = Column(String(36), nullable=False) + content = Column(Text) + meta = Column(JSON) + score = Column(Float) + rerank_score = Column(Float, nullable=True) + title = Column(String(255), nullable=True) + url = 
Column(String(1024), nullable=True) + created_at = Column(DateTime(timezone=True), server_default=func.now()) + + messages = relationship( + "Message", + secondary="message_documents", + back_populates="documents", + overlaps="documents,messages" + ) + feedback = relationship("DocumentFeedback", back_populates="document", uselist=False) + +class MessageDocument(Base): + __tablename__ = "message_documents" + + message_id = Column(Integer, ForeignKey('messages.id'), primary_key=True) + document_id = Column(Integer, ForeignKey('documents.id'), primary_key=True) + created_at = Column(DateTime(timezone=True), server_default=func.now()) + +class DocumentFeedback(Base): + __tablename__ = "documents_feedback" + + id = Column(Integer, primary_key=True, autoincrement=True) + document_id = Column(Integer, ForeignKey("documents.id", ondelete="CASCADE"), nullable=False, unique=True) + feedback_type = Column(String(10), nullable=True, default=None) + notes = Column(String(2048), nullable=True) + created_at = Column(DateTime(timezone=True), server_default=func.now()) + updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now()) + + document = relationship("Document", back_populates="feedback") diff --git a/chat/backend/app/routers/__init__.py b/chat/backend/app/routers/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/chat/backend/app/routers/chat.py b/chat/backend/app/routers/chat.py new file mode 100644 index 0000000..ca3122f --- /dev/null +++ b/chat/backend/app/routers/chat.py @@ -0,0 +1,483 @@ +from fastapi import APIRouter, Depends, Query +from fastapi.responses import StreamingResponse +import logging +import json +from asyncio import sleep +from ..database import get_db +from sqlalchemy.orm import Session as SQLAlchemySession +from ..services.session_service import SessionService +from ..services.cohere_service import CohereService +from ..services.base_llm_service import BaseLLMService +from ..services.litellm_service 
import LiteLLMService +from ..services.qdrant_service import QdrantService +from ..services.bron_service import BronService +from ..schemas import ChatMessage, ChatDocument, SessionCreate, SessionUpdate, Session, MessageRole, MessageType, SearchFilter +from ..config import settings +from typing import List, Dict, AsyncGenerator +from ..text_utils import get_formatted_date_english, format_text +import time +from datetime import date, datetime +from fastapi.responses import JSONResponse + +router = APIRouter() + +# Set up logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +ENVIRONMENT = settings.ENVIRONMENT + +base_api_url = "/" +if ENVIRONMENT == "development": + base_api_url = "/api/" + + +@router.get(base_api_url + "chat") +async def chat_endpoint( + query: str = Query(..., description="The chat message content"), + session_id: str = Query(None, description="The session ID"), + locations: List[str] = Query(None, description="List of location IDs to filter by"), + start_date: date = Query(None, description="Start date to filter by"), + end_date: date = Query(None, description="End date to filter by"), + rewrite_query: bool = Query(True, description="Whether to enable query rewriting"), + db: SQLAlchemySession = Depends(get_db) +): + try: + # Start timer for request duration tracking + start_time = time.time() + + llm_service = None + if settings.LLM_SERVICE.lower() == "litellm": + llm_service = LiteLLMService() + else: + llm_service = CohereService() + + qdrant_service = QdrantService(llm_service) + session_service = SessionService(db) + bron_service = BronService() + + logger.debug(f"query: {query}") + logger.debug(f"locations: {locations}") + logger.debug(f"start_date: {start_date}") + logger.debug(f"end_date: {end_date}") + logger.debug(f"rewrite_query: {rewrite_query}") + + date_range = None + if start_date is not None and end_date is not None: + date_range = [ + datetime.combine(start_date, datetime.min.time()), + 
datetime.combine(end_date, datetime.max.time()) + ] + + locations_objects = await bron_service.get_locations_by_ids(location_ids=locations) + + logger.debug(f"location_names: {locations_objects}") + search_filters = SearchFilter( + locations=locations_objects, + date_range=date_range, + rewrite_query=rewrite_query + ) + logger.info(f"search_filters: {search_filters}") + + return StreamingResponse( + event_generator( + session_id, + query, + start_time, + session_service, + llm_service, + qdrant_service, + search_filters + ), + media_type="text/event-stream", + headers={ + "Cache-Control": "no-cache", + "Connection": "keep-alive", + "X-Accel-Buffering": "no", + } + ) + except Exception as e: + logger.error(f"Error in chat_endpoint: {e}", exc_info=True) + return JSONResponse(status_code=500, content={"detail": "Internal Server Error"}) + +async def event_generator( + session_id: str, + user_query: str, + start_time: float, + session_service: SessionService, + llm_service: BaseLLMService, + qdrant_service: QdrantService, + search_filters: SearchFilter +): + # Initialize status message content + status_content = [] + + # First status message + status_msg = "Bron start met zoeken" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + + session = session_service.get_session_with_relations(session_id) + user_message = llm_service.get_user_message(user_query, search_filters) + rewritten_query_for_llm = llm_service.rewrite_query_for_llm(user_message) + + if len(session.messages) == 0: + # Create new session with initial messages + is_initial_message = True + rag_system_message = llm_service.get_rag_system_message() + + # if search_filters.rewrite_query: + user_message.user_query = user_query + user_message.rewritten_query_for_llm = rewritten_query_for_llm + + rewritten_query_for_vector_base = llm_service.rewrite_query_for_vector_base(user_message) + 
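Every chunk that `event_generator` yields uses the same Server-Sent-Events framing: a `data: <json>` line terminated by a blank line. A standalone sketch of that framing, assuming a helper name `sse_event` that is illustrative only and not part of this diff:

```python
import json

def sse_event(payload: dict) -> str:
    # SSE frames are "data: <payload>" followed by a blank line;
    # this mirrors the 'data: ' + json.dumps(...) + "\n\n" pattern above.
    return "data: " + json.dumps(payload) + "\n\n"

# A status event as the generator emits it:
frame = sse_event({"type": "status", "role": "system", "content": "Bron start met zoeken"})
```

The trailing blank line is what lets `EventSource` clients on the frontend split the stream into discrete events.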
user_message.formatted_content = rewritten_query_for_vector_base + user_message.rewritten_query_for_vector_base = rewritten_query_for_vector_base + # else: + # user_message.formatted_content = user_query + + # # new fields + # user_message.user_query = user_query + # user_message.rewritten_query_for_llm = rewritten_query_for_llm + logger.debug(f"user_message: {user_message}") + session = session_service.add_messages( + session_id=session.id, + messages=[rag_system_message, user_message] + ) + + else: + is_initial_message = False + + user_message.user_query = user_query + user_message.rewritten_query_for_llm = rewritten_query_for_llm + + rewritten_query_for_vector_base = llm_service.rewrite_query_with_history_for_vector_base(user_message, session.messages) + user_message.formatted_content = rewritten_query_for_vector_base + user_message.rewritten_query_for_vector_base = rewritten_query_for_vector_base + + session = session_service.add_message( + session_id=session.id, + message=user_message + ) + logger.info(f"Using existing session: {session.id}") + + try: + # Second status message + status_msg = f"Zoekopdracht herschreven van '{user_message.user_query}' naar '{user_message.rewritten_query_for_vector_base}'" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + + + yield 'data: ' + json.dumps({ + "type": "session", + "session_id": session.id + }) + "\n\n" + await sleep(0) + + # Third status message + status_msg = "Documenten worden gezocht" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + + try: + # Use the formatted_content (rewritten query) from the last message + relevant_docs = qdrant_service.retrieve_relevant_documents( + user_message.rewritten_query_for_vector_base, + locations=search_filters.locations, + 
date_range=search_filters.date_range + ) + logger.debug(f"Relevant documents: {relevant_docs}") + except Exception as e: + logger.error(f"Error retrieving documents: {e}") + + status_msg = "Er is een fout opgetreden bij het zoeken naar documenten" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + return + + if not relevant_docs: + status_msg = "Er konden geen relevante documenten worden gevonden" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + return + + session_documents = session_service.get_documents(session) + if not session_documents: + combined_relevant_docs = relevant_docs + else: + if isinstance(session_documents[0], ChatDocument): + combined_relevant_docs = relevant_docs + else: + combined_relevant_docs = relevant_docs + session_documents + + reordered_relevant_docs = qdrant_service.reorder_documents_by_publication_date(combined_relevant_docs) + + yield 'data: ' + json.dumps({ + "type": "documents", + "role": "system", + "documents": reordered_relevant_docs + }) + "\n\n" + await sleep(0) + + relevant_docs_count = len(relevant_docs) + + status_msg = f"{relevant_docs_count} nieuwe documenten gevonden" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + + + status_msg = f"Antwoord op deze vraag wordt gegenereerd door AI: '{user_message.rewritten_query_for_llm}'" + status_content.append(status_msg) + yield 'data: ' + json.dumps({ + "type": "status", + "role": "system", + "content": status_msg + }) + "\n\n" + await sleep(0) + + # Save the status messages to the database + status_message = session_service.add_and_get_message( + session_id=session.id, + message=ChatMessage( + role=MessageRole.SYSTEM, + 
content="\n".join(status_content), + message_type=MessageType.STATUS + ) + ) + + async for response in generate_full_response( + llm_service, + session_service, + session.messages, + relevant_docs, + is_initial_message, + session.id, + user_message, + status_message, + start_time + ): + yield 'data: ' + json.dumps(response) + "\n\n" + await sleep(0) + + except Exception as e: + logger.error(f"Error in chat_endpoint: {e}", exc_info=True) + yield 'data: ' + json.dumps({"type": "error", "content": str(e)}) + "\n\n" + await sleep(0) + + finally: + # Send a proper close event with data + yield 'event: close\n\ndata: {"type": "end"}\n\n' + await sleep(0) + +async def generate_full_response( + llm_service : BaseLLMService, + session_service: SessionService, + session_messages: List[ChatMessage], + relevant_docs: List[Dict], + is_initial_message: bool, + session_id: str, + user_message: ChatMessage, + status_message: ChatMessage, + start_time: float +): + logger.debug(f"Generating full response for query: {user_message.content}, rewritten query: {user_message.formatted_content}") + full_text = "" + text_formatted_with_citations = "" + citations = [] + + async for event in generate_response(llm_service, session_messages, relevant_docs): + if event["type"] == "status": + yield { + "type": "status", + "role": "assistant", + "content": event["content"], + "content_original": event["content"], + } + elif event["type"] == "text": + full_text += event["content"] + yield { + "type": "partial", + "role": "assistant", + "content": event["content"], + } + elif event["type"] == "citation": + citations.append(event["content"]) + text_formatted_with_citations = format_text(full_text, citations) + yield { + "type": "citation", + "role": "assistant", + "content": text_formatted_with_citations, + "content_original": full_text, + "citations": citations, + } + + if not full_text: + status_msg = "\nEr konden geen relevante documenten worden gevonden om de vraag te beantwoorden" + 
status_message.content += status_msg + session_service.update_message( + session_id=session_id, + message=status_message + ) + yield { + "type": "status", + "role": "assistant", + "content": status_msg, + "content_original": status_msg + } + else: + if is_initial_message: + try: + chat_name = llm_service.create_chat_session_name(user_message) + session_service.update_session_name(session_id=session_id, name=chat_name) + except Exception as e: + logger.error(f"Error creating session name: {e}", exc_info=True) + + try: + if text_formatted_with_citations: + text_formatted = text_formatted_with_citations + else: + text_formatted = format_text(full_text, []) + + session_service.add_message( + session_id=session_id, + message=ChatMessage( + role=MessageRole.ASSISTANT, + message_type=MessageType.ASSISTANT_MESSAGE, + content=full_text, + formatted_content=text_formatted, + documents = [ + ChatDocument( + chunk_id=doc.get('chunk_id'), + score=doc.get('score'), + rerank_score=doc.get('rerank_score'), + content=doc.get('content', ''), + title=doc.get('title', ''), + url=doc.get('url', '') + ) + for doc in relevant_docs + ] + ) + ) + except Exception as e: + logger.error(f"Error updating session: {e}", exc_info=True) + + status_msg = f"\nAntwoord gegenereerd in {time.time() - start_time:.2f} seconden" + status_message.content += status_msg + session_service.update_message(message=status_message) + + yield { + "type": "status", + "role": "system", + "content": status_msg + } + + text_formatted_with_citations = format_text(full_text, citations) + session = session_service.get_session_with_relations(session_id) + # Remove system messages from the session + session.messages = [msg for msg in session.messages if msg.message_type != MessageType.SYSTEM_MESSAGE] + + yield { + "type": "full", + "session": session.model_dump() + } + +async def generate_response(llm_service: BaseLLMService, messages: List[ChatMessage], relevant_docs: List[Dict]) -> AsyncGenerator[Dict, None]: + 
logger.debug(f"Generating response for messages and documents: {messages}") + + formatted_docs = [{ + 'id': doc['chunk_id'], + "data": { + "title": doc['data']['title'], + "snippet": doc['data']['content'], + "publication date": get_formatted_date_english( + doc['data']['published'] + ), + "municipality": doc['data']['location_name'], + "source": BaseLLMService.get_human_readable_source( + doc['data']['source'] + ), + "type": doc['data']['type'], + } + } for doc in relevant_docs] + + current_citation = None + first_citation = True + + try: + for event in llm_service.chat_stream(messages, formatted_docs): + if event: + if hasattr(event, 'type'): + if event.type == "content-delta": + if hasattr(event, 'delta') and hasattr(event.delta, 'message') and hasattr(event.delta.message, 'content'): + yield { + "type": "text", + "content": event.delta.message.content.text + } + elif event.type == 'citation-start': + if first_citation: + yield { + "type": "status", + "content": "De bronnen om deze tekst te onderbouwen worden er nu bij gezocht." 
+ } + first_citation = False + + if (hasattr(event, 'delta') and + hasattr(event.delta, 'message') and + hasattr(event.delta.message, 'citations')): + + document_ids = [] + if hasattr(event.delta.message.citations, 'sources') and event.delta.message.citations.sources: + document_ids = [source.document.get('id') for source in event.delta.message.citations.sources if hasattr(source, 'document')] + + current_citation = { + 'start': event.delta.message.citations.start, + 'end': event.delta.message.citations.end, + 'text': event.delta.message.citations.text, + 'document_ids': document_ids + } + + elif event.type == 'citation-end': + if current_citation: + yield { + "type": "citation", + "content": current_citation + } + current_citation = None + except GeneratorExit: + logger.info("Generator closed by client") + return + except Exception as e: + logger.error(f"Error in generate_response: {e}") + raise + finally: + logger.info("Exiting generate_response") diff --git a/chat/backend/app/routers/data.py b/chat/backend/app/routers/data.py new file mode 100644 index 0000000..851ad51 --- /dev/null +++ b/chat/backend/app/routers/data.py @@ -0,0 +1,30 @@ +from fastapi import APIRouter, HTTPException, Depends +import logging +from ..config import settings +from sqlalchemy.orm import Session as SQLAlchemySession +from ..database import get_db +from ..services.bron_service import BronService + +router = APIRouter() + +# Set up logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +ENVIRONMENT = settings.ENVIRONMENT + +base_api_url = "/" +if ENVIRONMENT == "development": + base_api_url = "/api/" + + +@router.get(base_api_url + "locations") +async def get_locations(db: SQLAlchemySession = Depends(get_db)): + """Return a list of available locations""" + + bron_service = BronService() + try: + return await bron_service.get_locations() + except Exception as e: + logger.error(f"Error fetching locations: {e}") + raise HTTPException(status_code=500, 
detail="Error fetching locations") diff --git a/chat/backend/app/routers/feedback.py b/chat/backend/app/routers/feedback.py new file mode 100644 index 0000000..2058c1a --- /dev/null +++ b/chat/backend/app/routers/feedback.py @@ -0,0 +1,167 @@ +from fastapi import APIRouter, Depends, HTTPException +from sqlalchemy.orm import Session +import logging +from uuid import UUID +from ..database import get_db +from ..services.feedback_service import FeedbackService +from ..services.session_service import SessionService +from ..models import FeedbackType +from ..config import settings +from ..schemas import ( + MessageFeedbackCreate, + MessageFeedbackUpdate, + SessionFeedbackCreate, + DocumentFeedbackCreate, + DocumentFeedbackUpdate, + FeedbackCreate, + MessageFeedbackTypeRequest, + MessageFeedbackNotesRequest, + FeedbackCreateRequest, + DocumentFeedbackTypeRequest, + DocumentFeedbackNotesRequest +) + +router = APIRouter() + +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +ENVIRONMENT = settings.ENVIRONMENT + +base_api_url = "/" +if ENVIRONMENT == "development": + base_api_url = "/api/" + +async def get_feedback_service(db: Session = Depends(get_db)) -> FeedbackService: + return FeedbackService(db) + +async def get_session_service(db: Session = Depends(get_db)) -> SessionService: + return SessionService(db) + +@router.post(base_api_url + "feedback/messages/type/{message_id}") +async def submit_message_feedback_type( + message_id: int, + feedback: MessageFeedbackTypeRequest, + feedback_service: FeedbackService = Depends(get_feedback_service) +): + existing_feedback = feedback_service.get_message_feedback(message_id) + + if existing_feedback: + return feedback_service.update_message_feedback( + MessageFeedbackUpdate( + message_id=message_id, + feedback_type=feedback.feedback_type, + notes="" + ) + ) + else: + return feedback_service.create_message_feedback( + MessageFeedbackCreate( + message_id=str(message_id), + 
feedback_type=feedback.feedback_type + ) + ) + +@router.post(base_api_url + "feedback/messages/notes/{message_id}") +async def submit_message_feedback_notes( + message_id: int, + feedback: MessageFeedbackNotesRequest, + feedback_service: FeedbackService = Depends(get_feedback_service) +): + return feedback_service.update_message_feedback( + MessageFeedbackUpdate( + message_id=message_id, + notes=feedback.notes + ) + ) + +# @router.post(base_api_url + "feedback/documents/{document_id}") +# def set_document_feedback( +# document_id: str, +# feedback: DocumentFeedbackRequest, +# db: Session = Depends(get_db) +# ): +# try: +# feedback_type = FeedbackType[feedback.feedback_type.upper()] +# feedback_service = FeedbackService(db) +# return feedback_service.set_document_feedback( +# document_id=document_id, +# feedback_type=feedback_type, +# notes=feedback.notes +# ) +# except KeyError: +# raise HTTPException( +# status_code=422, +# detail=f"Invalid feedback type. Must be one of: {[t.name for t in FeedbackType]}" + # ) + +@router.post(base_api_url + "feedback/documents/type/{document_id}") +async def submit_document_feedback_type( + document_id: int, + feedback: DocumentFeedbackTypeRequest, + feedback_service: FeedbackService = Depends(get_feedback_service) +): + existing_feedback = feedback_service.get_document_feedback(document_id) + + if existing_feedback: + return feedback_service.update_document_feedback( + DocumentFeedbackUpdate( + document_id=document_id, + feedback_type=feedback.feedback_type, + notes="" + ) + ) + else: + return feedback_service.create_document_feedback( + DocumentFeedbackCreate( + document_id=document_id, + feedback_type=feedback.feedback_type + ) + ) + +@router.post(base_api_url + "feedback/documents/notes/{document_id}") +async def submit_document_feedback_notes( + document_id: int, + feedback: DocumentFeedbackNotesRequest, + feedback_service: FeedbackService = Depends(get_feedback_service) +): + return feedback_service.update_document_feedback( 
+ DocumentFeedbackUpdate( + document_id=document_id, + notes=feedback.notes + ) + ) + + +@router.post(base_api_url + "feedback/{session_id}") +def create_session_feedback( + session_id: str, + feedback: FeedbackCreateRequest, + feedback_service: FeedbackService = Depends(get_feedback_service), + session_service: SessionService = Depends(get_session_service) +): + session = session_service.get_session(session_id) + if session is None: + raise HTTPException(status_code=404, detail="Session not found") + + return feedback_service.create_session_feedback( + SessionFeedbackCreate( + question=feedback.question, + name=feedback.name, + email=feedback.email, + session_id=session_id + ) + ) + +@router.post(base_api_url + "feedback") +def create_feedback( + feedback: FeedbackCreateRequest, + feedback_service: FeedbackService = Depends(get_feedback_service) +): + return feedback_service.create_feedback( + FeedbackCreate( + question=feedback.question, + name=feedback.name, + email=feedback.email, + ) + ) \ No newline at end of file diff --git a/chat/backend/app/routers/sessions.py b/chat/backend/app/routers/sessions.py new file mode 100644 index 0000000..c80c35e --- /dev/null +++ b/chat/backend/app/routers/sessions.py @@ -0,0 +1,118 @@ +from fastapi import APIRouter, Depends, HTTPException +from sqlalchemy.orm import Session as SQLAlchemySession +from ..schemas import SessionCreate, MessageType, MessageRole +from ..services.session_service import SessionService +from ..database import get_db +from ..services.qdrant_service import QdrantService +import logging +from datetime import datetime +from ..config import settings +from ..services.cohere_service import CohereService +from ..services.litellm_service import LiteLLMService +from ..services.bron_service import BronService + +router = APIRouter() + +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +ENVIRONMENT = settings.ENVIRONMENT + +base_api_url = "/" +if ENVIRONMENT == "development": + base_api_url = "/api/" + +@router.get(base_api_url + "sessions/{session_id}") +async def get_session(session_id: str, db: SQLAlchemySession = Depends(get_db)): + logger.debug(f"Getting session with id: {session_id}") + session_service = SessionService(db) + + # Use the configured LLM service + if settings.LLM_SERVICE.lower() == "litellm": + llm_service = LiteLLMService() + else: + llm_service = CohereService() + + qdrant_service = QdrantService(llm_service) + bron_service = BronService() + + session = session_service.get_session_with_relations(session_id) + if session is None: + raise HTTPException(status_code=404, detail="Session not found") + + documents = [] + for message in session.messages: + documents.extend(message.documents) + + logger.debug(f"Found {len(documents)} documents in MySQL for session {session_id}") + + qdrant_documents = qdrant_service.get_documents_by_ids(documents) + logger.info(f"Found {len(qdrant_documents)} documents in Qdrant for session {session_id}") + + # Remove system messages (except status messages) from the session + messages = [] + for message in session.messages: + if (message.role == MessageRole.SYSTEM and + message.message_type != MessageType.STATUS): + continue + + if message.role == MessageRole.ASSISTANT: + message.content = message.get_param("formatted_content") + + messages.append(message) + + locations = await bron_service.get_locations() + + response = { + "id": session.id, + "name": session.name, + "messages": messages, + "documents": qdrant_documents, + "locations": locations + } + + return response + +@router.post(base_api_url + "new_session") +async def create_session(db: SQLAlchemySession = Depends(get_db)): + logger.debug("Creating new session") + now = datetime.now() + session_service = SessionService(db) + return session_service.create_session( + SessionCreate( + name=f"Sessie {now.strftime('%Y-%m-%d %H:%M')}", + messages=[] + ) + ) + +@router.post(base_api_url + "sessions/{session_id}/clone") +async def clone_session(session_id: str, db: SQLAlchemySession = 
Depends(get_db)): + logger.debug(f"Cloning session with id: {session_id}") + session_service = SessionService(db) + + # Get the original session with messages + original_session = session_service.get_session_with_relations(session_id) + + new_session = session_service.create_session( + SessionCreate( + name=f"Copy of {original_session.name}", + messages=[] + ) + ) + + # Get messages in chronological order + sorted_messages = sorted(original_session.messages, key=lambda x: x.sequence) + + # Get only user messages in order + user_messages = [msg for msg in sorted_messages if msg.role == MessageRole.USER] + + # Return the new session ID and the messages to replay + return { + "session_id": new_session.id, + "messages": [ + { + "content": msg.content, + "message_type": msg.message_type + } for msg in user_messages + ] + } diff --git a/chat/backend/app/schemas.py b/chat/backend/app/schemas.py new file mode 100644 index 0000000..303bd84 --- /dev/null +++ b/chat/backend/app/schemas.py @@ -0,0 +1,256 @@ +from pydantic import BaseModel, EmailStr, field_serializer, ConfigDict +from typing import List, Dict, Optional, Any +from datetime import datetime, date +from enum import Enum + +class FeedbackType(str, Enum): + POSITIVE = "positive" + NEGATIVE = "negative" + RELEVANT = "relevant" + IRRELEVANT = "irrelevant" + +class MessageType(str, Enum): + SYSTEM_MESSAGE = "system_message" + USER_MESSAGE = "user_message" + ASSISTANT_MESSAGE = "assistant_message" + STATUS = "status" + +class MessageRole(str, Enum): + SYSTEM = "system" + USER = "user" + ASSISTANT = "assistant" + +class DocumentFeedbackBase(BaseModel): + document_id: int + feedback_type: Optional[FeedbackType] = None + notes: Optional[str] = None + + +class DocumentFeedback(DocumentFeedbackBase): + id: int + created_at: Optional[datetime] = None + updated_at: Optional[datetime] = None + + model_config = ConfigDict(ser_json_timedelta='iso8601', from_attributes=True) + + @field_serializer('created_at', 'updated_at') + def serialize_dt(self, dt: datetime, _info): + if dt: + return dt.timestamp() + return None + +class DocumentBase(BaseModel): + chunk_id: str + content: str + meta: Optional[Dict] = None + score: float + rerank_score: Optional[float] = None + title: Optional[str] = None + url: Optional[str] = None + +class DocumentCreate(DocumentBase): + pass + +class Document(DocumentBase): + id: int + created_at: Optional[datetime] = None + + model_config = ConfigDict(ser_json_timedelta='iso8601', from_attributes=True) + + @field_serializer('created_at') + def serialize_dt(self, dt: datetime, _info): + if dt: + return dt.timestamp() + return None + +class ChatDocument(BaseModel): + id: Optional[int] = None + chunk_id: str + score: float + rerank_score: Optional[float] = None + content: Optional[str] = None + title: Optional[str] = None + url: Optional[str] = None + feedback: Optional[DocumentFeedback] = None + + def __hash__(self): + return hash(self.id) + + def __eq__(self, other): + if not isinstance(other, ChatDocument): + return False + return self.id == other.id + + +class MessageFeedbackBase(BaseModel): + message_id: int + feedback_type: Optional[FeedbackType] = None + notes: Optional[str] = None + + +class MessageFeedback(MessageFeedbackBase): + id: int + created_at: Optional[datetime] = None + updated_at: Optional[datetime] = None + + model_config = ConfigDict(ser_json_timedelta='iso8601', from_attributes=True) + + @field_serializer('created_at', 'updated_at') + def serialize_dt(self, dt: datetime, _info): + if dt: + return dt.timestamp() + return None + +class MessageFeedbackCreate(MessageFeedbackBase): + pass + + +class MessageFeedbackUpdate(MessageFeedbackBase): + pass + + +class Location(BaseModel): + id: str + name: str + type: str + + +class SearchFilter(BaseModel): + locations: Optional[List[Location]] = [] + date_range: Optional[List[datetime]] = [] + rewrite_query: bool = True
+ + @field_serializer('date_range') + def serialize_dt(self, date_range: List[datetime], _info): + if date_range: + return [dt.strftime("%Y-%m-%d") for dt in date_range] + return None + +class ChatMessage(BaseModel): + id: Optional[int] = None + role: MessageRole + message_type: Optional[MessageType] = None + sequence: Optional[int] = 0 + content: str + formatted_content: Optional[str] = None + user_query: Optional[str] = None + rewritten_query_for_vector_base: Optional[str] = None + rewritten_query_for_llm: Optional[str] = None + feedback: Optional[MessageFeedback] = None + documents: Optional[List[ChatDocument]] = [] + search_filters: Optional[SearchFilter] = None + + def get_param(self, param_name: str) -> Any: + """ + Dynamically get the value of a parameter by its name. + + Args: + param_name (str): The name of the parameter to retrieve. + + Returns: + Any: The value of the parameter. + + Raises: + ValueError: If the parameter does not exist in the model. + """ + if param_name not in self.model_fields: + raise ValueError(f"Parameter '{param_name}' does not exist in the model.") + + if param_name == "formatted_content": + # Fall back to the raw content when no formatted content is set + if self.formatted_content: + return self.formatted_content + return self.content + + return getattr(self, param_name) + + +class ChatRequest(BaseModel): + content: str + + +class SessionBase(BaseModel): + name: Optional[str] = None + messages: List[ChatMessage] = [] + +class SessionCreate(SessionBase): + pass + + +class SessionUpdate(SessionBase): + pass + + +class Session(SessionBase): + id: str + created_at: Optional[datetime] = None + updated_at: Optional[datetime] = None + + model_config = ConfigDict(ser_json_timedelta='iso8601', from_attributes=True) + + @field_serializer('created_at', 'updated_at') + def serialize_dt(self, dt: datetime, _info): + if dt: + return dt.timestamp() + return None + +class FeedbackBase(BaseModel): + id: int + created_at: datetime + 
session: Optional[Session] = None + question: str + name: Optional[str] = None + email: Optional[str] = None + + model_config = ConfigDict(ser_json_timedelta='iso8601', from_attributes=True) + + @field_serializer('created_at') + def serialize_dt(self, dt: datetime, _info): + return dt.timestamp() + + +class FeedbackCreateRequest(BaseModel): + question: str + name: Optional[str] = None + email: Optional[str] = None + + +class FeedbackCreate(BaseModel): + question: str + name: Optional[str] = None + email: Optional[str] = None + + +class SessionFeedbackCreate(BaseModel): + session_id: str + question: str + name: Optional[str] = None + email: Optional[str] = None + + +class MessageFeedbackTypeRequest(BaseModel): + feedback_type: FeedbackType + + +class MessageFeedbackNotesRequest(BaseModel): + notes: str + + +class DocumentFeedbackTypeRequest(BaseModel): + feedback_type: FeedbackType + + +class DocumentFeedbackNotesRequest(BaseModel): + notes: str + + +class DocumentFeedbackCreate(DocumentFeedbackBase): + pass + + +class DocumentFeedbackUpdate(DocumentFeedbackBase): + pass diff --git a/chat/backend/app/services/__init__.py b/chat/backend/app/services/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/chat/backend/app/services/base_llm_service.py b/chat/backend/app/services/base_llm_service.py new file mode 100644 index 0000000..fc93581 --- /dev/null +++ b/chat/backend/app/services/base_llm_service.py @@ -0,0 +1,233 @@ +from abc import ABC, abstractmethod +from typing import List, Dict, Generator +from ..text_utils import get_formatted_current_date_english +from ..schemas import ChatMessage, MessageRole, MessageType, SearchFilter + +HUMAN_READABLE_SOURCES = { + "openbesluitvorming": "Raadsstuk of bijlage", + "poliflw": "Politiek nieuwsbericht", + # "openspending": "Begrotingsdata", + "woogle": "Woo-verzoek", + "obk": "Officiële bekendmaking", + "cvdr": "Lokale wet- en regelgeving", + "oor": "Rapport", +} + +class BaseLLMService(ABC): + 
KNOWLEDGE_CUTOFF_DATE = "October 10, 2024" + + RAG_SYSTEM_MESSAGE=''' + +## Task and Context + +You are Bron chat, an extremely capable large language model developed by Open State Foundation and the SvdJ Incubator. You receive instructions programmatically via an API, which you follow to the best of your ability. Your users are journalists and researchers based in the Netherlands. You are provided with government documents and are asked to answer questions based on these documents. + +Bron contains 3.5 million open government documents from various Dutch government agencies and organizations, ranging from 2010 to {year}. Today's date is {date}. + +The document categories and their corresponding datasets are: + +- Raadsstukken en bijlages +- Politiek nieuwsberichten +- Woo-verzoeken +- Officiële bekendmakingen +- Rapporten +- Lokale wet- en regelgeving + +## Style Guide + +1. Always answer in the same language as the query. When in doubt, use Dutch. +2. Add two new lines before the start of a list. +3. Formulate your answers in the style of a journalist. +4. When making factual statements, always cite the source document(s) that provided the information. +5. If the answer is not specifically found in the context, prefer to answer "Ik heb het antwoord niet kunnen vinden.", or the same answer in the same language as the query, instead of guessing. +6. When asked about the present or time-sensitive information, qualify your answer with the publication date of the most recent document and state that you cannot provide information about events after that date. +7. Only if asked about Bron chat, this tool, service, the Bron corpus, or the source of the documents, use information about Bron chat from this system message to write a response, and ignore any other context. + +''' +# 7. Review the latest publication date of the retrieved documents and mention this date in your answer. +# 7. 
If you cannot find any documents supporting a factual answer to the question, suggest that the user review the Bron Gids, which suggests resources and organizations that might be able to help. + + CHAT_NAME_SYSTEM_MESSAGE=''' + +## Task and Context + +You will be provided with a query. Your job is to turn this query into a concise and descriptive title for an AI chatbot session. + +## Style Guide + +Always create a short and descriptive title of five words in the same language as the query. When in doubt, use Dutch. Don't use any special characters or punctuation. + +''' + + QUERY_REWRITE_SYSTEM_MESSAGE = ''' + +## Task and Context + +You are a query rewriter specializing in Dutch search queries for government documents. Your task is to enhance the user's query for use in hybrid vector and BM25 retrieval. + +## Instructions + +1. Maintain the original intent of the query. +2. Keep the query concise and focused. +3. Language: Write the query in Dutch. +4. If you're unsure about how to rewrite the query, just return the original query. +5. Output Format: Provide only the rewritten query without any explanations or additional text. + +## Examples + +### Example 1 + +Query: "Wat zijn de regels voor zonnepanelen?" +Rewritten query: "regels zonnepanelen" + +### Example 2 + +Query: "Ik ben op zoek naar documenten over klimaatbeleid in gemeente Amsterdam" +Rewritten query: "klimaatbeleid gemeente amsterdam" + +### Example 3 + +Query: "Ik ben op zoek naar rapporten over klimaatbeleid in gemeente Amsterdam" +Rewritten query: "rapport klimaatbeleid gemeente amsterdam" + +''' + + QUERY_REWRITE_SYSTEM_MESSAGE_WITH_HISTORY_FOR_DB = ''' + +## Task and Context + +You are a query rewriter specializing in Dutch search queries for government documents. Your task is to enhance the user's latest query by incorporating relevant context from their previous queries when appropriate. The rewritten query will be used for hybrid vector and BM25 retrieval. + +## Instructions + +1. 
Focus on the Latest Query: Concentrate on the user's most recent query. +2. Analyze Previous Queries: + - If the latest query is a follow-up (examples 1, 2 and 3): + - Incorporate essential context from previous queries to improve search results. + - Maintain the original intent of the latest query. + - If the latest query is new (examples 4 and 5): + - Do not add context from previous queries. + - Keep the query concise and focused. +3. Language: Write the query in Dutch. +4. Output Format: Provide only the rewritten query without any explanations or additional text. +5. If you're unsure about the context, or unsure about the new query, just return the new query. + +## Example 1 + +User query 1: "Wat zijn de regels voor zonnepanelen?" +New query: "En wat kost de vergunning?" +Rewritten query: "kosten vergunning zonnepanelen" + +## Example 2 + +User query 1: "Wat zijn de regels voor zonnepanelen?" +New query: "En in gemeente Amsterdam?" +Rewritten query: "regels zonnepanelen gemeente Amsterdam" + +## Example 3 + +User query 1: "Wat zijn de regels voor zonnepanelen?" +User query 2: "En in Amsterdam?" +New query: "En in Almere?" +Rewritten query: "regels zonnepanelen Almere" + +## Example 4 + +User query 1: "Wat zijn de regels voor zonnepanelen?" +New query: "Welke documenten zijn er over klimaatverandering?" +Rewritten query: "klimaatverandering" + +## Example 5 + +User query 1: "Wat zijn de regels voor zonnepanelen?" +User query 2: "En in Amsterdam?" +New query: "Welke documenten zijn er over klimaat in gemeente Almere?"
+Rewritten query: "klimaatbeleid Almere" + + ''' + + @abstractmethod + def chat_stream(self, messages: list[ChatMessage], documents: list) -> Generator: + pass + + @abstractmethod + def rerank_documents(self, query: str, documents: list, top_n: int = 20, return_documents: bool = True) -> Dict: + pass + + @abstractmethod + def generate_dense_embedding(self, query: str) -> List[float]: + pass + + @abstractmethod + def create_chat_session_name(self, user_message: ChatMessage) -> str: + pass + + @abstractmethod + def rewrite_query_for_vector_base(self, message: ChatMessage) -> str: + pass + + @abstractmethod + def rewrite_query_for_llm(self, message: ChatMessage) -> str: + pass + + @abstractmethod + def rewrite_query_with_history_for_vector_base(self, message: ChatMessage, messages: list[ChatMessage]) -> str: + pass + + @staticmethod + def get_human_readable_source(source: str) -> str: + return HUMAN_READABLE_SOURCES.get(source, source) + + def get_user_message(self, content: str, search_filters: SearchFilter): + return ChatMessage( + role=MessageRole.USER, + message_type=MessageType.USER_MESSAGE, + content=content, + search_filters=search_filters + ) + + def get_rag_system_message(self): + formatted_date = get_formatted_current_date_english() + return ChatMessage( + role=MessageRole.SYSTEM, + message_type=MessageType.SYSTEM_MESSAGE, + content=self.RAG_SYSTEM_MESSAGE.format( + date=formatted_date, + year=self.KNOWLEDGE_CUTOFF_DATE + ) + ) + + def _get_chat_name_system_message(self): + return ChatMessage( + role=MessageRole.SYSTEM, + message_type=MessageType.SYSTEM_MESSAGE, + content=self.CHAT_NAME_SYSTEM_MESSAGE + ) + + def _truncate_chat_name(self, name: str, max_length: int = 250) -> str: + """ + Truncate chat name to ensure it fits within 
database limits. + Leaves some buffer below the 255 character limit. + """ + if len(name) <= max_length: + return name + + # Try to truncate at a natural break point + truncated = name[:max_length] + last_break = max( + truncated.rfind('.'), + truncated.rfind('?'), + truncated.rfind('!'), + truncated.rfind('\n') + ) + + if last_break > max_length // 2: + return truncated[:last_break + 1].strip() + return truncated.strip() \ No newline at end of file diff --git a/chat/backend/app/services/bron_service.py b/chat/backend/app/services/bron_service.py new file mode 100644 index 0000000..abcd7ba --- /dev/null +++ b/chat/backend/app/services/bron_service.py @@ -0,0 +1,115 @@ +import time +from typing import List +import logging +from ..config import settings +import httpx +from fastapi import HTTPException +from ..schemas import Location +# Set up logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +# Cache variables +locations_cache = None +locations_cache_time = 0 +CACHE_EXPIRATION = 86400 # 24 hours in seconds + +class BronService: + def __init__(self): + pass + + async def _fetch_locations_data(self): + """Fetch locations data from the external API""" + locations_url = 'https://api.bron.live/locations/search?includes=id,name,kind&limit=999' + response = None + + try: + async with httpx.AsyncClient() as client: + response = await client.get(locations_url) + response.raise_for_status() # Raise an error for bad responses + + except httpx.HTTPStatusError as http_error: + logger.error('HTTP error occurred: %s', http_error) + raise HTTPException( + status_code=http_error.response.status_code, + detail=f"HTTP error occurred: {http_error.response.text}" + ) + except httpx.RequestError as request_error: + logger.error('Request error occurred: %s', request_error) + raise HTTPException( + status_code=500, + detail="Error occurred while making the request to the external API." 
+ ) + + return response + + async def get_locations(self) -> List[Location]: + """Return a list of available locations""" + global locations_cache, locations_cache_time + + # Check if cache is still valid + if locations_cache is not None and (time.time() - locations_cache_time) < CACHE_EXPIRATION: + return locations_cache + + locations_response = await self._fetch_locations_data() # Fetch fresh data from the external API + + hits = locations_response.json().get('hits', {}).get('hits', []) + + # Transform the data into the desired format + locations = [] + for hit in hits: + source = hit.get('_source', {}) + + # Check if 'id' key exists in the source + if 'id' not in source: + logger.warning('Missing expected key "id" in source: %s', source) + continue # Skip this item if 'id' is missing + + # Skip items where id contains 'type:' or '*' + if 'type:' in source['id'] or '*' in source['id']: + continue + + # Use get to safely access 'kind' + kind = source.get('kind', 'ministry') # Default to 'ministry' if 'kind' is not present + + # Map the kind to a human-readable label, defaulting to 'Ministerie' + if kind == 'municipality': + kind_label = 'Gemeente' + elif kind == 'province': + kind_label = 'Provincie' + else: + kind_label = 'Ministerie' + + # Skip items without a name + if source.get('name', '') == '': + continue + + locations.append( + Location( + id=source['id'], + name=source['name'], + type=kind_label + ) + ) + + # Cache the transformed list of locations + locations_cache = locations + locations_cache_time = time.time() + + return locations_cache + + async def get_locations_by_ids(self, location_ids: List[str]) -> List[Location]: + if location_ids is None or len(location_ids) == 0: + return [] + + # Use the cached list if available, otherwise fetch fresh data + locations = await self.get_locations() + + # Create a lookup dictionary for faster access + location_map = {location.id: location for location in locations}
+ + # Get Location objects using dictionary lookup + filtered_locations = [location_map[loc_id] for loc_id in location_ids if loc_id in location_map] + + return filtered_locations diff --git a/chat/backend/app/services/cohere_service.py b/chat/backend/app/services/cohere_service.py new file mode 100644 index 0000000..de5b0cb --- /dev/null +++ b/chat/backend/app/services/cohere_service.py @@ -0,0 +1,268 @@ +from ..config import settings +from cohere import ClientV2 as CohereClient +import logging +from ..schemas import ChatMessage, MessageRole, MessageType +from .base_llm_service import BaseLLMService +from typing import Generator +# Set up logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +class CohereService(BaseLLMService): + def __init__(self): + self.client = CohereClient(api_key=settings.COHERE_API_KEY) + + def chat_stream(self, messages: list[ChatMessage], documents: list) -> Generator: + logger.info(f"Starting chat stream with {len(messages)} messages and {len(documents)} documents...") + + # Filter out status messages and validate message content + system_and_user_messages = [] + for msg in messages: + if msg.message_type == MessageType.SYSTEM_MESSAGE: + system_and_user_messages.append({ + 'role': msg.role, + 'content': msg.content + }) + elif msg.message_type == MessageType.USER_MESSAGE: + system_and_user_messages.append({ + 'role': msg.role, + 'content': msg.rewritten_query_for_llm + }) + + logger.info(f"Filtered to {len(system_and_user_messages)} valid non-status messages") + + max_retries = 3 + for attempt in range(max_retries): + try: + return self.client.chat_stream( + model="command-r-08-2024", + messages=system_and_user_messages, + documents=documents + ) + except Exception as e: + logger.error(f"Error in chat stream (attempt {attempt + 1}): {e}") + if attempt < max_retries - 1: + logger.info("Retrying...") + else: + logger.error("Max retries reached. 
Raising exception.") + raise + + + def rerank_documents(self, query: str, documents: list, top_n: int = 20, return_documents: bool = True): + logger.info(f"Reranking {len(documents)} documents, and returning {top_n} documents...") + + max_retries = 3 + for attempt in range(max_retries): + try: + return self.client.rerank( + query=query, + documents=documents, + top_n=top_n, + model=settings.COHERE_RERANK_MODEL, + return_documents=return_documents + ) + except Exception as e: + logger.error(f"Error reranking documents (attempt {attempt + 1}): {e}") + if attempt < max_retries - 1: + logger.info("Retrying...") + else: + logger.error("Max retries reached. Raising exception.") + raise + + def generate_dense_embedding(self, query: str): + logger.info(f"Generating dense embedding for query: {query}") + + # HACK TO FIX COHERE'S DIACRITIC ISSUES + # Replace diacritics with base characters using unicode normalization + import unicodedata + query = ''.join(c for c in unicodedata.normalize('NFKD', query) + if not unicodedata.combining(c)) + # END HACK + + max_retries = 3 + for attempt in range(max_retries): + try: + if settings.EMBEDDING_QUANTIZATION == "float": + embeddings = self.client.embed( + texts=[query], + input_type="search_query", + model=settings.COHERE_EMBED_MODEL, + embedding_types=["float"] + ).embeddings.float[0] + + logger.info("Generated dense embeddings") + return embeddings + elif settings.EMBEDDING_QUANTIZATION == "uint8": + embeddings = self.client.embed( + texts=[query], + input_type="search_query", + model=settings.COHERE_EMBED_MODEL, + embedding_types=["uint8"] + ).embeddings.uint8[0] + + logger.info("Generated dense embeddings") + return embeddings + else: + embeddings = self.client.embed( + texts=[query], + input_type="search_query", + model=settings.COHERE_EMBED_MODEL, + embedding_types=["float"] + ).embeddings.float[0] + + logger.info("Generated dense embeddings") + return embeddings + except Exception as e: + logger.error(f"Error generating dense 
embedding (attempt {attempt + 1}): {e}") + if attempt < max_retries - 1: + logger.info("Retrying...") + else: + logger.error("Max retries reached. Raising exception.") + raise + + def create_chat_session_name(self, user_message: ChatMessage): + logger.info(f"Creating chat session name for query: {user_message.content}, using rewritten query: {user_message.rewritten_query_for_llm}") + + system_message = self._get_chat_name_system_message() + + response = None + + max_retries = 3 + for attempt in range(max_retries): + try: + response = self.client.chat( + model="command-r-08-2024", + messages=[ + { + 'role': system_message.role, + 'content': system_message.content + }, + { + 'role': user_message.role, + 'content': user_message.rewritten_query_for_llm + } + ], + temperature=0.1 + ) + break # Success, stop retrying + except Exception as e: + logger.error(f"Error creating chat session name (attempt {attempt + 1}): {e}") + if attempt < max_retries - 1: + logger.info("Retrying...") + else: + logger.error("Max retries reached. Raising exception.") + raise + + if response: + name = response.message.content[0].text + return self._truncate_chat_name(name) + else: + return None + + def rewrite_query_with_history_for_vector_base(self, message: ChatMessage, messages: list[ChatMessage]) -> str: + logger.info("Rewriting query based on chat history...") + + # Keep only user messages and use up to the last 6 for context + chat_history = [msg for msg in messages if msg.role == MessageRole.USER][-6:] + + system_message = ChatMessage( + role="system", + content=self.QUERY_REWRITE_SYSTEM_MESSAGE_WITH_HISTORY_FOR_DB + ) + + # Format chat history and new query + history_context = "\n".join([ + f"User query {i}: {msg.user_query}" for i, msg in enumerate(chat_history, start=1) + ]) + user_message = ChatMessage( + role="user", + content=f"""{history_context} +New query: {message.user_query}""" + ) + + max_retries = 3 + for attempt in range(max_retries): + try: + 
response = self.client.chat( + model="command-r-08-2024", + messages=[ + { + 'role': system_message.role, + 'content': system_message.content + }, + { + 'role': user_message.role, + 'content': user_message.content + } + ], + temperature=0.1 + ) + + rewritten_query = response.message.content[0].text + logger.info(f"Original query: {message.user_query}") + logger.info(f"Rewritten query: {rewritten_query}") + return rewritten_query + except Exception as e: + logger.error(f"Error rewriting query with history (attempt {attempt + 1}): {e}") + if attempt < max_retries - 1: + logger.info("Retrying...") + else: + logger.error("Max retries reached. Raising exception.") + raise + + return message.user_query # Fall back to original query if rewriting fails + + def rewrite_query_for_llm(self, message: ChatMessage) -> str: + rewritten_query = message.content + # Check if any location names are already in the content + if message.search_filters.locations: + location_names = [location.name for location in message.search_filters.locations] + if not any(loc_name in message.content for loc_name in location_names): + locations_str = ", of ".join(location_names) + rewritten_query = f"{message.content} in {locations_str}" + + # if message.search_filters.date_range: + # rewritten_query += f" van {message.search_filters.date_range[0].strftime('%d-%m-%Y')} tot {message.search_filters.date_range[1].strftime('%d-%m-%Y')}" + + return rewritten_query + + def rewrite_query_for_vector_base(self, message: ChatMessage) -> str: + system_message = ChatMessage( + role="system", + content=self.QUERY_REWRITE_SYSTEM_MESSAGE + ) + user_message = ChatMessage( + role="user", + content=f"""Query: {message.user_query}""" + ) + + max_retries = 3 + for attempt in range(max_retries): + try: + response = self.client.chat( + model="command-r", + messages=[ + { + 'role': system_message.role, + 'content': system_message.content + }, + { + 'role': user_message.role, + 'content': user_message.content + } + ], + 
temperature=0.1 + ) + + rewritten_query = response.message.content[0].text + logger.info(f"Original query: {message.user_query}") + logger.info(f"Rewritten query: {rewritten_query}") + return rewritten_query + except Exception as e: + logger.error(f"Error rewriting query for vector base (attempt {attempt + 1}): {e}") + if attempt < max_retries - 1: + logger.info("Retrying...") + else: + logger.error("Max retries reached. Raising exception.") + raise + return message.content # Fall back to original query if rewriting fails \ No newline at end of file diff --git a/chat/backend/app/services/database_service.py b/chat/backend/app/services/database_service.py new file mode 100644 index 0000000..2f0443c --- /dev/null +++ b/chat/backend/app/services/database_service.py @@ -0,0 +1,7 @@ +from fastapi import Depends +from sqlalchemy.orm import Session +from ..database import get_db + +class DatabaseService: + def __init__(self, db: Session = Depends(get_db)): + self.db = db diff --git a/chat/backend/app/services/feedback_service.py b/chat/backend/app/services/feedback_service.py new file mode 100644 index 0000000..823f5e7 --- /dev/null +++ b/chat/backend/app/services/feedback_service.py @@ -0,0 +1,115 @@ +from uuid import UUID +from typing import Optional +from sqlalchemy import select, update, insert +from app.models import MessageFeedback, SessionFeedback, Document, DocumentFeedback +from app.schemas import MessageFeedbackCreate, MessageFeedbackUpdate, SessionFeedbackCreate, DocumentFeedbackCreate, DocumentFeedbackUpdate, FeedbackCreate +from .database_service import DatabaseService +from fastapi import HTTPException + + +class FeedbackService(DatabaseService): + def __init__(self, db): + super().__init__(db) + + def create_message_feedback(self, feedback: MessageFeedbackCreate) -> dict: + new_message_feedback = MessageFeedback( + message_id=feedback.message_id, + feedback_type=feedback.feedback_type + ) + + self.db.add(new_message_feedback) + self.db.commit() + 
self.db.refresh(new_message_feedback) + + return new_message_feedback + + def update_message_feedback( + self, + feedback: MessageFeedbackUpdate + ) -> dict: + db_message_feedback = self.get_message_feedback(feedback.message_id) + + if db_message_feedback is None: + raise HTTPException(status_code=404, detail="Message feedback not found") + + if feedback.feedback_type is not None: + db_message_feedback.feedback_type = feedback.feedback_type + + if feedback.notes is not None: + db_message_feedback.notes = feedback.notes + + self.db.commit() + self.db.refresh(db_message_feedback) + + return db_message_feedback + + def get_message_feedback(self, message_id: int) -> dict: + return self.db.query(MessageFeedback).filter(MessageFeedback.message_id == message_id).first() + + def get_session_feedback(self, session_id: int) -> dict: + return self.db.query(SessionFeedback).filter(SessionFeedback.session_id == session_id).first() + + def create_session_feedback(self, feedback: SessionFeedbackCreate) -> dict: + new_session_feedback = SessionFeedback( + session_id=feedback.session_id, + question=feedback.question, + name=feedback.name, + email=feedback.email + ) + + self.db.add(new_session_feedback) + self.db.commit() + self.db.refresh(new_session_feedback) + + return new_session_feedback + + def create_feedback(self, feedback: FeedbackCreate) -> dict: + new_feedback = SessionFeedback( + question=feedback.question, + name=feedback.name, + email=feedback.email + ) + + self.db.add(new_feedback) + self.db.commit() + self.db.refresh(new_feedback) + + return new_feedback + + def create_document_feedback(self, document_feedback: DocumentFeedbackCreate) -> dict: + """Create new document feedback""" + new_document_feedback = DocumentFeedback( + document_id=document_feedback.document_id, + feedback_type=document_feedback.feedback_type + ) + + self.db.add(new_document_feedback) + self.db.commit() + self.db.refresh(new_document_feedback) + + return new_document_feedback + + def 
update_document_feedback( + self, + feedback: DocumentFeedbackUpdate + ) -> dict: + """Update existing document feedback""" + document_feedback = self.get_document_feedback(feedback.document_id) + + if document_feedback is None: + raise HTTPException(status_code=404, detail="Document feedback not found") + + if feedback.feedback_type is not None: + document_feedback.feedback_type = feedback.feedback_type + + if feedback.notes is not None: + document_feedback.notes = feedback.notes + + self.db.commit() + self.db.refresh(document_feedback) + + return document_feedback + + def get_document_feedback(self, document_id: int) -> dict: + """Get document feedback by document ID""" + return self.db.query(DocumentFeedback).filter(DocumentFeedback.document_id == document_id).first() diff --git a/chat/backend/app/services/litellm_service.py b/chat/backend/app/services/litellm_service.py new file mode 100644 index 0000000..d7cc5e4 --- /dev/null +++ b/chat/backend/app/services/litellm_service.py @@ -0,0 +1,199 @@ +import os +from ..config import settings +from litellm import completion, embedding, rerank +from litellm.exceptions import APIConnectionError, Timeout, APIError +import logging +from ..text_utils import get_formatted_current_date_english, get_formatted_current_year +from ..schemas import ChatMessage +from .base_llm_service import BaseLLMService +from typing import Generator +# Set up logging +logging.basicConfig(level=logging.DEBUG) +logger = logging.getLogger(__name__) + +class LiteLLMService(BaseLLMService): + def __init__(self): + os.environ["COHERE_API_KEY"] = settings.COHERE_API_KEY + + def chat_stream(self, messages: list[ChatMessage], documents: list) -> Generator: + logger.info("Starting chat stream...") + os.environ["COHERE_API_KEY"] = settings.COHERE_API_KEY + + # Flatten the documents structure + flattened_docs = [{ + 'id': doc['id'], + 'title': doc['data']['title'], + 'snippet': doc['data']['snippet'], + 'publication date': doc['data']['publication date'], 
+            'municipality': doc['data']['municipality'],
+            'source': doc['data']['source'],
+            'type': doc['data']['type']
+        } for doc in documents]
+
+        system_prompt = messages[0].content
+
+        # Log the messages and documents being sent
+        logger.info(f"System_prompt being sent: {system_prompt}")
+        logger.info(f"Messages being sent: {messages}")
+        if flattened_docs:
+            logger.info(f"Documents being sent: {flattened_docs[0]}")
+
+        try:
+            return completion(
+                model="cohere/command-r-plus-08-2024",
+                messages=[{
+                        'role': message.role,
+                        'content': message.get_param("formatted_content")
+                    } for message in messages
+                ],
+                documents=flattened_docs,
+                citation_quality="accurate",
+                stream=True
+            )
+
+        except GeneratorExit:
+            logger.info("Chat stream generator closed")
+            return
+        except APIConnectionError as e:
+            logger.error(f'Chat stream connection failed: {e}')
+            raise
+        except Timeout as e:
+            logger.error(f'Chat stream request timed out: {e}')
+            raise
+        except APIError as e:
+            logger.error(f'Chat stream API error occurred: {e}')
+            raise
+
+    def rerank_documents(self, query: str, documents: list, top_n: int = 20, return_documents: bool = True):
+        # Accept top_n/return_documents so callers (e.g. QdrantService) can pass
+        # settings.RERANK_DOC_RETRIEVE_LIMIT instead of hitting a TypeError
+        logger.info("Reranking documents...")
+        try:
+            response = rerank(
+                query=query,
+                documents=documents,
+                top_n=top_n,
+                model=f"cohere/{settings.COHERE_RERANK_MODEL}",
+                return_documents=return_documents
+            )
+
+            # Transform response to match expected format
+            if hasattr(response, 'results'):
+                # Create object with results attribute containing list of results
+                class RerankedResult:
+                    def __init__(self, index, relevance_score):
+                        self.index = index
+                        self.relevance_score = relevance_score
+
+                transformed_response = type('RerankedResponse', (), {
+                    'results': [
+                        RerankedResult(
+                            index=result['index'],
+                            relevance_score=result['relevance_score']
+                        ) for result in response.results
+                    ]
+                })
+
+                return transformed_response
+
+            return response
+        except APIConnectionError as e:
+            logger.error(f'Reranking connection failed: {e}')
+        except Timeout as e:
+            logger.error(f'Reranking request timed out: {e}')
+        except APIError as e:
+            logger.error(f'Reranking API error occurred: {e}')
+
+    def generate_dense_embedding(self, query: str):
+        try:
+            embedding_response = embedding(
+                input=[query],
+                input_type="search_query",
+                model=f"cohere/{settings.COHERE_EMBED_MODEL}"
+            )
+
+            return embedding_response.data[0]['embedding']
+        except APIConnectionError as e:
+            logger.error(f'Embedding connection failed: {e}')
+        except Timeout as e:
+            logger.error(f'Embedding request timed out: {e}')
+        except APIError as e:
+            logger.error(f'Embedding API error occurred: {e}')
+
+    def create_chat_session_name(self, user_message: ChatMessage):
+        logger.info(f"Creating chat session name for query: {user_message.content}, using rewritten query: {user_message.formatted_content}")
+
+        system_message = self._get_chat_name_system_message()
+        messages = [system_message, user_message]
+        response = None
+
+        try:
+            response = completion(
+                model="cohere/command-r",
+                messages=[{
+                        'role': message.role,
+                        'content': message.get_param("formatted_content")
+                    } for message in messages
+                ],
+            )
+        except APIConnectionError as e:
+            logger.error(f'Chat name connection failed: {e}')
+        except Timeout as e:
+            logger.error(f'Chat name request timed out: {e}')
+        except APIError as e:
+            logger.error(f'Chat name API error occurred: {e}')
+
+        if response:
+            # litellm returns an OpenAI-style ModelResponse, so the generated
+            # text lives in choices[0].message.content (not response.message)
+            name = response.choices[0].message.content
+            return self._truncate_chat_name(name)
+        else:
+            return None
+
+    def rewrite_query_for_vector_base(self, new_message: ChatMessage) -> str:
+        logger.info("Rewriting query...")
+        pass
+
+    def rewrite_query_for_llm(self, new_message: ChatMessage) -> str:
+        logger.info("Rewriting query...")
+        pass
+
+    def rewrite_query_with_history_for_vector_base(self, new_message: ChatMessage, messages: list[ChatMessage]) -> str:
+        logger.info("Rewriting query based on chat history...")
+
+        # Filter out system messages and get last few messages for context
+        # Get up to last 6 messages, but works with fewer messages too
+        chat_history = [msg for msg in messages if
msg.role != "system"][-6:]
+
+        system_message = ChatMessage(
+            role="system",
+            content=self.QUERY_REWRITE_SYSTEM_MESSAGE
+        )
+
+        # Format chat history and new query
+        history_context = "\n".join([
+            f"{msg.role}: {msg.get_param('formatted_content')}" for msg in chat_history
+        ])
+        user_message = ChatMessage(
+            role="user",
+            content=f"""Chat history:
+            {history_context}
+
+            New query: {new_message.content}
+
+            Rewrite this query to include relevant context from the chat history."""
+        )
+
+        try:
+            response = completion(
+                model="cohere/command-r",
+                messages=[{
+                    'role': msg.role,
+                    'content': msg.content
+                } for msg in [system_message, user_message]],
+                temperature=0.1
+            )
+
+            # litellm returns an OpenAI-style ModelResponse; read the text from
+            # choices[0].message.content rather than the Cohere SDK's response.message
+            rewritten_query = response.choices[0].message.content
+            logger.info(f"Original query: {new_message.content}")
+            logger.info(f"Rewritten query: {rewritten_query}")
+            return rewritten_query
+        except Exception as e:
+            logger.error(f"Error rewriting query: {e}")
+            return new_message.content  # Fall back to original query if rewriting fails
\ No newline at end of file
diff --git a/chat/backend/app/services/qdrant_pool.py b/chat/backend/app/services/qdrant_pool.py
new file mode 100644
index 0000000..1304eca
--- /dev/null
+++ b/chat/backend/app/services/qdrant_pool.py
@@ -0,0 +1,112 @@
+from ..config import settings
+from qdrant_client import QdrantClient
+import logging
+import threading
+import queue
+from contextlib import contextmanager
+from datetime import datetime
+import time
+from typing import Optional
+
+# Set up logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+class QdrantConnectionPool:
+    _instance = None
+    _lock = threading.Lock()
+    _pool = None
+    _last_health_check = None
+    _health_check_interval = 60  # seconds
+    _timeout = settings.QDRANT_TIMEOUT  # Ensure this is set to a higher value if needed
+
+    def __init__(self):
+        self._pool = queue.Queue(maxsize=settings.QDRANT_POOL_SIZE)
+        self._active_connections = 0
+        self._total_connections = 0
+
self._failed_connections = 0 + self.initialize_pool() + + def initialize_pool(self): + """Initialize the connection pool""" + for _ in range(settings.QDRANT_POOL_SIZE): + self._add_connection() + + def _add_connection(self) -> Optional[QdrantClient]: + """Create and add a new connection to the pool""" + try: + client = QdrantClient( + host=settings.QDRANT_HOST, + port=settings.QDRANT_PORT, + timeout=self._timeout, + prefer_grpc=True + ) + # Test connection + client.get_collections() + self._pool.put(client) + self._total_connections += 1 + return client + except Exception as e: + logger.error(f"Failed to create Qdrant connection: {e}") + self._failed_connections += 1 + return None + + def _check_connection_health(self, client: QdrantClient) -> bool: + """Check if connection is healthy""" + try: + client.get_collections() + return True + except Exception: + return False + + def _health_check(self): + """Perform health check on all connections""" + if (not self._last_health_check or + time.time() - self._last_health_check > self._health_check_interval): + with self._lock: + size = self._pool.qsize() + for _ in range(size): + client = self._pool.get() + if not self._check_connection_health(client): + logger.warning("Unhealthy connection detected, creating new one") + client.close() + client = self._add_connection() + if client: + self._pool.put(client) + self._last_health_check = time.time() + + @classmethod + def get_instance(cls): + """Get singleton instance with double-checked locking""" + if cls._instance is None: + with cls._lock: + if cls._instance is None: + cls._instance = cls() + return cls._instance + + @contextmanager + def get_client(self): + """Get client from pool with context manager""" + client = None + try: + self._health_check() + client = self._pool.get(timeout=settings.QDRANT_POOL_TIMEOUT) + self._active_connections += 1 + yield client + except queue.Empty: + logger.error("Connection pool timeout - no available connections") + raise 
RuntimeError("No available database connections") + finally: + if client: + self._active_connections -= 1 + self._pool.put(client) + + def get_pool_stats(self): + """Get pool statistics""" + return { + "total_connections": self._total_connections, + "active_connections": self._active_connections, + "available_connections": self._pool.qsize(), + "failed_connections": self._failed_connections, + "last_health_check": self._last_health_check + } \ No newline at end of file diff --git a/chat/backend/app/services/qdrant_service.py b/chat/backend/app/services/qdrant_service.py new file mode 100644 index 0000000..c1e083e --- /dev/null +++ b/chat/backend/app/services/qdrant_service.py @@ -0,0 +1,419 @@ +from ..config import settings +from ..schemas import ChatMessage +from ..models import Session +from qdrant_client import QdrantClient, models +import logging +from typing import List, Dict, AsyncGenerator, Tuple +from markdown import markdown +import os +from ..services.base_llm_service import BaseLLMService +from ..text_utils import format_content +from fastembed.sparse import SparseTextEmbedding +from qdrant_client.http import models +from ..schemas import ChatDocument, Location +import threading +import queue +from contextlib import contextmanager +from datetime import datetime, date +import time +from typing import Optional +from .qdrant_pool import QdrantConnectionPool +from sklearn.metrics.pairwise import cosine_similarity +import numpy as np +import yaml +from ..text_utils import get_formatted_date_english +# Set up logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +class QdrantService: + DENSE_VECTORS_NAME = "text-dense" + SPARSE_VECTORS_NAME = "text-sparse" + + _sparse_document_embedder = None + _embedder_lock = threading.Lock() + # Adjust semaphore based on available CPU cores and workers + # Using (CPU cores * 2) as a good balance for concurrent embeddings + _query_semaphore = threading.BoundedSemaphore(16) + + # Add batch 
size control for optimal memory usage + BATCH_SIZE = 32 # Process embeddings in batches + + + def __init__(self, llm_service: BaseLLMService): + self.llm_service = llm_service + self.dense_model_name = settings.COHERE_EMBED_MODEL + self.sparse_model_name = settings.SPARSE_EMBED_MODEL + self.pool = QdrantConnectionPool.get_instance() + + @classmethod + def get_sparse_embedder(cls): + if cls._sparse_document_embedder is None: + with cls._embedder_lock: + if cls._sparse_document_embedder is None: + models_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'models') + try: + # Set num_threads based on CPU cores while leaving room for other operations + cls._sparse_document_embedder = SparseTextEmbedding( + cache_dir=models_dir, + model_name=settings.SPARSE_EMBED_MODEL, + num_threads=4 # Half of CPU cores for embedding + ) + except Exception as e: + logger.error(f"Failed to initialize sparse embedder: {e}") + raise + return cls._sparse_document_embedder + + + def generate_sparse_embedding(self, query: str): + try: + with self._query_semaphore: + sparse_vectors = self.get_sparse_embedder().query_embed(query) + return next(iter(sparse_vectors), None) + except Exception as e: + logger.error(f"Error generating sparse embedding: {e}") + return None + + def get_documents_by_ids(self, documents: List[ChatDocument]): + if not documents or len(documents) == 0: + logger.debug("No documents provided to retrieve") + return [] + + qdrant_document_chunk_ids = [] + for doc in documents: + if doc and doc.chunk_id: # Add null check + qdrant_document_chunk_ids.append(doc.chunk_id) + + if not qdrant_document_chunk_ids: + logger.warning("No valid document IDs found.") + return [] + + try: + with self.pool.get_client() as client: + qdrant_documents = client.retrieve( + collection_name=settings.QDRANT_COLLECTION, + ids=qdrant_document_chunk_ids, + with_payload=True, + ) + + # Convert records to the expected dictionary format + qdrant_documents_dicts = [{ + 'id': record.id, + 
'payload': record.payload, + 'rerank_score': 0.0 + } for record in qdrant_documents] + + return self._prepare_documents_with_scores_and_feedback(qdrant_documents_dicts, documents) + except Exception as e: + logger.error(f"Error retrieving documents from Qdrant using document IDs: {e}") + return [] + + def hybrid_search(self, query, locations: List[Location] = None, date_range: List[datetime] = None) -> List[Dict]: + + sparse_vector = self.generate_sparse_embedding(query) + + try: + # Generate dense embeddings + dense_vector = self.llm_service.generate_dense_embedding(query) + except Exception as e: + logger.error(f"Error creating dense vector from query using Cohere: {e}") + raise + + # Build filter conditions + filter_conditions = [] + + logger.debug("Adding location filters") + + if locations and len(locations) > 0: + filter_conditions.append( + models.FieldCondition( + key="meta.location", + match=models.MatchAny(any=[location.id for location in locations]) + ) + ) + + if date_range and len(date_range) == 2: + filter_conditions.append( + models.FieldCondition( + key="meta.published", + range=models.DatetimeRange( + gte=date_range[0].strftime("%Y-%m-%dT%H:%M:%SZ"), + lte=date_range[1].strftime("%Y-%m-%dT%H:%M:%SZ") + ) + ) + ) + + # Combine filters if any exist + search_filter = None + if filter_conditions: + search_filter = models.Filter( + must=filter_conditions, + ) + + logger.info(f"Retrieving documents from Qdrant for query using hybrid search: {query}, and filters: {search_filter}") + + try: + logger.info(f"Querying vector database with query: '{query}'") + with self.pool.get_client() as client: + qdrant_documents = client.query_points( + collection_name=settings.QDRANT_COLLECTION, + prefetch=[ + models.Prefetch( + query=models.SparseVector( + indices=sparse_vector.indices, + values=sparse_vector.values, + ), + using=self.SPARSE_VECTORS_NAME, + filter=search_filter, # Apply filter to sparse search + limit=settings.QDRANT_SPARSE_RETRIEVE_LIMIT + ), + 
+                    models.Prefetch(
+                        query=dense_vector,
+                        using=self.DENSE_VECTORS_NAME,
+                        filter=search_filter,  # Apply filter to dense search
+                        limit=settings.QDRANT_DENSE_RETRIEVE_LIMIT
+                    ),
+                ],
+                query=models.FusionQuery(fusion=models.Fusion.RRF),
+                limit=settings.QDRANT_HYBRID_RETRIEVE_LIMIT,
+                score_threshold=None,
+                with_payload=True,
+                with_vectors=True,
+                timeout=settings.QDRANT_HYBRID_SEARCH_TIMEOUT,  # Use the configured hybrid search timeout
+                ).points
+
+        except Exception as e:
+            logger.error(f"Error retrieving documents from Qdrant using hybrid search: {e}")
+            return None
+
+        if not qdrant_documents:
+            logger.warning("No documents found in Qdrant")
+
+        # Convert qdrant_document_candidates to a list of dictionaries
+        qdrant_documents_dicts = self._qdrant_documents_searched_to_dicts(qdrant_documents)
+
+        return qdrant_documents_dicts
+
+    def dense_vector_search(self, query):
+        logger.debug(f"Retrieving documents from Qdrant for query: {query}")
+
+        try:
+            # Generate dense embeddings
+            dense_vector = self.llm_service.generate_dense_embedding(query)
+        except Exception as e:
+            logger.error(f"Error creating dense vector from query using Cohere: {e}")
+            return None
+
+        try:
+            # pool.get_client() is a context manager, so the client must be
+            # borrowed with `with` instead of being called directly
+            with self.pool.get_client() as client:
+                qdrant_documents = client.search(
+                    query_vector=(self.DENSE_VECTORS_NAME, dense_vector),
+                    collection_name=settings.QDRANT_COLLECTION,
+                    limit=settings.QDRANT_DENSE_RETRIEVE_LIMIT
+                )
+
+            if not qdrant_documents:
+                logger.warning("No documents found in Qdrant")
+
+            return qdrant_documents
+        except Exception as e:
+            logger.error(f"Error retrieving documents from Qdrant using dense vector search: {e}")
+            return None
+
+    def _qdrant_documents_retrieved_to_dicts(self, qdrant_documents):
+        return [
+            {
+                'id': candidate.id,
+                'payload': candidate.payload
+            }
+            for candidate in qdrant_documents
+        ]
+
+    def _qdrant_documents_searched_to_dicts(self, qdrant_documents):
+        return [
+            {
+                'id': candidate.id,
+                'version': candidate.version,
+                'score': candidate.score,
+                'payload': candidate.payload,
+
'vector': candidate.vector
+            }
+            for candidate in qdrant_documents
+        ]
+
+    def retrieve_relevant_documents(self, query: str, locations: List[Location] = None, date_range: List[date] = None) -> List[Dict]:
+        logger.debug(f"Retrieving relevant documents for query: {query}")
+
+        # Step 1: Retrieve initial candidates with filters
+        qdrant_document_candidates = self.hybrid_search(query, locations, date_range)
+
+        # Check if qdrant_documents is None or empty
+        if not qdrant_document_candidates:
+            logger.warning("No documents retrieved from Qdrant")
+            return []
+
+        # Step 2: Get relevance scores
+        # Extract document metadata
+        qdrant_document_candidates_with_payload = [{
+            'Title': doc['payload']['meta']['title'],
+            'Location': doc['payload']['meta']['location_name'],
+            'Published': get_formatted_date_english(doc['payload']['meta']['published']),
+            'Document type': doc['payload']['meta']['type'],
+            'Data source': doc['payload']['meta']['source'],
+            'Document source': BaseLLMService.get_human_readable_source(doc['payload']['meta']['source']),
+            'Content': doc['payload']['content']
+        } for doc in qdrant_document_candidates]
+
+        document_candidates_yaml = [yaml.dump(doc, sort_keys=False) for doc in qdrant_document_candidates_with_payload]
+
+        logger.debug(f"Reranking:\n\n {document_candidates_yaml[0]}...")
+        reranked_documents = self.llm_service.rerank_documents(
+            query=query,
+            documents=document_candidates_yaml,
+            top_n=settings.RERANK_DOC_RETRIEVE_LIMIT,
+            return_documents=False
+        )
+
+        # Initialize all documents with a default rerank score
+        for candidate in qdrant_document_candidates:
+            candidate['rerank_score'] = 0.0
+
+        # Rerank results are sorted by relevance and carry an `index` into the
+        # original candidate list, so map scores back via that index instead of
+        # zipping the rank-ordered results against the unsorted candidates
+        for reranked_doc in reranked_documents.results:
+            try:
+                qdrant_document_candidates[reranked_doc.index]['rerank_score'] = reranked_doc.relevance_score
+            except AttributeError as e:
+
logger.warning(f"Could not get relevance score for document: {e}") + # Keep default score of 0.0 + + logger.info(f"Reranked documents: {len(qdrant_document_candidates)}") + + # Filter out candidates with low rerank scores + qdrant_document_candidates = [ + candidate for candidate in qdrant_document_candidates + if candidate.get('rerank_score', 0.0) >= settings.RERANK_RELEVANCE_THRESHOLD + ] + logger.info(f"Filtered documents: {len(qdrant_document_candidates)}") + + # Return early if no documents meet the threshold + if not qdrant_document_candidates: + logger.warning(f"No documents met the minimum score threshold of {settings.RERANK_RELEVANCE_THRESHOLD}") + return [] + + # Step 3: Compute similarity matrix + dense_embeddings = [candidate['vector']['text-dense'] for candidate in qdrant_document_candidates] + similarity_matrix = cosine_similarity(dense_embeddings) + + # Step 4: Apply MMR + logger.info(f"Applying MMR to {len(qdrant_document_candidates)} documents, to remove most similar documents, and keep {settings.MMR_DOC_RETRIEVE_LIMIT} documents") + relevance_scores = [candidate.get('rerank_score', 0.0) for candidate in qdrant_document_candidates] + diversified_candidates = self._mmr( + documents=qdrant_document_candidates, + query_embedding=dense_embeddings, + relevance_scores=relevance_scores, + similarity_matrix=similarity_matrix, + lambda_param=settings.MMR_DOC_LAMBDA_PARAM, + top_n=settings.MMR_DOC_RETRIEVE_LIMIT + ) + + return self.prepare_documents(diversified_candidates) + + def prepare_documents(self, qdrant_documents): + return [self._prepare_document_dict(doc) for doc in qdrant_documents] + + def _prepare_documents_with_scores_and_feedback(self, qdrant_documents, documents: List[ChatDocument]): + # Create a dictionary mapping document IDs to their scores + score_map = {str(doc.chunk_id): doc.score for doc in documents} + rerank_score_map = {str(doc.chunk_id): doc.rerank_score for doc in documents} + feedback_map = {str(doc.chunk_id): doc.feedback for 
doc in documents} + chunk_id_map = {str(doc.chunk_id): doc.id for doc in documents} + + return [ + self._prepare_document_dict( + doc, + score_map.get(str(doc['id']), 0), + rerank_score_map.get(str(doc['id']), 0), + feedback_map.get(str(doc['id']), None), + chunk_id_map.get(str(doc['id']), None) + ) for doc in qdrant_documents + ] + + def reorder_documents_by_publication_date(self, documents: List[Dict]): + # Filter out ChatDocument instances and convert them to the expected format + formatted_documents = [] + for doc in documents: + if isinstance(doc, ChatDocument): + # Skip ChatDocument instances as they don't contain publication dates + continue + formatted_documents.append(doc) + + return sorted(formatted_documents, key=lambda x: x['data']['published'], reverse=True) + + def _get_best_url(self, doc): + url = "" + if doc['payload']['meta']['doc_url']: + url = doc['payload']['meta']['doc_url'] + elif doc['payload']['meta']['url']: + url = doc['payload']['meta']['url'] + return url + + def _prepare_document_dict(self, doc, score=None, rerank_score=None, feedback=None, id=None): + """Helper method to prepare a single document dictionary""" + + return { + 'id': id, + 'chunk_id': doc['id'], + 'score': score if score is not None else doc['score'], + 'rerank_score': rerank_score if rerank_score is not None else doc['rerank_score'], + 'feedback': feedback, + 'data': { + 'source_id': doc['payload']['meta']['source_id'], + 'url': self._get_best_url(doc), + 'title': doc['payload']['meta']['title'], + 'location': doc['payload']['meta']['location'], + 'location_name': doc['payload']['meta']['location_name'], + # 'modified': doc['payload']['meta']['modified'], + 'published': doc['payload']['meta']['published'], + 'type': doc['payload']['meta']['type'], + 'source': doc['payload']['meta']['source'], + # 'page_number': doc.payload['meta']['page_number'], + # 'page_count': doc['payload']['meta']['page_count'], + 'content': doc['payload']['content'] + } + } + + def _mmr(self, 
documents, query_embedding, relevance_scores, similarity_matrix, lambda_param=0.7, top_n=10): + selected = [] + candidate_indices = list(range(len(documents))) + + # Normalize relevance scores + relevance_scores = np.array(relevance_scores) + if relevance_scores.max() > relevance_scores.min(): + relevance_scores = (relevance_scores - relevance_scores.min()) / (relevance_scores.max() - relevance_scores.min()) + else: + relevance_scores = np.ones_like(relevance_scores) + + while len(selected) < top_n and candidate_indices: + mmr_scores = [] + for idx in candidate_indices: + # Relevance to the query + relevance = relevance_scores[idx] + + # Max similarity to already selected documents + if selected: + sim_to_selected = max([similarity_matrix[idx][sel_idx] for sel_idx in selected]) + else: + sim_to_selected = 0 + + # Calculate MMR score + mmr_score = lambda_param * relevance - (1 - lambda_param) * sim_to_selected + mmr_scores.append((mmr_score, idx)) + + # Select the document with the highest MMR score + mmr_scores.sort(reverse=True) + selected_idx = mmr_scores[0][1] + selected.append(selected_idx) + candidate_indices.remove(selected_idx) + + # Return the selected documents + return [documents[idx] for idx in selected] \ No newline at end of file diff --git a/chat/backend/app/services/session_service.py b/chat/backend/app/services/session_service.py new file mode 100644 index 0000000..46e5e6a --- /dev/null +++ b/chat/backend/app/services/session_service.py @@ -0,0 +1,293 @@ +from .database_service import DatabaseService +from ..models import Session as SessionModel, DocumentFeedback as DocumentFeedbackModel, Message, Document, MessageFeedback as MessageFeedbackModel, MessageDocument +from ..schemas import SessionCreate, ChatMessage, ChatDocument, Session, DocumentFeedback, MessageFeedback, SearchFilter, Location +from fastapi import HTTPException +import uuid +import logging +from typing import List, Dict +from datetime import datetime +from sqlalchemy.orm import 
joinedload
+
+
+# Set up logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+class SessionService(DatabaseService):
+    def __init__(self, db):
+        super().__init__(db)
+
+    def create_session(self, session_create: SessionCreate) -> Session:
+        # Create the session first
+        logger.info(f"Creating session with name: {session_create.name} and messages: {session_create.messages}")
+
+        new_session = SessionModel(
+            id=str(uuid.uuid4()),
+            name=session_create.name,
+            messages=self._messages_schema_to_db_model(session_create.messages)
+        )
+        self.db.add(new_session)
+        self.db.commit()
+        self.db.refresh(new_session)
+
+        return self._session_db_model_to_schema(new_session)
+
+    def update_session_name(self, session_id: str, name: str) -> Session:
+        # _get_session already raises a 404 when the session is missing
+        db_session = self._get_session(session_id)
+        db_session.name = name
+        self.db.commit()
+        self.db.refresh(db_session)
+
+        return self._session_db_model_to_schema(db_session)
+
+    def get_session(self, session_id: str) -> Session:
+        return self._session_db_model_to_schema(self._get_session(session_id))
+
+    def get_session_with_relations(self, session_id: str) -> Session:
+        return self._session_db_model_to_schema(self._get_session_with_relations(session_id))
+
+    def delete_session(self, session_id: str):
+        db_session = self._get_session(session_id)
+        self.db.delete(db_session)
+        self.db.commit()
+
+    def get_messages(self, session: Session) -> List[ChatMessage]:
+        # Query messages with feedback relationship eagerly loaded
+        messages = self.db.query(Message)\
+            .filter(Message.session_id == session.id)\
+            .outerjoin(MessageFeedback)\
+            .options(joinedload(Message.feedback))\
+            .order_by(Message.sequence)\
+            .all()
+
+        return self._messages_db_model_to_schema(messages)
+
+    def get_documents(self, session: Session) -> List[ChatDocument]:
+        #
Query documents directly through message_documents relationship + documents = self.db.query(Document)\ + .join(MessageDocument)\ + .join(Message)\ + .outerjoin(Document.feedback)\ + .filter(Message.session_id == session.id)\ + .distinct()\ + .all() + + return self._documents_db_model_to_schema(documents) + + def add_message(self, session_id: int, message: ChatMessage) -> Session: + db_session = self._get_session(session_id) + db_message = self._message_schema_to_db_model(message, len(db_session.messages)) + + logger.info(f"Adding message with search_filters: {db_message.search_filters}") + + db_session.messages.append(db_message) + self.db.commit() + self.db.refresh(db_session, ['messages']) + + return self._session_db_model_to_schema(db_session) + + def add_and_get_message(self, session_id: int, message: ChatMessage) -> Session: + db_session = self._get_session(session_id) + db_message = self._message_schema_to_db_model(message, len(db_session.messages)) + db_session.messages.append(db_message) + + self.db.commit() + self.db.refresh(db_session, ['messages']) + + logger.info(f"Added message with id: {db_message.id}") + return self._message_db_model_to_schema(db_message) + + def add_messages(self, session_id: int, messages: List[ChatMessage]) -> Session: + db_session = self._get_session(session_id) + db_messages = self._messages_schema_to_db_model(messages) + db_session.messages.extend(db_messages) + self.db.commit() + self.db.refresh(db_session, ['messages']) + return self._session_db_model_to_schema(db_session) + + def update_message(self, message: ChatMessage) -> ChatMessage: + logger.info(f"Updating message with id: {message.id} and content: {message.content}") + db_message = self.db.query(Message)\ + .filter(Message.id == message.id)\ + .first() + + if db_message is None: + raise HTTPException(status_code=404, detail="Message not found") + + db_message.content = message.content + db_message.formatted_content = message.formatted_content + + self.db.commit() + 
self.db.refresh(db_message) + return self._message_db_model_to_schema(db_message) + + def _get_session(self, session_id: str) -> SessionModel: + db_session = self.db.query(SessionModel)\ + .options(joinedload(SessionModel.messages))\ + .filter(SessionModel.id == session_id)\ + .first() + if db_session is None: + raise HTTPException(status_code=404, detail="Session not found") + return db_session + + def _get_session_with_relations(self, session_id: str) -> SessionModel: + db_session = self.db.query(SessionModel)\ + .options( + joinedload(SessionModel.messages).joinedload(Message.feedback), + joinedload(SessionModel.messages).joinedload(Message.documents).joinedload(Document.feedback), + joinedload(SessionModel.feedback) + )\ + .filter(SessionModel.id == session_id)\ + .first() + if db_session is None: + raise HTTPException(status_code=404, detail="Session not found") + return db_session + + # Convert DB models to schemas + def _session_db_model_to_schema(self, db_session: SessionModel) -> Session: + return Session( + id=db_session.id, + name=db_session.name, + messages=self._messages_db_model_to_schema(db_session.messages), + ) + + def _messages_db_model_to_schema(self, db_messages: List[Message]) -> List[ChatMessage]: + return [self._message_db_model_to_schema(msg) for msg in db_messages] + + def _documents_db_model_to_schema(self, db_documents: List[Document]) -> List[ChatDocument]: + return [self._document_db_model_to_schema(doc) for doc in db_documents] + + def _document_db_model_to_schema(self, db_document: Document) -> ChatDocument: + if db_document is None: + return None + + return ChatDocument( + id=db_document.id, + chunk_id=db_document.chunk_id, + content=db_document.content, + score=db_document.score, + rerank_score=db_document.rerank_score, + title=db_document.title, + url=db_document.url, + feedback=self._document_feedback_db_model_to_schema(db_document.feedback) + ) + + def _document_feedback_db_model_to_schema(self, db_feedback: 
DocumentFeedbackModel) -> DocumentFeedback: + if db_feedback is None: + return None + + return DocumentFeedback( + id=db_feedback.id, + document_id=db_feedback.document_id, + created_at=db_feedback.created_at, + feedback_type=db_feedback.feedback_type, + notes=db_feedback.notes + ) + + def _message_feedback_db_model_to_schema(self, db_feedback: MessageFeedbackModel) -> MessageFeedback: + if db_feedback is None: + return None + + return MessageFeedback( + id=db_feedback.id, + message_id=db_feedback.message_id, + created_at=db_feedback.created_at, + feedback_type=db_feedback.feedback_type, + notes=db_feedback.notes + ) + + def _message_db_model_to_schema(self, db_message: Message) -> ChatMessage: + if db_message is None: + return None + + return ChatMessage( + id=db_message.id, + role=db_message.role, + message_type=db_message.message_type, + sequence=db_message.sequence, + content=db_message.content, + formatted_content=db_message.formatted_content, + user_query=db_message.user_query, + rewritten_query_for_vector_base=db_message.rewritten_query_for_vector_base, + rewritten_query_for_llm=db_message.rewritten_query_for_llm, + search_filters=self._search_filters_db_model_to_schema(db_message.search_filters), + feedback=self._message_feedback_db_model_to_schema(db_message.feedback), + documents=self._documents_db_model_to_schema(db_message.documents) + ) + + def _search_filters_db_model_to_schema(self, db_search_filters: Dict) -> SearchFilter: + if db_search_filters is None: + return None + + return SearchFilter( + locations=db_search_filters.get("locations", []), + date_range=db_search_filters.get("date_range", []), + rewrite_query=db_search_filters.get("rewrite_query", True) + ) + + # Convert schemas to DB models + def _session_schema_to_db_model(self, session: Session) -> SessionModel: + if session is None: + return None + + return SessionModel( + id=session.id, + name=session.name, + messages=self._messages_schema_to_db_model(session.messages) + ) + + def 
_messages_schema_to_db_model(self, messages: List[ChatMessage]) -> List[Message]: + return [self._message_schema_to_db_model(message, i) for i, message in enumerate(messages)] + + def _documents_schema_to_db_model(self, documents: List[ChatDocument]) -> List[Document]: + return [self._document_schema_to_db_model(document) for document in documents] + + def _message_schema_to_db_model(self, message: ChatMessage, sequence: int) -> Message: + if message is None: + return None + + db_message = Message( + sequence=sequence, + role=message.role, + message_type=message.message_type, + content=message.content, + formatted_content=message.formatted_content, + user_query=message.user_query, + rewritten_query_for_vector_base=message.rewritten_query_for_vector_base, + rewritten_query_for_llm=message.rewritten_query_for_llm, + documents=self._documents_schema_to_db_model(message.documents), + ) + + if message.search_filters: + db_message.search_filters = self._prepare_search_filters_for_db(message.search_filters) + + return db_message + + def _document_schema_to_db_model(self, document: ChatDocument) -> Document: + if document is None: + return None + + return Document( + chunk_id=document.chunk_id, + content=document.content, + score=document.score, + rerank_score=document.rerank_score, + title=document.title, + url=document.url, + ) + + def _prepare_search_filters_for_db(self, search_filters: SearchFilter) -> Dict: + if not search_filters: + return None + + return { + "locations": self._locations_to_db_model(search_filters.locations), + "date_range": [date.isoformat() for date in search_filters.date_range] if search_filters.date_range else [], + "rewrite_query": search_filters.rewrite_query + } + + def _locations_to_db_model(self, locations: List[Location]): + return [{"id": location.id, "name": location.name, "type": location.type} for location in locations] \ No newline at end of file diff --git a/chat/backend/app/text_utils.py b/chat/backend/app/text_utils.py new file mode 
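Before the converters above store a `SearchFilter` in the JSON column on `Message`, `_prepare_search_filters_for_db` flattens dates to ISO strings and locations to plain dicts so the value is JSON-serializable. A minimal standalone sketch of that serialization step (the dataclasses below stand in for the project's Pydantic schemas; the field values are illustrative):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Location:
    id: str
    name: str
    type: str

@dataclass
class SearchFilter:
    locations: List[Location] = field(default_factory=list)
    date_range: List[date] = field(default_factory=list)
    rewrite_query: bool = True

def prepare_search_filters_for_db(search_filters: Optional[SearchFilter]) -> Optional[dict]:
    # Mirror of SessionService._prepare_search_filters_for_db:
    # dates become ISO strings, locations become plain dicts.
    if not search_filters:
        return None
    return {
        "locations": [
            {"id": loc.id, "name": loc.name, "type": loc.type}
            for loc in search_filters.locations
        ],
        "date_range": [d.isoformat() for d in search_filters.date_range],
        "rewrite_query": search_filters.rewrite_query,
    }

filters = SearchFilter(
    locations=[Location(id="GM0363", name="Amsterdam", type="municipality")],
    date_range=[date(2024, 1, 1), date(2024, 12, 31)],
)
row = prepare_search_filters_for_db(filters)
```

The inverse direction is `_search_filters_db_model_to_schema`, which reads the same keys back with `dict.get` defaults.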
100644 index 0000000..9fa29fa --- /dev/null +++ b/chat/backend/app/text_utils.py @@ -0,0 +1,94 @@ +from datetime import datetime +import locale +import markdown +import logging +import re + +# Set up logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +def get_formatted_current_date_dutch(): + return get_formatted_date_dutch(datetime.now()) + +def get_formatted_current_date_english(): + return get_formatted_date_english(datetime.now()) + +def get_formatted_date_english(date): + logger.debug(f"Formatting date EN: {date}") + + if isinstance(date, str): + try: + date = datetime.fromisoformat(date) + except ValueError: + # If the string is not in ISO format, you might need to specify the format + # date = datetime.strptime(date, '%Y-%m-%d') + return "Invalid date format" + + try: + locale.setlocale(locale.LC_TIME, 'en_US.UTF-8') + except locale.Error: + logger.warning("Failed to set locale to en_US.UTF-8.") + + return date.strftime('%A, %d %B %Y') + +def get_formatted_current_year(): + return datetime.now().year + +def get_formatted_date_dutch(date): + logger.debug(f"Formatting date NL: {date}") + + if isinstance(date, str): + try: + date = datetime.fromisoformat(date) + except ValueError: + # If the string is not in ISO format, you might need to specify the format + # date = datetime.strptime(date, '%Y-%m-%d') + return "Invalid date format" + + try: + locale.setlocale(locale.LC_TIME, 'nl_NL.UTF-8') + except locale.Error: + logger.warning("Failed to set locale to nl_NL.UTF-8.") + + return date.strftime('%A, %d %B %Y') + +def to_markdown(text): + # Replace single line bullet points with properly formatted ones + text = re.sub(r':\s*-\s*', ':\n\n- ', text) + return markdown.markdown(text) + +def format_content(content): + # Strip any HTML tags, then append an ellipsis marker + content = re.sub(r'<[^>]*>', '', content) + content = f'{content} [...]' + return to_markdown(content) + +def add_citations_to_text(text, citations): + if citations is None or
len(citations) == 0: + return text + + citations_list = sorted(citations, key=lambda x: x['start']) + + text_w_citations = "" + last_end = 0 + + for citation in citations_list: + text_w_citations += text[last_end:citation['start']] + citation_text = text[citation['start']:citation['end']] + document_id_list_string = ','.join([f"'{doc_id}'" for doc_id in citation['document_ids']]) + text_w_citations += f'<span class="citation" data-document-ids="[{document_id_list_string}]">{citation_text}</span>' + last_end = citation['end'] + + text_w_citations += text[last_end:] + + return text_w_citations + +def format_text(text, citations): + if citations is None or len(citations) == 0: + return to_markdown(text) + + text_w_citations = add_citations_to_text(text, citations) + html_text = to_markdown(text_w_citations) + return html_text diff --git a/chat/backend/entrypoint.sh b/chat/backend/entrypoint.sh new file mode 100644 index 0000000..8a8573b --- /dev/null +++ b/chat/backend/entrypoint.sh @@ -0,0 +1,8 @@ +#!/bin/sh + +# Run with --reload when ENVIRONMENT is development; otherwise run with multiple workers +if [ "$ENVIRONMENT" = "development" ]; then + exec uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload +else + exec uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 8 +fi diff --git a/chat/backend/requirements.txt b/chat/backend/requirements.txt new file mode 100644 index 0000000..76ea36c --- /dev/null +++ b/chat/backend/requirements.txt @@ -0,0 +1,27 @@ +fastapi +watchgod +qdrant-client +cohere +fastembed +uvicorn[standard] +websockets +python-dotenv +watchfiles +markdown +sqlalchemy +mysqlclient +uuid +pydantic[email,timezone] +pydantic-settings +sentry-sdk[fastapi] +arize-phoenix +litellm +#git+https://github.com/BerriAI/litellm.git@litellm_dev_11_30_2024#egg=litellm +openinference-semantic-conventions +openinference-instrumentation-litellm +arize-phoenix-otel +arize-phoenix-evals +scikit-learn +numpy +alembic +httpx \ No newline at end of file diff --git a/chat/backend/run.py b/chat/backend/run.py new file mode 100644 index 0000000..f1359a8
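The `add_citations_to_text` helper above interleaves cited spans back into the answer text by walking the citations sorted on their start offset and copying the untouched stretches between them. A self-contained sketch of the same interleaving (the `<cite>` wrapper markup here is illustrative, not the app's exact HTML):

```python
def add_citations_to_text(text, citations):
    # Walk citations in order of start offset, copying the text between
    # them verbatim and wrapping each cited span in a marker element.
    if not citations:
        return text
    out, last_end = [], 0
    for c in sorted(citations, key=lambda c: c["start"]):
        out.append(text[last_end:c["start"]])
        span = text[c["start"]:c["end"]]
        ids = ",".join(c["document_ids"])
        out.append(f'<cite data-document-ids="{ids}">{span}</cite>')
        last_end = c["end"]
    out.append(text[last_end:])  # trailing text after the last citation
    return "".join(out)

text = "The council approved the budget in March."
citations = [{"start": 4, "end": 11, "document_ids": ["doc-1", "doc-2"]}]
html = add_citations_to_text(text, citations)
```

Because offsets index the raw text, the wrapping must happen before the markdown-to-HTML conversion in `format_text`; converting first would shift the offsets.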
--- /dev/null +++ b/chat/backend/run.py @@ -0,0 +1,5 @@ +from backend.app.main import app + +if __name__ == "__main__": + import uvicorn + uvicorn.run(app, host="0.0.0.0", port=8000) diff --git a/chat/docker-compose.dev.yml b/chat/docker-compose.dev.yml new file mode 100644 index 0000000..1f0e404 --- /dev/null +++ b/chat/docker-compose.dev.yml @@ -0,0 +1,84 @@ +# Development environment configuration for Bron Chat +# This file extends the base docker-compose.yml with development-specific settings +# It configures hot-reloading, volume mounts, and other development conveniences + +services: + backend: + env_file: + - .env # Load environment variables from .env file + build: + context: ./backend + dockerfile: Dockerfile.dev # Development-specific Dockerfile + ports: + - "8000:8000" # Expose API on port 8000 + environment: + - PYTHONPATH=/app/backend # Ensure Python can find the application modules + - QDRANT_HOST=${QDRANT_HOST} # Vector database host + - QDRANT_PORT=${QDRANT_PORT} # Vector database port + - COHERE_API_KEY=${COHERE_API_KEY} # API key for Cohere LLM service + - ENVIRONMENT=${ENVIRONMENT} # Current environment (development) + - MYSQL_DATABASE=${MYSQL_DATABASE} # Database name + - MYSQL_USER=${MYSQL_USER} # Database user + - MYSQL_PASSWORD=${MYSQL_PASSWORD} # Database password + volumes: + - ./backend:/app/backend # Mount local backend code for hot-reloading + extra_hosts: + - "host.docker.internal:host-gateway" # Allow container to access host services + depends_on: + - mysql # Ensure database is started before backend + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8000/health"] # Health check endpoint + interval: 30s + timeout: 10s + retries: 3 + command: /entrypoint.sh # Development entrypoint script + + frontend: + env_file: + - .env # Load environment variables from .env file + build: + context: ./frontend + dockerfile: Dockerfile.dev # Development-specific Dockerfile + ports: + - "5173:5173" # Expose frontend on port 5173 (SvelteKit 
dev server) + stdin_open: true + tty: true + environment: + - PUBLIC_API_URL=${PUBLIC_API_URL} # API URL for frontend to connect to + volumes: + - ./frontend:/app # Mount local frontend code for hot-reloading + - /app/node_modules # Exclude node_modules from mounting (performance) + command: npm run dev -- --host 0.0.0.0 # Run SvelteKit dev server + depends_on: + - backend # Ensure backend is started before frontend + + mysql: + image: mysql:8.0 # MySQL database server + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} # Root password + MYSQL_DATABASE: ${MYSQL_DATABASE} # Database name + MYSQL_USER: ${MYSQL_USER} # Database user + MYSQL_PASSWORD: ${MYSQL_PASSWORD} # Database password + volumes: + - mysql_data:/var/lib/mysql # Persist database data + ports: + - "3306:3306" # Expose MySQL port + command: > + --character-set-server=utf8mb4 + --collation-server=utf8mb4_unicode_ci + --default-authentication-plugin=caching_sha2_password + --max-connections=2000 + --innodb-buffer-pool-size=4G + --innodb-thread-concurrency=16 + --innodb-read-io-threads=8 + --innodb-write-io-threads=8 + --wait_timeout=300 + --interactive_timeout=300 + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] # Database health check + timeout: 20s + retries: 10 + restart: always # Ensure database restarts on failure + +volumes: + mysql_data: # Named volume for persisting MySQL data diff --git a/chat/docker-compose.prod.yml b/chat/docker-compose.prod.yml new file mode 100644 index 0000000..25757da --- /dev/null +++ b/chat/docker-compose.prod.yml @@ -0,0 +1,123 @@ +# Production environment configuration for Bron Chat +# This file extends the base docker-compose.yml with production-specific settings +# It includes optimizations for performance, security, and reliability + +services: + traefik: + build: + context: . 
+ dockerfile: ./traefik/Dockerfile.prod + image: traefik + depends_on: + - backend + - frontend + volumes: + - traefik:/etc/traefik/acme + ports: + - '0.0.0.0:80:80' + - '0.0.0.0:443:443' + + backend: + build: + context: ./backend + dockerfile: Dockerfile.prod + expose: + - "8000" + environment: + - PYTHONPATH=/app/backend + - QDRANT_HOST=${QDRANT_HOST} + - QDRANT_PORT=${QDRANT_PORT} + - COHERE_API_KEY=${COHERE_API_KEY} + - ENVIRONMENT=${ENVIRONMENT} + - MYSQL_DATABASE=${MYSQL_DATABASE} + - MYSQL_USER=${MYSQL_USER} + - MYSQL_PASSWORD=${MYSQL_PASSWORD} + volumes: + - ./backend:/app/backend + extra_hosts: + - "host.docker.internal:host-gateway" + depends_on: + mysql: + condition: service_healthy + qdrant: + condition: service_started + env_file: + - .env + restart: unless-stopped + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8000/health"] + interval: 30s + timeout: 10s + retries: 3 + command: /entrypoint.sh + + frontend: + build: + context: ./frontend + dockerfile: Dockerfile.prod + args: + - PUBLIC_API_URL=${PUBLIC_API_URL} + expose: + - "3000" + environment: + - PUBLIC_API_URL=${PUBLIC_API_URL} + depends_on: + - backend + restart: unless-stopped + + qdrant: + image: qdrant/qdrant:v1.11.3 + restart: unless-stopped + container_name: qdrant + ports: + - 6333:6333 + - 6334:6334 + # expose: + # - 6333 + # - 6334 + # - 6335 + configs: + - source: qdrant_config + target: /qdrant/config/production.yaml + volumes: + - /var/lib/qdrant_data/storage:/qdrant/storage + + mysql: + image: mysql:8.0 + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_DATABASE: ${MYSQL_DATABASE} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + volumes: + - /var/lib/mysql:/var/lib/mysql + ports: + - "3306:3306" + expose: + - "3306" + command: > + --character-set-server=utf8mb4 + --collation-server=utf8mb4_unicode_ci + --default-authentication-plugin=caching_sha2_password + --max-connections=100 + --innodb-buffer-pool-size=1G + 
--innodb-thread-concurrency=4 + --innodb-read-io-threads=2 + --innodb-write-io-threads=2 + --wait_timeout=28800 + --interactive_timeout=28800 + --max_allowed_packet=256M + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] + timeout: 20s + retries: 10 + restart: unless-stopped + +volumes: + letsencrypt: + traefik: {} + +configs: + qdrant_config: + content: | + log_level: INFO diff --git a/chat/docker-compose.stag.yml b/chat/docker-compose.stag.yml new file mode 100644 index 0000000..aee745e --- /dev/null +++ b/chat/docker-compose.stag.yml @@ -0,0 +1,111 @@ +# Staging environment configuration for Bron Chat +# This file extends the base docker-compose.yml with staging-specific settings +# It mimics the production environment but with staging-specific configurations + +services: + traefik: + build: + context: . + dockerfile: ./traefik/Dockerfile.prod + image: traefik + depends_on: + - backend + - frontend + volumes: + - traefik:/etc/traefik/acme + ports: + - '0.0.0.0:80:80' + - '0.0.0.0:443:443' + + backend: + build: + context: ./backend + dockerfile: Dockerfile.prod + expose: + - "8000" + environment: + - PYTHONPATH=/app/backend + - QDRANT_HOST=${QDRANT_HOST} + - QDRANT_PORT=${QDRANT_PORT} + - COHERE_API_KEY=${COHERE_API_KEY} + - ENVIRONMENT=${ENVIRONMENT} + - MYSQL_DATABASE=${MYSQL_DATABASE} + - MYSQL_USER=${MYSQL_USER} + - MYSQL_PASSWORD=${MYSQL_PASSWORD} + volumes: + - ./backend:/app/backend + extra_hosts: + - "host.docker.internal:host-gateway" + depends_on: + mysql: + condition: service_healthy + qdrant: + condition: service_started + env_file: + - .env + restart: unless-stopped + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8000/health"] + interval: 30s + timeout: 10s + retries: 3 + command: /entrypoint.sh + + frontend: + build: + context: ./frontend + dockerfile: Dockerfile.prod + args: + - PUBLIC_API_URL=${PUBLIC_API_URL} + expose: + - "3000" + environment: + - PUBLIC_API_URL=${PUBLIC_API_URL} + depends_on: + - backend + 
restart: unless-stopped + + qdrant: + image: qdrant/qdrant:v1.12.6 + restart: unless-stopped + container_name: qdrant + ports: + - 6333:6333 + - 6334:6334 + # expose: + # - 6333 + # - 6334 + # - 6335 + configs: + - source: qdrant_config + target: /qdrant/config/production.yaml + volumes: + - /var/lib/qdrant_data/storage:/qdrant/storage + + mysql: + image: mysql:8.0 + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_DATABASE: ${MYSQL_DATABASE} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + volumes: + - /var/lib/mysql:/var/lib/mysql + ports: + - "3306:3306" + expose: + - "3306" + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] + timeout: 20s + retries: 10 + restart: unless-stopped + +volumes: + letsencrypt: + traefik: {} + +configs: + qdrant_config: + content: | + log_level: INFO diff --git a/chat/docker-compose.yml b/chat/docker-compose.yml new file mode 100644 index 0000000..8d182cd --- /dev/null +++ b/chat/docker-compose.yml @@ -0,0 +1,20 @@ +# Base Docker Compose configuration for Bron Chat +# This file contains the common configuration shared across all environments +# It is extended by environment-specific files (dev, prod, staging) + +services: + backend: + # ... existing backend configuration ... 
+ # Add these lines: + command: uvicorn app.main:app --host 0.0.0.0 --port 8000 # Keep your default command + # Add aliases for common operations + profiles: + - app + + # Add a new service for CLI operations + backend-cli: + extends: backend + profiles: + - cli + command: python -m app.config_cli # CLI entry point for administrative tasks + entrypoint: [] \ No newline at end of file diff --git a/chat/frontend/.dockerignore b/chat/frontend/.dockerignore new file mode 100644 index 0000000..5171c54 --- /dev/null +++ b/chat/frontend/.dockerignore @@ -0,0 +1,2 @@ +node_modules +npm-debug.log \ No newline at end of file diff --git a/chat/frontend/.gitignore b/chat/frontend/.gitignore new file mode 100644 index 0000000..79518f7 --- /dev/null +++ b/chat/frontend/.gitignore @@ -0,0 +1,21 @@ +node_modules + +# Output +.output +.vercel +/.svelte-kit +/build + +# OS +.DS_Store +Thumbs.db + +# Env +.env +.env.* +!.env.example +!.env.test + +# Vite +vite.config.js.timestamp-* +vite.config.ts.timestamp-* diff --git a/chat/frontend/.npmrc b/chat/frontend/.npmrc new file mode 100644 index 0000000..b6f27f1 --- /dev/null +++ b/chat/frontend/.npmrc @@ -0,0 +1 @@ +engine-strict=true diff --git a/chat/frontend/.prettierignore b/chat/frontend/.prettierignore new file mode 100644 index 0000000..ab78a95 --- /dev/null +++ b/chat/frontend/.prettierignore @@ -0,0 +1,4 @@ +# Package Managers +package-lock.json +pnpm-lock.yaml +yarn.lock diff --git a/chat/frontend/.prettierrc b/chat/frontend/.prettierrc new file mode 100644 index 0000000..9573023 --- /dev/null +++ b/chat/frontend/.prettierrc @@ -0,0 +1,8 @@ +{ + "useTabs": true, + "singleQuote": true, + "trailingComma": "none", + "printWidth": 100, + "plugins": ["prettier-plugin-svelte"], + "overrides": [{ "files": "*.svelte", "options": { "parser": "svelte" } }] +} diff --git a/chat/frontend/Dockerfile.dev b/chat/frontend/Dockerfile.dev new file mode 100644 index 0000000..d8b1a25 --- /dev/null +++ b/chat/frontend/Dockerfile.dev @@ -0,0 +1,20 
@@ +# Use the official Node.js LTS (Long Term Support) image +FROM node:18-alpine + +# Set the working directory inside the container +WORKDIR /app + +# Copy package.json and package-lock.json (if available) +COPY package*.json ./ + +# Install project dependencies +RUN npm install + +COPY . . + +ENV NODE_ENV=development +ARG PUBLIC_API_URL +ENV PUBLIC_API_URL=${PUBLIC_API_URL} + +# Expose the development server port (default is 5173 for Vite) +EXPOSE 5173 diff --git a/chat/frontend/Dockerfile.prod b/chat/frontend/Dockerfile.prod new file mode 100644 index 0000000..7558764 --- /dev/null +++ b/chat/frontend/Dockerfile.prod @@ -0,0 +1,43 @@ +# Build stage +FROM node:18-alpine as builder + +WORKDIR /app + +# Copy package files +COPY package*.json ./ +COPY postcss.config.js ./ +COPY tailwind.config.js ./ + +# Install dependencies +RUN npm ci + +# Copy source code +COPY . . + +# Add environment variable for build time +ARG PUBLIC_API_URL +ENV PUBLIC_API_URL=${PUBLIC_API_URL} + +# Build the application +RUN npm run build + +# Production stage +FROM node:18-alpine + +WORKDIR /app + +# Copy built application from builder +COPY --from=builder /app/build ./build +COPY --from=builder /app/package*.json ./ + +# Install production dependencies only +RUN npm ci --omit=dev + +EXPOSE 3000 + +ENV NODE_ENV=production +# Add runtime environment variable +ENV PUBLIC_API_URL=${PUBLIC_API_URL} + +# Use @sveltejs/kit's node adapter +CMD ["node", "./build"] diff --git a/chat/frontend/README.md b/chat/frontend/README.md new file mode 100644 index 0000000..5ce6766 --- /dev/null +++ b/chat/frontend/README.md @@ -0,0 +1,38 @@ +# create-svelte + +Everything you need to build a Svelte project, powered by [`create-svelte`](https://github.com/sveltejs/kit/tree/main/packages/create-svelte). + +## Creating a project + +If you're seeing this, you've probably already done this step. Congrats! 
+ +```bash +# create a new project in the current directory +npm create svelte@latest + +# create a new project in my-app +npm create svelte@latest my-app +``` + +## Developing + +Once you've created a project and installed dependencies with `npm install` (or `pnpm install` or `yarn`), start a development server: + +```bash +npm run dev + +# or start the server and open the app in a new browser tab +npm run dev -- --open +``` + +## Building + +To create a production version of your app: + +```bash +npm run build +``` + +You can preview the production build with `npm run preview`. + +> To deploy your app, you may need to install an [adapter](https://kit.svelte.dev/docs/adapters) for your target environment. diff --git a/chat/frontend/eslint.config.js b/chat/frontend/eslint.config.js new file mode 100644 index 0000000..2eeebcc --- /dev/null +++ b/chat/frontend/eslint.config.js @@ -0,0 +1,23 @@ +import js from '@eslint/js'; +import svelte from 'eslint-plugin-svelte'; +import prettier from 'eslint-config-prettier'; +import globals from 'globals'; + +/** @type {import('eslint').Linter.Config[]} */ +export default [ + js.configs.recommended, + ...svelte.configs['flat/recommended'], + prettier, + ...svelte.configs['flat/prettier'], + { + languageOptions: { + globals: { + ...globals.browser, + ...globals.node + } + } + }, + { + ignores: ['build/', '.svelte-kit/', 'dist/'] + } +]; diff --git a/chat/frontend/nginx.conf b/chat/frontend/nginx.conf new file mode 100644 index 0000000..97f67e7 --- /dev/null +++ b/chat/frontend/nginx.conf @@ -0,0 +1,32 @@ +server { + listen 80; + server_name _; + + root /usr/share/nginx/html; + index index.html; + + # Security headers + add_header X-Frame-Options "SAMEORIGIN"; + add_header X-XSS-Protection "1; mode=block"; + add_header X-Content-Type-Options "nosniff"; + + # Gzip compression + gzip on; + gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; + + location 
/ { + try_files $uri $uri/ /index.html; + expires -1; + } + + # Cache static assets + location /assets { + expires 1y; + add_header Cache-Control "public, no-transform"; + } + + # Disable access to . files + location ~ /\. { + deny all; + } +} \ No newline at end of file diff --git a/chat/frontend/package-lock.json b/chat/frontend/package-lock.json new file mode 100644 index 0000000..458caf0 --- /dev/null +++ b/chat/frontend/package-lock.json @@ -0,0 +1,3199 @@ +{ + "name": "frontend", + "version": "0.0.1", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "frontend", + "version": "0.0.1", + "dependencies": { + "@floating-ui/dom": "^1.6.12", + "autoprefixer": "^10.4.20", + "dompurify": "^3.1.7", + "lodash-es": "^4.17.21", + "markdown-it": "^14.1.0", + "markdown-it-footnote": "^4.0.0", + "marked": "^14.1.3", + "nouislider": "^15.8.1", + "postcss": "^8.4.47", + "stopword": "^3.1.1", + "svelte-headlessui": "^0.0.45", + "svelte-select": "^5.8.3", + "svelte-transition": "^0.0.17", + "tailwindcss": "^3.4.13", + "typed.js": "^2.1.0" + }, + "devDependencies": { + "@sveltejs/adapter-node": "^5.2.9", + "@sveltejs/kit": "^2.7.0", + "@sveltejs/vite-plugin-svelte": "^3.1.2", + "@types/eslint": "^9.6.0", + "eslint": "^9.0.0", + "eslint-config-prettier": "^9.1.0", + "eslint-plugin-svelte": "^2.36.0", + "globals": "^15.0.0", + "prettier": "^3.1.1", + "prettier-plugin-svelte": "^3.1.2", + "svelte": "^4.2.7", + "svelte-range-slider-pips": "^3.1.4", + "vite": "^5.4.8" + } + }, + "node_modules/@alloc/quick-lru": { + "version": "5.2.0", + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@ampproject/remapping": { + "version": "2.3.0", + "license": "Apache-2.0", + "dependencies": { + "@jridgewell/gen-mapping": "^0.3.5", + "@jridgewell/trace-mapping": "^0.3.24" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@esbuild/linux-x64": { + "version": 
"0.21.5", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=12" + } + }, + "node_modules/@eslint-community/eslint-utils": { + "version": "4.4.0", + "dev": true, + "license": "MIT", + "dependencies": { + "eslint-visitor-keys": "^3.3.0" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "peerDependencies": { + "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0" + } + }, + "node_modules/@eslint-community/eslint-utils/node_modules/eslint-visitor-keys": { + "version": "3.4.3", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@eslint-community/regexpp": { + "version": "4.11.1", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.0.0 || ^14.0.0 || >=16.0.0" + } + }, + "node_modules/@eslint/config-array": { + "version": "0.18.0", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/object-schema": "^2.1.4", + "debug": "^4.3.1", + "minimatch": "^3.1.2" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/core": { + "version": "0.6.0", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/eslintrc": { + "version": "3.1.0", + "dev": true, + "license": "MIT", + "dependencies": { + "ajv": "^6.12.4", + "debug": "^4.3.2", + "espree": "^10.0.1", + "globals": "^14.0.0", + "ignore": "^5.2.0", + "import-fresh": "^3.2.1", + "js-yaml": "^4.1.0", + "minimatch": "^3.1.2", + "strip-json-comments": "^3.1.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@eslint/eslintrc/node_modules/globals": { + "version": "14.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": 
{ + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@eslint/js": { + "version": "9.12.0", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/object-schema": { + "version": "2.1.4", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/plugin-kit": { + "version": "0.2.0", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "levn": "^0.4.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@floating-ui/core": { + "version": "1.6.8", + "resolved": "https://registry.npmjs.org/@floating-ui/core/-/core-1.6.8.tgz", + "integrity": "sha512-7XJ9cPU+yI2QeLS+FCSlqNFZJq8arvswefkZrYI1yQBbftw6FyrZOxYSh+9S7z7TpeWlRt9zJ5IhM1WIL334jA==", + "license": "MIT", + "dependencies": { + "@floating-ui/utils": "^0.2.8" + } + }, + "node_modules/@floating-ui/dom": { + "version": "1.6.12", + "resolved": "https://registry.npmjs.org/@floating-ui/dom/-/dom-1.6.12.tgz", + "integrity": "sha512-NP83c0HjokcGVEMeoStg317VD9W7eDlGK7457dMBANbKA6GJZdc7rjujdgqzTaz93jkGgc5P/jeWbaCHnMNc+w==", + "license": "MIT", + "dependencies": { + "@floating-ui/core": "^1.6.0", + "@floating-ui/utils": "^0.2.8" + } + }, + "node_modules/@floating-ui/utils": { + "version": "0.2.8", + "resolved": "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.8.tgz", + "integrity": "sha512-kym7SodPp8/wloecOpcmSnWJsK7M0E5Wg8UcFA+uO4B9s5d0ywXOEro/8HM9x0rW+TljRzul/14UYz3TleT3ig==", + "license": "MIT" + }, + "node_modules/@humanfs/core": { + "version": "0.19.0", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node": { + "version": "0.16.5", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@humanfs/core": "^0.19.0", + "@humanwhocodes/retry": "^0.3.0" + }, + "engines": { + "node": ">=18.18.0" + } + }, + 
"node_modules/@humanwhocodes/module-importer": { + "version": "1.0.1", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=12.22" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@humanwhocodes/retry": { + "version": "0.3.1", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@isaacs/cliui": { + "version": "8.0.2", + "license": "ISC", + "dependencies": { + "string-width": "^5.1.2", + "string-width-cjs": "npm:string-width@^4.2.0", + "strip-ansi": "^7.0.1", + "strip-ansi-cjs": "npm:strip-ansi@^6.0.1", + "wrap-ansi": "^8.1.0", + "wrap-ansi-cjs": "npm:wrap-ansi@^7.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/@jridgewell/gen-mapping": { + "version": "0.3.5", + "license": "MIT", + "dependencies": { + "@jridgewell/set-array": "^1.2.1", + "@jridgewell/sourcemap-codec": "^1.4.10", + "@jridgewell/trace-mapping": "^0.3.24" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/resolve-uri": { + "version": "3.1.2", + "license": "MIT", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/set-array": { + "version": "1.2.1", + "license": "MIT", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/sourcemap-codec": { + "version": "1.5.0", + "license": "MIT" + }, + "node_modules/@jridgewell/trace-mapping": { + "version": "0.3.25", + "license": "MIT", + "dependencies": { + "@jridgewell/resolve-uri": "^3.1.0", + "@jridgewell/sourcemap-codec": "^1.4.14" + } + }, + "node_modules/@nodelib/fs.scandir": { + "version": "2.1.5", + "license": "MIT", + "dependencies": { + "@nodelib/fs.stat": "2.0.5", + "run-parallel": "^1.1.9" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.stat": { + "version": "2.0.5", + "license": "MIT", + "engines": { + "node": ">= 8" + } + }, + 
"node_modules/@nodelib/fs.walk": { + "version": "1.2.8", + "license": "MIT", + "dependencies": { + "@nodelib/fs.scandir": "2.1.5", + "fastq": "^1.6.0" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@pkgjs/parseargs": { + "version": "0.11.0", + "license": "MIT", + "optional": true, + "engines": { + "node": ">=14" + } + }, + "node_modules/@polka/url": { + "version": "1.0.0-next.28", + "dev": true, + "license": "MIT" + }, + "node_modules/@rollup/plugin-commonjs": { + "version": "28.0.1", + "resolved": "https://registry.npmjs.org/@rollup/plugin-commonjs/-/plugin-commonjs-28.0.1.tgz", + "integrity": "sha512-+tNWdlWKbpB3WgBN7ijjYkq9X5uhjmcvyjEght4NmH5fAU++zfQzAJ6wumLS+dNcvwEZhKx2Z+skY8m7v0wGSA==", + "dev": true, + "dependencies": { + "@rollup/pluginutils": "^5.0.1", + "commondir": "^1.0.1", + "estree-walker": "^2.0.2", + "fdir": "^6.2.0", + "is-reference": "1.2.1", + "magic-string": "^0.30.3", + "picomatch": "^4.0.2" + }, + "engines": { + "node": ">=16.0.0 || 14 >= 14.17" + }, + "peerDependencies": { + "rollup": "^2.68.0||^3.0.0||^4.0.0" + }, + "peerDependenciesMeta": { + "rollup": { + "optional": true + } + } + }, + "node_modules/@rollup/plugin-commonjs/node_modules/estree-walker": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-2.0.2.tgz", + "integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==", + "dev": true + }, + "node_modules/@rollup/plugin-commonjs/node_modules/is-reference": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/is-reference/-/is-reference-1.2.1.tgz", + "integrity": "sha512-U82MsXXiFIrjCK4otLT+o2NA2Cd2g5MLoOVXUZjIOhLurrRxpEXzI8O0KZHr3IjLvlAH1kTPYSuqer5T9ZVBKQ==", + "dev": true, + "dependencies": { + "@types/estree": "*" + } + }, + "node_modules/@rollup/plugin-json": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/@rollup/plugin-json/-/plugin-json-6.1.0.tgz", + "integrity": 
"sha512-EGI2te5ENk1coGeADSIwZ7G2Q8CJS2sF120T7jLw4xFw9n7wIOXHo+kIYRAoVpJAN+kmqZSoO3Fp4JtoNF4ReA==", + "dev": true, + "dependencies": { + "@rollup/pluginutils": "^5.1.0" + }, + "engines": { + "node": ">=14.0.0" + }, + "peerDependencies": { + "rollup": "^1.20.0||^2.0.0||^3.0.0||^4.0.0" + }, + "peerDependenciesMeta": { + "rollup": { + "optional": true + } + } + }, + "node_modules/@rollup/plugin-node-resolve": { + "version": "15.3.0", + "resolved": "https://registry.npmjs.org/@rollup/plugin-node-resolve/-/plugin-node-resolve-15.3.0.tgz", + "integrity": "sha512-9eO5McEICxMzJpDW9OnMYSv4Sta3hmt7VtBFz5zR9273suNOydOyq/FrGeGy+KsTRFm8w0SLVhzig2ILFT63Ag==", + "dev": true, + "dependencies": { + "@rollup/pluginutils": "^5.0.1", + "@types/resolve": "1.20.2", + "deepmerge": "^4.2.2", + "is-module": "^1.0.0", + "resolve": "^1.22.1" + }, + "engines": { + "node": ">=14.0.0" + }, + "peerDependencies": { + "rollup": "^2.78.0||^3.0.0||^4.0.0" + }, + "peerDependenciesMeta": { + "rollup": { + "optional": true + } + } + }, + "node_modules/@rollup/pluginutils": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/@rollup/pluginutils/-/pluginutils-5.1.2.tgz", + "integrity": "sha512-/FIdS3PyZ39bjZlwqFnWqCOVnW7o963LtKMwQOD0NhQqw22gSr2YY1afu3FxRip4ZCZNsD5jq6Aaz6QV3D/Njw==", + "dev": true, + "dependencies": { + "@types/estree": "^1.0.0", + "estree-walker": "^2.0.2", + "picomatch": "^2.3.1" + }, + "engines": { + "node": ">=14.0.0" + }, + "peerDependencies": { + "rollup": "^1.20.0||^2.0.0||^3.0.0||^4.0.0" + }, + "peerDependenciesMeta": { + "rollup": { + "optional": true + } + } + }, + "node_modules/@rollup/pluginutils/node_modules/estree-walker": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-2.0.2.tgz", + "integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==", + "dev": true + }, + "node_modules/@rollup/pluginutils/node_modules/picomatch": { + "version": "2.3.1", + "resolved": 
"https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", + "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", + "dev": true, + "engines": { + "node": ">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/@rollup/rollup-linux-x64-gnu": { + "version": "4.24.0", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-x64-musl": { + "version": "4.24.0", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@sveltejs/adapter-node": { + "version": "5.2.9", + "resolved": "https://registry.npmjs.org/@sveltejs/adapter-node/-/adapter-node-5.2.9.tgz", + "integrity": "sha512-51euNrx0AcaTu8//wDfVh7xmqQSVgU52rfinE/MwvGkJa4nHPJMHmzv6+OIpmxg7gZaF6+5NVlxnieCzxLD59g==", + "dev": true, + "dependencies": { + "@rollup/plugin-commonjs": "^28.0.1", + "@rollup/plugin-json": "^6.1.0", + "@rollup/plugin-node-resolve": "^15.3.0", + "rollup": "^4.9.5" + }, + "peerDependencies": { + "@sveltejs/kit": "^2.4.0" + } + }, + "node_modules/@sveltejs/kit": { + "version": "2.7.0", + "resolved": "https://registry.npmjs.org/@sveltejs/kit/-/kit-2.7.0.tgz", + "integrity": "sha512-4XyY1SCB/Eyz8E9G7SEBKViysYwVtDftuA7kyQ5bmuFNPWC1KZC4988rMvaJxhH2BbCTsbLPjNOZwiEGXt8/2g==", + "dev": true, + "hasInstallScript": true, + "dependencies": { + "@types/cookie": "^0.6.0", + "cookie": "^0.6.0", + "devalue": "^5.1.0", + "esm-env": "^1.0.0", + "import-meta-resolve": "^4.1.0", + "kleur": "^4.1.5", + "magic-string": "^0.30.5", + "mrmime": "^2.0.0", + "sade": "^1.8.1", + "set-cookie-parser": "^2.6.0", + "sirv": "^2.0.4", + "tiny-glob": "^0.2.9" + }, + "bin": { + "svelte-kit": "svelte-kit.js" + }, + "engines": { + "node": ">=18.13" + }, + "peerDependencies": { + "@sveltejs/vite-plugin-svelte": "^3.0.0 || ^4.0.0-next.1", + "svelte": "^4.0.0 || 
^5.0.0-next.0", + "vite": "^5.0.3" + } + }, + "node_modules/@sveltejs/vite-plugin-svelte": { + "version": "3.1.2", + "dev": true, + "license": "MIT", + "dependencies": { + "@sveltejs/vite-plugin-svelte-inspector": "^2.1.0", + "debug": "^4.3.4", + "deepmerge": "^4.3.1", + "kleur": "^4.1.5", + "magic-string": "^0.30.10", + "svelte-hmr": "^0.16.0", + "vitefu": "^0.2.5" + }, + "engines": { + "node": "^18.0.0 || >=20" + }, + "peerDependencies": { + "svelte": "^4.0.0 || ^5.0.0-next.0", + "vite": "^5.0.0" + } + }, + "node_modules/@sveltejs/vite-plugin-svelte-inspector": { + "version": "2.1.0", + "dev": true, + "license": "MIT", + "dependencies": { + "debug": "^4.3.4" + }, + "engines": { + "node": "^18.0.0 || >=20" + }, + "peerDependencies": { + "@sveltejs/vite-plugin-svelte": "^3.0.0", + "svelte": "^4.0.0 || ^5.0.0-next.0", + "vite": "^5.0.0" + } + }, + "node_modules/@types/cookie": { + "version": "0.6.0", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/eslint": { + "version": "9.6.1", + "dev": true, + "license": "MIT", + "dependencies": { + "@types/estree": "*", + "@types/json-schema": "*" + } + }, + "node_modules/@types/estree": { + "version": "1.0.6", + "license": "MIT" + }, + "node_modules/@types/json-schema": { + "version": "7.0.15", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/resolve": { + "version": "1.20.2", + "resolved": "https://registry.npmjs.org/@types/resolve/-/resolve-1.20.2.tgz", + "integrity": "sha512-60BCwRFOZCQhDncwQdxxeOEEkbc5dIMccYLwbxsS4TUNeVECQ/pBJ0j09mrHOl/JJvpRPGwO9SvE4nR2Nb/a4Q==", + "dev": true + }, + "node_modules/acorn": { + "version": "8.12.1", + "license": "MIT", + "bin": { + "acorn": "bin/acorn" + }, + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/acorn-jsx": { + "version": "5.3.2", + "dev": true, + "license": "MIT", + "peerDependencies": { + "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0" + } + }, + "node_modules/ajv": { + "version": "6.12.6", + "dev": true, + "license": "MIT", + "dependencies": { + 
"fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ansi-regex": { + "version": "6.1.0", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-regex?sponsor=1" + } + }, + "node_modules/ansi-styles": { + "version": "4.3.0", + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/any-promise": { + "version": "1.3.0", + "license": "MIT" + }, + "node_modules/anymatch": { + "version": "3.1.3", + "license": "ISC", + "dependencies": { + "normalize-path": "^3.0.0", + "picomatch": "^2.0.4" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/anymatch/node_modules/picomatch": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", + "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", + "engines": { + "node": ">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/arg": { + "version": "5.0.2", + "license": "MIT" + }, + "node_modules/argparse": { + "version": "2.0.1", + "license": "Python-2.0" + }, + "node_modules/aria-query": { + "version": "5.3.2", + "license": "Apache-2.0", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/autoprefixer": { + "version": "10.4.20", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/autoprefixer" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "browserslist": "^4.23.3", + "caniuse-lite": 
"^1.0.30001646", + "fraction.js": "^4.3.7", + "normalize-range": "^0.1.2", + "picocolors": "^1.0.1", + "postcss-value-parser": "^4.2.0" + }, + "bin": { + "autoprefixer": "bin/autoprefixer" + }, + "engines": { + "node": "^10 || ^12 || >=14" + }, + "peerDependencies": { + "postcss": "^8.1.0" + } + }, + "node_modules/axobject-query": { + "version": "4.1.0", + "license": "Apache-2.0", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/balanced-match": { + "version": "1.0.2", + "license": "MIT" + }, + "node_modules/binary-extensions": { + "version": "2.3.0", + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/brace-expansion": { + "version": "1.1.11", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/braces": { + "version": "3.0.3", + "license": "MIT", + "dependencies": { + "fill-range": "^7.1.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/browserslist": { + "version": "4.24.0", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/browserslist" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/browserslist" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "caniuse-lite": "^1.0.30001663", + "electron-to-chromium": "^1.5.28", + "node-releases": "^2.0.18", + "update-browserslist-db": "^1.1.0" + }, + "bin": { + "browserslist": "cli.js" + }, + "engines": { + "node": "^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7" + } + }, + "node_modules/callsites": { + "version": "3.1.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/camelcase-css": { + "version": "2.0.1", + "license": "MIT", + "engines": { + "node": ">= 6" + } + }, + "node_modules/caniuse-lite": { + "version": "1.0.30001667", + "funding": 
[ + { + "type": "opencollective", + "url": "https://opencollective.com/browserslist" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/caniuse-lite" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "CC-BY-4.0" + }, + "node_modules/chalk": { + "version": "4.1.2", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/chokidar": { + "version": "3.6.0", + "license": "MIT", + "dependencies": { + "anymatch": "~3.1.2", + "braces": "~3.0.2", + "glob-parent": "~5.1.2", + "is-binary-path": "~2.1.0", + "is-glob": "~4.0.1", + "normalize-path": "~3.0.0", + "readdirp": "~3.6.0" + }, + "engines": { + "node": ">= 8.10.0" + }, + "funding": { + "url": "https://paulmillr.com/funding/" + }, + "optionalDependencies": { + "fsevents": "~2.3.2" + } + }, + "node_modules/chokidar/node_modules/glob-parent": { + "version": "5.1.2", + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/code-red": { + "version": "1.0.4", + "license": "MIT", + "dependencies": { + "@jridgewell/sourcemap-codec": "^1.4.15", + "@types/estree": "^1.0.1", + "acorn": "^8.10.0", + "estree-walker": "^3.0.3", + "periscopic": "^3.1.0" + } + }, + "node_modules/color-convert": { + "version": "2.0.1", + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/color-name": { + "version": "1.1.4", + "license": "MIT" + }, + "node_modules/commander": { + "version": "4.1.1", + "license": "MIT", + "engines": { + "node": ">= 6" + } + }, + "node_modules/commondir": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/commondir/-/commondir-1.0.1.tgz", + "integrity": 
"sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg==", + "dev": true + }, + "node_modules/concat-map": { + "version": "0.0.1", + "dev": true, + "license": "MIT" + }, + "node_modules/cookie": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.6.0.tgz", + "integrity": "sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw==", + "dev": true, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cross-spawn": { + "version": "7.0.3", + "license": "MIT", + "dependencies": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/css-tree": { + "version": "2.3.1", + "license": "MIT", + "dependencies": { + "mdn-data": "2.0.30", + "source-map-js": "^1.0.1" + }, + "engines": { + "node": "^10 || ^12.20.0 || ^14.13.0 || >=15.0.0" + } + }, + "node_modules/cssesc": { + "version": "3.0.0", + "license": "MIT", + "bin": { + "cssesc": "bin/cssesc" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/debug": { + "version": "4.3.7", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/deep-is": { + "version": "0.1.4", + "dev": true, + "license": "MIT" + }, + "node_modules/deepmerge": { + "version": "4.3.1", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/devalue": { + "version": "5.1.1", + "dev": true, + "license": "MIT" + }, + "node_modules/didyoumean": { + "version": "1.2.2", + "license": "Apache-2.0" + }, + "node_modules/dlv": { + "version": "1.1.3", + "license": "MIT" + }, + "node_modules/dompurify": { + "version": "3.1.7", + "resolved": "https://registry.npmjs.org/dompurify/-/dompurify-3.1.7.tgz", + "integrity": 
"sha512-VaTstWtsneJY8xzy7DekmYWEOZcmzIe3Qb3zPd4STve1OBTa+e+WmS1ITQec1fZYXI3HCsOZZiSMpG6oxoWMWQ==" + }, + "node_modules/eastasianwidth": { + "version": "0.2.0", + "license": "MIT" + }, + "node_modules/electron-to-chromium": { + "version": "1.5.32", + "license": "ISC" + }, + "node_modules/emoji-regex": { + "version": "9.2.2", + "license": "MIT" + }, + "node_modules/entities": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz", + "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/esbuild": { + "version": "0.21.5", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "bin": { + "esbuild": "bin/esbuild" + }, + "engines": { + "node": ">=12" + }, + "optionalDependencies": { + "@esbuild/aix-ppc64": "0.21.5", + "@esbuild/android-arm": "0.21.5", + "@esbuild/android-arm64": "0.21.5", + "@esbuild/android-x64": "0.21.5", + "@esbuild/darwin-arm64": "0.21.5", + "@esbuild/darwin-x64": "0.21.5", + "@esbuild/freebsd-arm64": "0.21.5", + "@esbuild/freebsd-x64": "0.21.5", + "@esbuild/linux-arm": "0.21.5", + "@esbuild/linux-arm64": "0.21.5", + "@esbuild/linux-ia32": "0.21.5", + "@esbuild/linux-loong64": "0.21.5", + "@esbuild/linux-mips64el": "0.21.5", + "@esbuild/linux-ppc64": "0.21.5", + "@esbuild/linux-riscv64": "0.21.5", + "@esbuild/linux-s390x": "0.21.5", + "@esbuild/linux-x64": "0.21.5", + "@esbuild/netbsd-x64": "0.21.5", + "@esbuild/openbsd-x64": "0.21.5", + "@esbuild/sunos-x64": "0.21.5", + "@esbuild/win32-arm64": "0.21.5", + "@esbuild/win32-ia32": "0.21.5", + "@esbuild/win32-x64": "0.21.5" + } + }, + "node_modules/escalade": { + "version": "3.2.0", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/escape-string-regexp": { + "version": "4.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": 
">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/eslint": { + "version": "9.12.0", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.2.0", + "@eslint-community/regexpp": "^4.11.0", + "@eslint/config-array": "^0.18.0", + "@eslint/core": "^0.6.0", + "@eslint/eslintrc": "^3.1.0", + "@eslint/js": "9.12.0", + "@eslint/plugin-kit": "^0.2.0", + "@humanfs/node": "^0.16.5", + "@humanwhocodes/module-importer": "^1.0.1", + "@humanwhocodes/retry": "^0.3.1", + "@types/estree": "^1.0.6", + "@types/json-schema": "^7.0.15", + "ajv": "^6.12.4", + "chalk": "^4.0.0", + "cross-spawn": "^7.0.2", + "debug": "^4.3.2", + "escape-string-regexp": "^4.0.0", + "eslint-scope": "^8.1.0", + "eslint-visitor-keys": "^4.1.0", + "espree": "^10.2.0", + "esquery": "^1.5.0", + "esutils": "^2.0.2", + "fast-deep-equal": "^3.1.3", + "file-entry-cache": "^8.0.0", + "find-up": "^5.0.0", + "glob-parent": "^6.0.2", + "ignore": "^5.2.0", + "imurmurhash": "^0.1.4", + "is-glob": "^4.0.0", + "json-stable-stringify-without-jsonify": "^1.0.1", + "lodash.merge": "^4.6.2", + "minimatch": "^3.1.2", + "natural-compare": "^1.4.0", + "optionator": "^0.9.3", + "text-table": "^0.2.0" + }, + "bin": { + "eslint": "bin/eslint.js" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://eslint.org/donate" + }, + "peerDependencies": { + "jiti": "*" + }, + "peerDependenciesMeta": { + "jiti": { + "optional": true + } + } + }, + "node_modules/eslint-compat-utils": { + "version": "0.5.1", + "dev": true, + "license": "MIT", + "dependencies": { + "semver": "^7.5.4" + }, + "engines": { + "node": ">=12" + }, + "peerDependencies": { + "eslint": ">=6.0.0" + } + }, + "node_modules/eslint-config-prettier": { + "version": "9.1.0", + "dev": true, + "license": "MIT", + "bin": { + "eslint-config-prettier": "bin/cli.js" + }, + "peerDependencies": { + "eslint": ">=7.0.0" + } + }, + 
"node_modules/eslint-plugin-svelte": { + "version": "2.44.1", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.4.0", + "@jridgewell/sourcemap-codec": "^1.4.15", + "eslint-compat-utils": "^0.5.1", + "esutils": "^2.0.3", + "known-css-properties": "^0.34.0", + "postcss": "^8.4.38", + "postcss-load-config": "^3.1.4", + "postcss-safe-parser": "^6.0.0", + "postcss-selector-parser": "^6.1.0", + "semver": "^7.6.2", + "svelte-eslint-parser": "^0.41.1" + }, + "engines": { + "node": "^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/ota-meshi" + }, + "peerDependencies": { + "eslint": "^7.0.0 || ^8.0.0-0 || ^9.0.0-0", + "svelte": "^3.37.0 || ^4.0.0 || ^5.0.0-next.191" + }, + "peerDependenciesMeta": { + "svelte": { + "optional": true + } + } + }, + "node_modules/eslint-scope": { + "version": "8.1.0", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "esrecurse": "^4.3.0", + "estraverse": "^5.2.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint-visitor-keys": { + "version": "4.1.0", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/esm-env": { + "version": "1.0.0", + "dev": true, + "license": "MIT" + }, + "node_modules/espree": { + "version": "10.2.0", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "acorn": "^8.12.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^4.1.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/esquery": { + "version": "1.6.0", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "estraverse": "^5.1.0" + }, + "engines": { + "node": ">=0.10" + } + }, + 
"node_modules/esrecurse": { + "version": "4.3.0", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "estraverse": "^5.2.0" + }, + "engines": { + "node": ">=4.0" + } + }, + "node_modules/estraverse": { + "version": "5.3.0", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/estree-walker": { + "version": "3.0.3", + "license": "MIT", + "dependencies": { + "@types/estree": "^1.0.0" + } + }, + "node_modules/esutils": { + "version": "2.0.3", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/fast-deep-equal": { + "version": "3.1.3", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-glob": { + "version": "3.3.2", + "license": "MIT", + "dependencies": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.2", + "merge2": "^1.3.0", + "micromatch": "^4.0.4" + }, + "engines": { + "node": ">=8.6.0" + } + }, + "node_modules/fast-glob/node_modules/glob-parent": { + "version": "5.1.2", + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/fast-json-stable-stringify": { + "version": "2.1.0", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-levenshtein": { + "version": "2.0.6", + "dev": true, + "license": "MIT" + }, + "node_modules/fastq": { + "version": "1.17.1", + "license": "ISC", + "dependencies": { + "reusify": "^1.0.4" + } + }, + "node_modules/fdir": { + "version": "6.4.2", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.4.2.tgz", + "integrity": "sha512-KnhMXsKSPZlAhp7+IjUkRZKPb4fUyccpDrdFXbi4QL1qkmFh9kVY09Yox+n4MaOb3lHZ1Tv829C3oaaXoMYPDQ==", + "dev": true, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } + } + }, + "node_modules/file-entry-cache": { + "version": "8.0.0", + "dev": true, + "license": "MIT", + "dependencies": { + "flat-cache": "^4.0.0" + 
}, + "engines": { + "node": ">=16.0.0" + } + }, + "node_modules/fill-range": { + "version": "7.1.1", + "license": "MIT", + "dependencies": { + "to-regex-range": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/find-up": { + "version": "5.0.0", + "dev": true, + "license": "MIT", + "dependencies": { + "locate-path": "^6.0.0", + "path-exists": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/flat-cache": { + "version": "4.0.1", + "dev": true, + "license": "MIT", + "dependencies": { + "flatted": "^3.2.9", + "keyv": "^4.5.4" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/flatted": { + "version": "3.3.1", + "dev": true, + "license": "ISC" + }, + "node_modules/foreground-child": { + "version": "3.3.0", + "license": "ISC", + "dependencies": { + "cross-spawn": "^7.0.0", + "signal-exit": "^4.0.1" + }, + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/fraction.js": { + "version": "4.3.7", + "license": "MIT", + "engines": { + "node": "*" + }, + "funding": { + "type": "patreon", + "url": "https://github.com/sponsors/rawify" + } + }, + "node_modules/function-bind": { + "version": "1.1.2", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/glob": { + "version": "10.4.5", + "license": "ISC", + "dependencies": { + "foreground-child": "^3.1.0", + "jackspeak": "^3.1.2", + "minimatch": "^9.0.4", + "minipass": "^7.1.2", + "package-json-from-dist": "^1.0.0", + "path-scurry": "^1.11.1" + }, + "bin": { + "glob": "dist/esm/bin.mjs" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/glob-parent": { + "version": "6.0.2", + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.3" + }, + "engines": { + "node": ">=10.13.0" + } + }, + "node_modules/glob/node_modules/brace-expansion": { + "version": 
"2.0.1", + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0" + } + }, + "node_modules/glob/node_modules/minimatch": { + "version": "9.0.5", + "license": "ISC", + "dependencies": { + "brace-expansion": "^2.0.1" + }, + "engines": { + "node": ">=16 || 14 >=14.17" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/globals": { + "version": "15.10.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/globalyzer": { + "version": "0.1.0", + "dev": true, + "license": "MIT" + }, + "node_modules/globrex": { + "version": "0.1.2", + "dev": true, + "license": "MIT" + }, + "node_modules/has-flag": { + "version": "4.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "license": "MIT", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/ignore": { + "version": "5.3.2", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + "node_modules/import-fresh": { + "version": "3.3.0", + "dev": true, + "license": "MIT", + "dependencies": { + "parent-module": "^1.0.0", + "resolve-from": "^4.0.0" + }, + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/import-meta-resolve": { + "version": "4.1.0", + "dev": true, + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/imurmurhash": { + "version": "0.1.4", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.19" + } + }, + "node_modules/is-binary-path": { + "version": "2.1.0", + "license": "MIT", + "dependencies": { + "binary-extensions": "^2.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/is-core-module": { + "version": "2.15.1", + "license": 
"MIT", + "dependencies": { + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-fullwidth-code-point": { + "version": "3.0.0", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "license": "MIT", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-module": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-module/-/is-module-1.0.0.tgz", + "integrity": "sha512-51ypPSPCoTEIN9dy5Oy+h4pShgJmPCygKfyRCISBI+JoWT/2oJvK8QPxmwv7b/p239jXrm9M1mlQbyKJ5A152g==", + "dev": true + }, + "node_modules/is-number": { + "version": "7.0.0", + "license": "MIT", + "engines": { + "node": ">=0.12.0" + } + }, + "node_modules/is-reference": { + "version": "3.0.2", + "license": "MIT", + "dependencies": { + "@types/estree": "*" + } + }, + "node_modules/isexe": { + "version": "2.0.0", + "license": "ISC" + }, + "node_modules/jackspeak": { + "version": "3.4.3", + "license": "BlueOak-1.0.0", + "dependencies": { + "@isaacs/cliui": "^8.0.2" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + }, + "optionalDependencies": { + "@pkgjs/parseargs": "^0.11.0" + } + }, + "node_modules/jiti": { + "version": "1.21.6", + "license": "MIT", + "bin": { + "jiti": "bin/jiti.js" + } + }, + "node_modules/js-yaml": { + "version": "4.1.0", + "dev": true, + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/json-buffer": { + "version": "3.0.1", + "dev": true, + "license": "MIT" + }, + "node_modules/json-schema-traverse": { + "version": "0.4.1", + "dev": true, + "license": "MIT" + }, + "node_modules/json-stable-stringify-without-jsonify": { + "version": "1.0.1", + "dev": 
true, + "license": "MIT" + }, + "node_modules/keyv": { + "version": "4.5.4", + "dev": true, + "license": "MIT", + "dependencies": { + "json-buffer": "3.0.1" + } + }, + "node_modules/kleur": { + "version": "4.1.5", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/known-css-properties": { + "version": "0.34.0", + "dev": true, + "license": "MIT" + }, + "node_modules/levn": { + "version": "0.4.1", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1", + "type-check": "~0.4.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/lilconfig": { + "version": "2.1.0", + "license": "MIT", + "engines": { + "node": ">=10" + } + }, + "node_modules/lines-and-columns": { + "version": "1.2.4", + "license": "MIT" + }, + "node_modules/linkify-it": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/linkify-it/-/linkify-it-5.0.0.tgz", + "integrity": "sha512-5aHCbzQRADcdP+ATqnDuhhJ/MRIqDkZX5pyjFHRRysS8vZ5AbqGEoFIb6pYHPZ+L/OC2Lc+xT8uHVVR5CAK/wQ==", + "dependencies": { + "uc.micro": "^2.0.0" + } + }, + "node_modules/locate-character": { + "version": "3.0.0", + "license": "MIT" + }, + "node_modules/locate-path": { + "version": "6.0.0", + "dev": true, + "license": "MIT", + "dependencies": { + "p-locate": "^5.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/lodash-es": { + "version": "4.17.21", + "resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.21.tgz", + "integrity": "sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw==", + "license": "MIT" + }, + "node_modules/lodash.merge": { + "version": "4.6.2", + "dev": true, + "license": "MIT" + }, + "node_modules/lru-cache": { + "version": "10.4.3", + "license": "ISC" + }, + "node_modules/magic-string": { + "version": "0.30.11", + "license": "MIT", + "dependencies": { + "@jridgewell/sourcemap-codec": 
"^1.5.0" + } + }, + "node_modules/markdown-it": { + "version": "14.1.0", + "resolved": "https://registry.npmjs.org/markdown-it/-/markdown-it-14.1.0.tgz", + "integrity": "sha512-a54IwgWPaeBCAAsv13YgmALOF1elABB08FxO9i+r4VFk5Vl4pKokRPeX8u5TCgSsPi6ec1otfLjdOpVcgbpshg==", + "dependencies": { + "argparse": "^2.0.1", + "entities": "^4.4.0", + "linkify-it": "^5.0.0", + "mdurl": "^2.0.0", + "punycode.js": "^2.3.1", + "uc.micro": "^2.1.0" + }, + "bin": { + "markdown-it": "bin/markdown-it.mjs" + } + }, + "node_modules/markdown-it-footnote": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/markdown-it-footnote/-/markdown-it-footnote-4.0.0.tgz", + "integrity": "sha512-WYJ7urf+khJYl3DqofQpYfEYkZKbmXmwxQV8c8mO/hGIhgZ1wOe7R4HLFNwqx7TjILbnC98fuyeSsin19JdFcQ==" + }, + "node_modules/marked": { + "version": "14.1.3", + "resolved": "https://registry.npmjs.org/marked/-/marked-14.1.3.tgz", + "integrity": "sha512-ZibJqTULGlt9g5k4VMARAktMAjXoVnnr+Y3aCqW1oDftcV4BA3UmrBifzXoZyenHRk75csiPu9iwsTj4VNBT0g==", + "bin": { + "marked": "bin/marked.js" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/mdn-data": { + "version": "2.0.30", + "license": "CC0-1.0" + }, + "node_modules/mdurl": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-2.0.0.tgz", + "integrity": "sha512-Lf+9+2r+Tdp5wXDXC4PcIBjTDtq4UKjCPMQhKIuzpJNW0b96kVqSwW0bT7FhRSfmAiFYgP+SCRvdrDozfh0U5w==" + }, + "node_modules/merge2": { + "version": "1.4.1", + "license": "MIT", + "engines": { + "node": ">= 8" + } + }, + "node_modules/micromatch": { + "version": "4.0.8", + "license": "MIT", + "dependencies": { + "braces": "^3.0.3", + "picomatch": "^2.3.1" + }, + "engines": { + "node": ">=8.6" + } + }, + "node_modules/micromatch/node_modules/picomatch": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", + "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", + "engines": { + "node": 
">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/minimatch": { + "version": "3.1.2", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/minipass": { + "version": "7.1.2", + "license": "ISC", + "engines": { + "node": ">=16 || 14 >=14.17" + } + }, + "node_modules/mri": { + "version": "1.2.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/mrmime": { + "version": "2.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "dev": true, + "license": "MIT" + }, + "node_modules/mz": { + "version": "2.7.0", + "license": "MIT", + "dependencies": { + "any-promise": "^1.0.0", + "object-assign": "^4.0.1", + "thenify-all": "^1.0.0" + } + }, + "node_modules/nanoid": { + "version": "3.3.7", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "bin": { + "nanoid": "bin/nanoid.cjs" + }, + "engines": { + "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1" + } + }, + "node_modules/natural-compare": { + "version": "1.4.0", + "dev": true, + "license": "MIT" + }, + "node_modules/node-releases": { + "version": "2.0.18", + "license": "MIT" + }, + "node_modules/normalize-path": { + "version": "3.0.0", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/normalize-range": { + "version": "0.1.2", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/nouislider": { + "version": "15.8.1", + "resolved": "https://registry.npmjs.org/nouislider/-/nouislider-15.8.1.tgz", + "integrity": "sha512-93TweAi8kqntHJSPiSWQ1o/uZ29VWOmal9YKb6KKGGlCkugaNfAupT7o1qTHqdJvNQ7S0su5rO6qRFCjP8fxtw==", + "license": "MIT" + }, + "node_modules/object-assign": { + "version": "4.1.1", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + 
"node_modules/object-hash": { + "version": "3.0.0", + "license": "MIT", + "engines": { + "node": ">= 6" + } + }, + "node_modules/optionator": { + "version": "0.9.4", + "dev": true, + "license": "MIT", + "dependencies": { + "deep-is": "^0.1.3", + "fast-levenshtein": "^2.0.6", + "levn": "^0.4.1", + "prelude-ls": "^1.2.1", + "type-check": "^0.4.0", + "word-wrap": "^1.2.5" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/p-limit": { + "version": "3.1.0", + "dev": true, + "license": "MIT", + "dependencies": { + "yocto-queue": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/p-locate": { + "version": "5.0.0", + "dev": true, + "license": "MIT", + "dependencies": { + "p-limit": "^3.0.2" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/package-json-from-dist": { + "version": "1.0.1", + "license": "BlueOak-1.0.0" + }, + "node_modules/parent-module": { + "version": "1.0.1", + "dev": true, + "license": "MIT", + "dependencies": { + "callsites": "^3.0.0" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/path-exists": { + "version": "4.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-key": { + "version": "3.1.1", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-parse": { + "version": "1.0.7", + "license": "MIT" + }, + "node_modules/path-scurry": { + "version": "1.11.1", + "license": "BlueOak-1.0.0", + "dependencies": { + "lru-cache": "^10.2.0", + "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" + }, + "engines": { + "node": ">=16 || 14 >=14.18" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/periscopic": { + "version": "3.1.0", + "license": "MIT", + "dependencies": { + "@types/estree": "^1.0.0", + "estree-walker": "^3.0.0", + "is-reference": "^3.0.0" + } + }, 
+ "node_modules/picocolors": { + "version": "1.1.0", + "license": "ISC" + }, + "node_modules/picomatch": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.2.tgz", + "integrity": "sha512-M7BAV6Rlcy5u+m6oPhAPFgJTzAioX/6B0DxyvDlo9l8+T3nLKbrczg2WLUyzd45L8RqfUMyGPzekbMvX2Ldkwg==", + "dev": true, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/pify": { + "version": "2.3.0", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/pirates": { + "version": "4.0.6", + "license": "MIT", + "engines": { + "node": ">= 6" + } + }, + "node_modules/postcss": { + "version": "8.4.47", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/postcss" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "nanoid": "^3.3.7", + "picocolors": "^1.1.0", + "source-map-js": "^1.2.1" + }, + "engines": { + "node": "^10 || ^12 || >=14" + } + }, + "node_modules/postcss-import": { + "version": "15.1.0", + "license": "MIT", + "dependencies": { + "postcss-value-parser": "^4.0.0", + "read-cache": "^1.0.0", + "resolve": "^1.1.7" + }, + "engines": { + "node": ">=14.0.0" + }, + "peerDependencies": { + "postcss": "^8.0.0" + } + }, + "node_modules/postcss-js": { + "version": "4.0.1", + "license": "MIT", + "dependencies": { + "camelcase-css": "^2.0.1" + }, + "engines": { + "node": "^12 || ^14 || >= 16" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + "peerDependencies": { + "postcss": "^8.4.21" + } + }, + "node_modules/postcss-load-config": { + "version": "3.1.4", + "dev": true, + "license": "MIT", + "dependencies": { + "lilconfig": "^2.0.5", + "yaml": "^1.10.2" + }, + "engines": { + "node": ">= 10" + }, + "funding": { + 
"type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + "peerDependencies": { + "postcss": ">=8.0.9", + "ts-node": ">=9.0.0" + }, + "peerDependenciesMeta": { + "postcss": { + "optional": true + }, + "ts-node": { + "optional": true + } + } + }, + "node_modules/postcss-nested": { + "version": "6.2.0", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "postcss-selector-parser": "^6.1.1" + }, + "engines": { + "node": ">=12.0" + }, + "peerDependencies": { + "postcss": "^8.2.14" + } + }, + "node_modules/postcss-safe-parser": { + "version": "6.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + "peerDependencies": { + "postcss": "^8.3.3" + } + }, + "node_modules/postcss-scss": { + "version": "4.0.9", + "dev": true, + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/postcss-scss" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "engines": { + "node": ">=12.0" + }, + "peerDependencies": { + "postcss": "^8.4.29" + } + }, + "node_modules/postcss-selector-parser": { + "version": "6.1.2", + "license": "MIT", + "dependencies": { + "cssesc": "^3.0.0", + "util-deprecate": "^1.0.2" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/postcss-value-parser": { + "version": "4.2.0", + "license": "MIT" + }, + "node_modules/prelude-ls": { + "version": "1.2.1", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/prettier": { + "version": "3.3.3", + "dev": true, + "license": "MIT", + "bin": { + "prettier": "bin/prettier.cjs" + }, + "engines": { + "node": 
">=14" + }, + "funding": { + "url": "https://github.com/prettier/prettier?sponsor=1" + } + }, + "node_modules/prettier-plugin-svelte": { + "version": "3.2.7", + "dev": true, + "license": "MIT", + "peerDependencies": { + "prettier": "^3.0.0", + "svelte": "^3.2.0 || ^4.0.0-next.0 || ^5.0.0-next.0" + } + }, + "node_modules/punycode": { + "version": "2.3.1", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/punycode.js": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode.js/-/punycode.js-2.3.1.tgz", + "integrity": "sha512-uxFIHU0YlHYhDQtV4R9J6a52SLx28BCjT+4ieh7IGbgwVJWO+km431c4yRlREUAsAmt/uMjQUyQHNEPf0M39CA==", + "engines": { + "node": ">=6" + } + }, + "node_modules/queue-microtask": { + "version": "1.2.3", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT" + }, + "node_modules/read-cache": { + "version": "1.0.0", + "license": "MIT", + "dependencies": { + "pify": "^2.3.0" + } + }, + "node_modules/readdirp": { + "version": "3.6.0", + "license": "MIT", + "dependencies": { + "picomatch": "^2.2.1" + }, + "engines": { + "node": ">=8.10.0" + } + }, + "node_modules/readdirp/node_modules/picomatch": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", + "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", + "engines": { + "node": ">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/resolve": { + "version": "1.22.8", + "license": "MIT", + "dependencies": { + "is-core-module": "^2.13.0", + "path-parse": "^1.0.7", + "supports-preserve-symlinks-flag": "^1.0.0" + }, + "bin": { + "resolve": "bin/resolve" + }, + "funding": { + "url": 
"https://github.com/sponsors/ljharb" + } + }, + "node_modules/resolve-from": { + "version": "4.0.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/reusify": { + "version": "1.0.4", + "license": "MIT", + "engines": { + "iojs": ">=1.0.0", + "node": ">=0.10.0" + } + }, + "node_modules/rollup": { + "version": "4.24.0", + "dev": true, + "license": "MIT", + "dependencies": { + "@types/estree": "1.0.6" + }, + "bin": { + "rollup": "dist/bin/rollup" + }, + "engines": { + "node": ">=18.0.0", + "npm": ">=8.0.0" + }, + "optionalDependencies": { + "@rollup/rollup-android-arm-eabi": "4.24.0", + "@rollup/rollup-android-arm64": "4.24.0", + "@rollup/rollup-darwin-arm64": "4.24.0", + "@rollup/rollup-darwin-x64": "4.24.0", + "@rollup/rollup-linux-arm-gnueabihf": "4.24.0", + "@rollup/rollup-linux-arm-musleabihf": "4.24.0", + "@rollup/rollup-linux-arm64-gnu": "4.24.0", + "@rollup/rollup-linux-arm64-musl": "4.24.0", + "@rollup/rollup-linux-powerpc64le-gnu": "4.24.0", + "@rollup/rollup-linux-riscv64-gnu": "4.24.0", + "@rollup/rollup-linux-s390x-gnu": "4.24.0", + "@rollup/rollup-linux-x64-gnu": "4.24.0", + "@rollup/rollup-linux-x64-musl": "4.24.0", + "@rollup/rollup-win32-arm64-msvc": "4.24.0", + "@rollup/rollup-win32-ia32-msvc": "4.24.0", + "@rollup/rollup-win32-x64-msvc": "4.24.0", + "fsevents": "~2.3.2" + } + }, + "node_modules/run-parallel": { + "version": "1.2.0", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "dependencies": { + "queue-microtask": "^1.2.2" + } + }, + "node_modules/sade": { + "version": "1.8.1", + "dev": true, + "license": "MIT", + "dependencies": { + "mri": "^1.1.0" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/semver": { + "version": "7.6.3", + "dev": true, + "license": "ISC", + "bin": { + 
"semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/set-cookie-parser": { + "version": "2.7.0", + "dev": true, + "license": "MIT" + }, + "node_modules/shebang-command": { + "version": "2.0.0", + "license": "MIT", + "dependencies": { + "shebang-regex": "^3.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/shebang-regex": { + "version": "3.0.0", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/signal-exit": { + "version": "4.1.0", + "license": "ISC", + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/sirv": { + "version": "2.0.4", + "dev": true, + "license": "MIT", + "dependencies": { + "@polka/url": "^1.0.0-next.24", + "mrmime": "^2.0.0", + "totalist": "^3.0.0" + }, + "engines": { + "node": ">= 10" + } + }, + "node_modules/source-map-js": { + "version": "1.2.1", + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/stopword": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/stopword/-/stopword-3.1.1.tgz", + "integrity": "sha512-TzJdIuzqJNo6IaFvrF3fYqu08uJ/0VMsdABl6d6+dt6daD7QeHJnMt9sPqhVIxEmNaaeE8+eandVPJv9RhAL5Q==" + }, + "node_modules/string-width": { + "version": "5.1.2", + "license": "MIT", + "dependencies": { + "eastasianwidth": "^0.2.0", + "emoji-regex": "^9.2.2", + "strip-ansi": "^7.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/string-width-cjs": { + "name": "string-width", + "version": "4.2.3", + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/string-width-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/string-width-cjs/node_modules/emoji-regex": 
{ + "version": "8.0.0", + "license": "MIT" + }, + "node_modules/string-width-cjs/node_modules/strip-ansi": { + "version": "6.0.1", + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-ansi": { + "version": "7.1.0", + "license": "MIT", + "dependencies": { + "ansi-regex": "^6.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/strip-ansi?sponsor=1" + } + }, + "node_modules/strip-ansi-cjs": { + "name": "strip-ansi", + "version": "6.0.1", + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-ansi-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-json-comments": { + "version": "3.1.1", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/sucrase": { + "version": "3.35.0", + "license": "MIT", + "dependencies": { + "@jridgewell/gen-mapping": "^0.3.2", + "commander": "^4.0.0", + "glob": "^10.3.10", + "lines-and-columns": "^1.1.6", + "mz": "^2.7.0", + "pirates": "^4.0.1", + "ts-interface-checker": "^0.1.9" + }, + "bin": { + "sucrase": "bin/sucrase", + "sucrase-node": "bin/sucrase-node" + }, + "engines": { + "node": ">=16 || 14 >=14.17" + } + }, + "node_modules/supports-color": { + "version": "7.2.0", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/supports-preserve-symlinks-flag": { + "version": "1.0.0", + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/svelte": { + "version": "4.2.19", + "license": "MIT", + "dependencies": { + "@ampproject/remapping": "^2.2.1", + "@jridgewell/sourcemap-codec": 
"^1.4.15", + "@jridgewell/trace-mapping": "^0.3.18", + "@types/estree": "^1.0.1", + "acorn": "^8.9.0", + "aria-query": "^5.3.0", + "axobject-query": "^4.0.0", + "code-red": "^1.0.3", + "css-tree": "^2.3.1", + "estree-walker": "^3.0.3", + "is-reference": "^3.0.1", + "locate-character": "^3.0.0", + "magic-string": "^0.30.4", + "periscopic": "^3.1.0" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/svelte-eslint-parser": { + "version": "0.41.1", + "dev": true, + "license": "MIT", + "dependencies": { + "eslint-scope": "^7.2.2", + "eslint-visitor-keys": "^3.4.3", + "espree": "^9.6.1", + "postcss": "^8.4.39", + "postcss-scss": "^4.0.9" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/ota-meshi" + }, + "peerDependencies": { + "svelte": "^3.37.0 || ^4.0.0 || ^5.0.0-next.191" + }, + "peerDependenciesMeta": { + "svelte": { + "optional": true + } + } + }, + "node_modules/svelte-eslint-parser/node_modules/eslint-scope": { + "version": "7.2.2", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "esrecurse": "^4.3.0", + "estraverse": "^5.2.0" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/svelte-eslint-parser/node_modules/eslint-visitor-keys": { + "version": "3.4.3", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/svelte-eslint-parser/node_modules/espree": { + "version": "9.6.1", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "acorn": "^8.9.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^3.4.1" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/svelte-floating-ui": { + "version": "1.5.8", + "resolved": 
"https://registry.npmjs.org/svelte-floating-ui/-/svelte-floating-ui-1.5.8.tgz", + "integrity": "sha512-dVvJhZ2bT+kQDHlE4Lep8t+sgEc0XD96fXLzAi2DDI2bsaegBbClxXVNMma0C2WsG+n9GJSYx292dTvA8CYRtw==", + "license": "MIT", + "dependencies": { + "@floating-ui/core": "^1.5.0", + "@floating-ui/dom": "^1.5.3" + } + }, + "node_modules/svelte-headlessui": { + "version": "0.0.45", + "resolved": "https://registry.npmjs.org/svelte-headlessui/-/svelte-headlessui-0.0.45.tgz", + "integrity": "sha512-YFFwCQxRvnoMtjLybR4ksDvT455JszFVYrnbWFV4HibcXyk4pZ6Tfs9Xd1UrjPftQuZ/cQG8+35GeZVZnZrnng==", + "license": "MIT", + "peerDependencies": { + "svelte": "^4.0.0 || ^5.0.0" + } + }, + "node_modules/svelte-hmr": { + "version": "0.16.0", + "dev": true, + "license": "ISC", + "engines": { + "node": "^12.20 || ^14.13.1 || >= 16" + }, + "peerDependencies": { + "svelte": "^3.19.0 || ^4.0.0" + } + }, + "node_modules/svelte-range-slider-pips": { + "version": "3.1.4", + "resolved": "https://registry.npmjs.org/svelte-range-slider-pips/-/svelte-range-slider-pips-3.1.4.tgz", + "integrity": "sha512-/dQjdsm9sAB3HoOHJjRqADpx3y7RVSB+djcZAicoOhZETXhoeqQrf6yoI9UTurj0pObyMQhSU7CbHagi5mrIPg==", + "dev": true, + "license": "MPL-2.0", + "peerDependencies": { + "svelte": "^4.2.7 || ^5.0.0" + } + }, + "node_modules/svelte-select": { + "version": "5.8.3", + "resolved": "https://registry.npmjs.org/svelte-select/-/svelte-select-5.8.3.tgz", + "integrity": "sha512-nQsvflWmTCOZjssdrNptzfD1Ok45hHVMTL5IHay5DINk7dfu5Er+8KsVJnZMJdSircqtR0YlT4YkCFlxOUhVPA==", + "license": "ISC", + "dependencies": { + "svelte-floating-ui": "1.5.8" + } + }, + "node_modules/svelte-transition": { + "version": "0.0.17", + "resolved": "https://registry.npmjs.org/svelte-transition/-/svelte-transition-0.0.17.tgz", + "integrity": "sha512-df3kJdJogRNDGiEwJSb7Pdk8/7ZTcQapHhtQ5VicaT/oUP/BNH5qR5Q3sKvptBvOD0uJSzjjMVC5oLwdgjUdRw==", + "peerDependencies": { + "svelte": "^3.59.1 || ^4.0.0 || ^5.0.0" + } + }, + "node_modules/tailwindcss": { + "version": "3.4.13", + 
"license": "MIT", + "dependencies": { + "@alloc/quick-lru": "^5.2.0", + "arg": "^5.0.2", + "chokidar": "^3.5.3", + "didyoumean": "^1.2.2", + "dlv": "^1.1.3", + "fast-glob": "^3.3.0", + "glob-parent": "^6.0.2", + "is-glob": "^4.0.3", + "jiti": "^1.21.0", + "lilconfig": "^2.1.0", + "micromatch": "^4.0.5", + "normalize-path": "^3.0.0", + "object-hash": "^3.0.0", + "picocolors": "^1.0.0", + "postcss": "^8.4.23", + "postcss-import": "^15.1.0", + "postcss-js": "^4.0.1", + "postcss-load-config": "^4.0.1", + "postcss-nested": "^6.0.1", + "postcss-selector-parser": "^6.0.11", + "resolve": "^1.22.2", + "sucrase": "^3.32.0" + }, + "bin": { + "tailwind": "lib/cli.js", + "tailwindcss": "lib/cli.js" + }, + "engines": { + "node": ">=14.0.0" + } + }, + "node_modules/tailwindcss/node_modules/postcss-load-config": { + "version": "4.0.2", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "lilconfig": "^3.0.0", + "yaml": "^2.3.4" + }, + "engines": { + "node": ">= 14" + }, + "peerDependencies": { + "postcss": ">=8.0.9", + "ts-node": ">=9.0.0" + }, + "peerDependenciesMeta": { + "postcss": { + "optional": true + }, + "ts-node": { + "optional": true + } + } + }, + "node_modules/tailwindcss/node_modules/postcss-load-config/node_modules/lilconfig": { + "version": "3.1.2", + "license": "MIT", + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/antonk52" + } + }, + "node_modules/tailwindcss/node_modules/yaml": { + "version": "2.5.1", + "license": "ISC", + "bin": { + "yaml": "bin.mjs" + }, + "engines": { + "node": ">= 14" + } + }, + "node_modules/text-table": { + "version": "0.2.0", + "dev": true, + "license": "MIT" + }, + "node_modules/thenify": { + "version": "3.3.1", + "license": "MIT", + "dependencies": { + "any-promise": "^1.0.0" + } + }, + "node_modules/thenify-all": { + "version": 
"1.6.0", + "license": "MIT", + "dependencies": { + "thenify": ">= 3.1.0 < 4" + }, + "engines": { + "node": ">=0.8" + } + }, + "node_modules/tiny-glob": { + "version": "0.2.9", + "dev": true, + "license": "MIT", + "dependencies": { + "globalyzer": "0.1.0", + "globrex": "^0.1.2" + } + }, + "node_modules/to-regex-range": { + "version": "5.0.1", + "license": "MIT", + "dependencies": { + "is-number": "^7.0.0" + }, + "engines": { + "node": ">=8.0" + } + }, + "node_modules/totalist": { + "version": "3.0.1", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/ts-interface-checker": { + "version": "0.1.13", + "license": "Apache-2.0" + }, + "node_modules/type-check": { + "version": "0.4.0", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/typed.js": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/typed.js/-/typed.js-2.1.0.tgz", + "integrity": "sha512-bDuXEf7YcaKN4g08NMTUM6G90XU25CK3bh6U0THC/Mod/QPKlEt9g/EjvbYB8x2Qwr2p6J6I3NrsoYaVnY6wsQ==" + }, + "node_modules/uc.micro": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/uc.micro/-/uc.micro-2.1.0.tgz", + "integrity": "sha512-ARDJmphmdvUk6Glw7y9DQ2bFkKBHwQHLi2lsaH6PPmz/Ka9sFOBsBluozhDltWmnv9u/cF6Rt87znRTPV+yp/A==" + }, + "node_modules/update-browserslist-db": { + "version": "1.1.1", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/browserslist" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/browserslist" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "escalade": "^3.2.0", + "picocolors": "^1.1.0" + }, + "bin": { + "update-browserslist-db": "cli.js" + }, + "peerDependencies": { + "browserslist": ">= 4.21.0" + } + }, + "node_modules/uri-js": { + "version": "4.4.1", + "dev": true, + "license": "BSD-2-Clause", + 
"dependencies": { + "punycode": "^2.1.0" + } + }, + "node_modules/util-deprecate": { + "version": "1.0.2", + "license": "MIT" + }, + "node_modules/vite": { + "version": "5.4.8", + "dev": true, + "license": "MIT", + "dependencies": { + "esbuild": "^0.21.3", + "postcss": "^8.4.43", + "rollup": "^4.20.0" + }, + "bin": { + "vite": "bin/vite.js" + }, + "engines": { + "node": "^18.0.0 || >=20.0.0" + }, + "funding": { + "url": "https://github.com/vitejs/vite?sponsor=1" + }, + "optionalDependencies": { + "fsevents": "~2.3.3" + }, + "peerDependencies": { + "@types/node": "^18.0.0 || >=20.0.0", + "less": "*", + "lightningcss": "^1.21.0", + "sass": "*", + "sass-embedded": "*", + "stylus": "*", + "sugarss": "*", + "terser": "^5.4.0" + }, + "peerDependenciesMeta": { + "@types/node": { + "optional": true + }, + "less": { + "optional": true + }, + "lightningcss": { + "optional": true + }, + "sass": { + "optional": true + }, + "sass-embedded": { + "optional": true + }, + "stylus": { + "optional": true + }, + "sugarss": { + "optional": true + }, + "terser": { + "optional": true + } + } + }, + "node_modules/vitefu": { + "version": "0.2.5", + "dev": true, + "license": "MIT", + "peerDependencies": { + "vite": "^3.0.0 || ^4.0.0 || ^5.0.0" + }, + "peerDependenciesMeta": { + "vite": { + "optional": true + } + } + }, + "node_modules/which": { + "version": "2.0.2", + "license": "ISC", + "dependencies": { + "isexe": "^2.0.0" + }, + "bin": { + "node-which": "bin/node-which" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/word-wrap": { + "version": "1.2.5", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/wrap-ansi": { + "version": "8.1.0", + "license": "MIT", + "dependencies": { + "ansi-styles": "^6.1.0", + "string-width": "^5.0.1", + "strip-ansi": "^7.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs": { + "name": 
"wrap-ansi", + "version": "7.0.0", + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/emoji-regex": { + "version": "8.0.0", + "license": "MIT" + }, + "node_modules/wrap-ansi-cjs/node_modules/string-width": { + "version": "4.2.3", + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/strip-ansi": { + "version": "6.0.1", + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/ansi-styles": { + "version": "6.2.1", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/yaml": { + "version": "1.10.2", + "dev": true, + "license": "ISC", + "engines": { + "node": ">= 6" + } + }, + "node_modules/yocto-queue": { + "version": "0.1.0", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + } + } +} diff --git a/chat/frontend/package.json b/chat/frontend/package.json new file mode 100644 index 0000000..a6a47eb --- /dev/null +++ b/chat/frontend/package.json @@ -0,0 +1,44 @@ +{ + "name": "frontend", + "version": "0.0.1", + "private": true, + "scripts": { + "dev": "vite dev", + "build": "vite build", + "preview": "vite preview", + "lint": "prettier --check . && eslint .", + "format": "prettier --write ." 
+ }, + "devDependencies": { + "@sveltejs/adapter-node": "^5.2.9", + "@sveltejs/kit": "^2.7.0", + "@sveltejs/vite-plugin-svelte": "^3.1.2", + "@types/eslint": "^9.6.0", + "eslint": "^9.0.0", + "eslint-config-prettier": "^9.1.0", + "eslint-plugin-svelte": "^2.36.0", + "globals": "^15.0.0", + "prettier": "^3.1.1", + "prettier-plugin-svelte": "^3.1.2", + "svelte": "^4.2.7", + "vite": "^5.4.8" + }, + "type": "module", + "dependencies": { + "@floating-ui/dom": "^1.6.12", + "autoprefixer": "^10.4.20", + "dompurify": "^3.1.7", + "lodash-es": "^4.17.21", + "markdown-it": "^14.1.0", + "markdown-it-footnote": "^4.0.0", + "marked": "^14.1.3", + "nouislider": "^15.8.1", + "postcss": "^8.4.47", + "stopword": "^3.1.1", + "svelte-headlessui": "^0.0.45", + "svelte-select": "^5.8.3", + "svelte-transition": "^0.0.17", + "tailwindcss": "^3.4.13", + "typed.js": "^2.1.0" + } +} diff --git a/chat/frontend/postcss.config.js b/chat/frontend/postcss.config.js new file mode 100644 index 0000000..2e7af2b --- /dev/null +++ b/chat/frontend/postcss.config.js @@ -0,0 +1,6 @@ +export default { + plugins: { + tailwindcss: {}, + autoprefixer: {}, + }, +} diff --git a/chat/frontend/src/app.css b/chat/frontend/src/app.css new file mode 100644 index 0000000..b91373e --- /dev/null +++ b/chat/frontend/src/app.css @@ -0,0 +1,13 @@ +@tailwind base; +@tailwind components; +@tailwind utilities; + +html, body { + height: 100%; +} + +body { + margin: 0; + font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, + Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif; +} \ No newline at end of file diff --git a/chat/frontend/src/app.html b/chat/frontend/src/app.html new file mode 100644 index 0000000..6866d5a --- /dev/null +++ b/chat/frontend/src/app.html @@ -0,0 +1,36 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + %sveltekit.head% + + +
%sveltekit.body%
+ + diff --git a/chat/frontend/src/hooks.server.js b/chat/frontend/src/hooks.server.js new file mode 100644 index 0000000..ef48f20 --- /dev/null +++ b/chat/frontend/src/hooks.server.js @@ -0,0 +1,17 @@ +/** @type {import('@sveltejs/kit').Handle} */ +export async function handle({ event, resolve }) { + const response = await resolve(event, { + transformPageChunk: ({ html }) => { + if (process.env.NODE_ENV === 'production') { + const host = event.url.host; + return html.replace( + '', + `` + ); + } + return html; + } + }); + + return response; +} \ No newline at end of file diff --git a/chat/frontend/src/lib/components/BaseInput.svelte b/chat/frontend/src/lib/components/BaseInput.svelte new file mode 100644 index 0000000..cafee27 --- /dev/null +++ b/chat/frontend/src/lib/components/BaseInput.svelte @@ -0,0 +1,380 @@ + + +
+
+
+
+ +
+ +
+ +
+ + + + +
+ {#if selectedLocations.length === 0 && selectedYearRange.length === 0 } + Filter op gemeente, provincie, ministerie of jaar + {:else} + {#if selectedLocations.length > 0} + in + {#if selectedLocations.length === 1} + + + + + {:else if selectedLocations.length === 2} + + + + + + of + + + + + {:else} + {#each selectedLocations.slice(0, -1) as location} + + + , + + {/each} + of + + + + + {/if} + {/if} + + {#if selectedYearRange.length > 0 } + + tussen + + en + + + + + + {/if} + {/if} +
+
+ + +
+ +
+
+
+
+ + {#if showLocationFilter} +
+ showLocationFilter = false} + /> +
+ {/if} + + {#if showYearFilter} +
+ showYearFilter = false} + /> +
+ {/if} +
\ No newline at end of file diff --git a/chat/frontend/src/lib/components/Chat.svelte b/chat/frontend/src/lib/components/Chat.svelte new file mode 100644 index 0000000..447b6b8 --- /dev/null +++ b/chat/frontend/src/lib/components/Chat.svelte @@ -0,0 +1,733 @@ + + + +
+
+ {#if messages && messages.length > 0} +
+ + {#each messages as message} +
+
+ {#if message.role === 'user'} +

{message.content}

+ {:else if message.role === 'system'} +
toggleStatusMessage(message.id)} + > +
    + {#if expandedStatusMessages.has(message.id)} + {#each formatStatusMessage(message.content) as line} +
  • {line}
  • + {/each} + {:else} +
  • {formatStatusMessage(message.content).pop()}
  • + {/if} +
+ + + +
+ {:else if message.role === 'assistant'} + {@html insertClickableCitations(message.content, message.type)} + +
+
+ + + + + {#if copyPopupMessage != null && copyPopupMessage.id === message.id} +
+ {copyPopupMessage.text} +
+ {/if} +
+ +
+
+ + + + Wat vond je van dit antwoord? + +
+ + {#if feedbackPopupMessage != null && feedbackPopupMessage.id === message.id} + + {/if} +
+ +
+ {/if} +
+
+ {/each} + + {#if currentStatusMessage} +
+
+
+ {#if isLoading} +
+ + + +
+ {/if} +
    + {#each formatStatusMessage(streamedStatusContent) as line} +
  • {line}...
  • + {/each} +
+
+
+
+ {/if} + + {#if currentMessage} +
+
+ {@html insertClickableCitations(streamedContent)} +
+
+ {/if} +
+ {/if} +
+ + + {#if showNotification} +
+ {notificationMessage} +
+ {/if} + +
+ diff --git a/chat/frontend/src/lib/components/ChatInput.svelte b/chat/frontend/src/lib/components/ChatInput.svelte new file mode 100644 index 0000000..458e0a4 --- /dev/null +++ b/chat/frontend/src/lib/components/ChatInput.svelte @@ -0,0 +1,54 @@ + + + +
+ {#if isLoading} + + {:else} + + {/if} +
+
\ No newline at end of file diff --git a/chat/frontend/src/lib/components/ChatLayout.svelte b/chat/frontend/src/lib/components/ChatLayout.svelte new file mode 100644 index 0000000..2bb2e12 --- /dev/null +++ b/chat/frontend/src/lib/components/ChatLayout.svelte @@ -0,0 +1,541 @@ + + + + {sessionName} + + +
+ +
+
+ +
+ +
+
+
+ + + {#if documents.length > 0} +
+
+ isDocumentsPanelOpen = !isDocumentsPanelOpen} + /> +
+
+ {/if} +
+ + + + diff --git a/chat/frontend/src/lib/components/Citation.svelte b/chat/frontend/src/lib/components/Citation.svelte new file mode 100644 index 0000000..a253e5f --- /dev/null +++ b/chat/frontend/src/lib/components/Citation.svelte @@ -0,0 +1,11 @@ + + + \ No newline at end of file diff --git a/chat/frontend/src/lib/components/CloneSessionButton.svelte b/chat/frontend/src/lib/components/CloneSessionButton.svelte new file mode 100644 index 0000000..c6ade78 --- /dev/null +++ b/chat/frontend/src/lib/components/CloneSessionButton.svelte @@ -0,0 +1,54 @@ + + + \ No newline at end of file diff --git a/chat/frontend/src/lib/components/Document.svelte b/chat/frontend/src/lib/components/Document.svelte new file mode 100644 index 0000000..9ee59d2 --- /dev/null +++ b/chat/frontend/src/lib/components/Document.svelte @@ -0,0 +1,386 @@ + + + +
+
+

+ +

+
+
+ + + + +
+ +
+ + + + {doc.data.location_name || 'Onbekend'} +
+ +
+ + + + +
+ +
+ + + + {formatSource(doc.data.source)} +
+ + {#if $showAllScores} +
+ + + + {doc.score ? `${(doc.score * 100).toFixed(1)}%` : 'Onbekend'} +
+ +
+ + + + {doc.rerank_score ? `${(doc.rerank_score * 100).toFixed(1)}%` : 'Onbekend'} +
+ {/if} +
+
+ +
+ +
+ +
+
+ {#if doc.id} + + + + Is dit document relevant? + + {/if} +
+ + {#if feedbackPopupVisible} + + {/if} + +
+ {#if doc.data.url} + + {doc.data.type || 'Onbekend'} + + + + + + {/if} +
+
+
diff --git a/chat/frontend/src/lib/components/Documents.svelte b/chat/frontend/src/lib/components/Documents.svelte new file mode 100644 index 0000000..15a8e8c --- /dev/null +++ b/chat/frontend/src/lib/components/Documents.svelte @@ -0,0 +1,192 @@ + + +
+ + +
+
+ + {#if citationText} +
+ +
+ {:else} +
+ + + + + + + + + + + + +
+ {/if} + + {#if citationText} +
"{citationText}"
+ {:else} +
Documenten
+ {/if} + + {#if isDocumentsPanelOpen} +
+ +
+ {/if} +
+
+ +
+
+ {#if sortedSelectedDocuments && sortedSelectedDocuments.length > 0} + {#each sortedSelectedDocuments as doc, index} +
+ +
+ {/each} + {:else if sortedDocuments.length > 0} + {#each sortedDocuments as doc, index} +
+ +
+ {/each} + {:else} + + {/if} +
+
+
+ + diff --git a/chat/frontend/src/lib/components/DualRangeSlider.svelte b/chat/frontend/src/lib/components/DualRangeSlider.svelte new file mode 100644 index 0000000..da7371e --- /dev/null +++ b/chat/frontend/src/lib/components/DualRangeSlider.svelte @@ -0,0 +1,74 @@ + + +
+ +
+ + +
handleMouseDown(e, 'left')} + >
+ + +
handleMouseDown(e, 'right')} + >
+
+ + \ No newline at end of file diff --git a/chat/frontend/src/lib/components/Header.svelte b/chat/frontend/src/lib/components/Header.svelte new file mode 100644 index 0000000..489a8a3 --- /dev/null +++ b/chat/frontend/src/lib/components/Header.svelte @@ -0,0 +1,288 @@ + + +
+
+ +
+
+
+ +
+ +
+ +
+
+
+ +{#if showFeedbackModal} +
+
+ {#if showThankYou} +
+ +

Bedankt voor je feedback!

+

We waarderen je input.

+ +

+ Wil je een verhaal maken met behulp van Bron chat, of + heb je meer vragen over deze tool? +

+ +

Neem dan contact met ons op

+

Joost van de Loo

+

tel: 06-50733904

+

joostvandeloo@svdjincubator.nl

+
+ {:else} +
+

Hebben wij je vraag kunnen beantwoorden?

+ +
+
+
+ +
+
+ + +
+
+ + +
+ + +
+ {/if} +
+
+{/if} + + +{#if showNoticeModal} +
+
+
+

Wil je onze VIP gast zijn?

+ +
+ +
+

Op dinsdag 18 februari 2025 lanceren we Bron chat. Bij Open State Foundation, met een avond en programma speciaal voor nuchtere onderzoekers:

+ +
+
    +
  • Bron chat demo: met voorbeeld uit de praktijk
  • +
  • Kritische Reviewer Carrousel: met 10 lokale en landelijke journalisten
  • +
  • Q&A: met makers van Bron chat
  • +
+
+ +
+

+ Locatie en tijd: Marineterrein in Amsterdam, 19:30 - 21:00 (inloop vanaf 19:00). +

+
+
+

Wil je erbij zijn? Wil je iets delen, heb je een idee of een vraag? Neem contact op met

+

Joost van de Loo, 06-50733904, joostvandeloo@svdjincubator.nl

+
+
+
+
+{/if} \ No newline at end of file diff --git a/chat/frontend/src/lib/components/InputContainer.svelte b/chat/frontend/src/lib/components/InputContainer.svelte new file mode 100644 index 0000000..54f04d6 --- /dev/null +++ b/chat/frontend/src/lib/components/InputContainer.svelte @@ -0,0 +1,49 @@ + + +
+
+ + +
+
\ No newline at end of file diff --git a/chat/frontend/src/lib/components/LocationFilter.svelte b/chat/frontend/src/lib/components/LocationFilter.svelte new file mode 100644 index 0000000..c97453a --- /dev/null +++ b/chat/frontend/src/lib/components/LocationFilter.svelte @@ -0,0 +1,106 @@ + + +
+
+

Locatie(s)

+ +
+