SDS Library (Binder) Implementation Plan
Overview
The SDS Library feature provides companies with a complete view of all their Safety Data Sheets (SDSs), including:
- Standalone SDSs - Bulk imported SDSs not tied to specific inventory items
- Inventory-attached SDSs - SDSs linked to chemical inventory items
- Bulk upload capability - Import multiple SDSs at once
- Version management - Upload new versions of existing SDSs
Data Model Analysis
The existing data model fully supports this requirement:
Key Tables
- `chemiq_sds_documents` - Global SDS repository storing the actual SDS documents
- `chemiq_company_sds_mappings` - Links companies to SDSs with:
  - `mapped_to_inventory_count` - Tracks how many inventory items use this SDS
  - When `mapped_to_inventory_count = 0`, the SDS is "standalone" (not tied to inventory)
  - When `mapped_to_inventory_count > 0`, the SDS is attached to inventory items
- `chemiq_company_product_catalog` - Has `current_sds_id` for inventory-attached SDSs
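Concretely, "standalone" is just a filter on the mapping table. A minimal sketch, assuming a SQLAlchemy `CompanySDSMapping` model mirroring `chemiq_company_sds_mappings` (the model and session names are illustrative, not existing code):

from uuid import UUID
from typing import List
from sqlalchemy.orm import Session

def standalone_sds_ids(db: Session, company_id: UUID) -> List[UUID]:
    """IDs of SDSs in the company's library with no inventory attachments (sketch)."""
    rows = (
        db.query(CompanySDSMapping.sds_id)  # assumed ORM model for the mapping table
        .filter(
            CompanySDSMapping.company_id == company_id,
            CompanySDSMapping.mapped_to_inventory_count == 0,
        )
        .all()
    )
    return [row.sds_id for row in rows]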
No Schema Changes Required
The current architecture already supports:
- Standalone SDSs via `CompanySDSMapping` with `mapped_to_inventory_count = 0`
- Version tracking via `previous_version_id` and `superseded_by_id`
- Bulk operations via the existing upload infrastructure
Implementation Tasks
Phase 1: Backend API Enhancements
1.1 Enhanced SDS Listing Endpoint
File: tellus-ehs-hazcom-service/app/api/v1/chemiq/sds.py
Add new endpoint for SDS Library with enhanced filtering:
@router.get("/library", response_model=SDSLibraryListResponse)
async def list_sds_library(
page: int = Query(1, ge=1),
page_size: int = Query(25, ge=1, le=100),
search: Optional[str] = Query(None, description="Search by product name, manufacturer"),
source_filter: Optional[str] = Query(None, description="all, standalone, inventory_attached"),
review_status: Optional[str] = Query(None, description="pending, reviewed, approved, flagged"),
sort_by: Optional[str] = Query("created_at", description="product_name, manufacturer, revision_date, created_at"),
sort_order: Optional[str] = Query("desc", description="asc or desc"),
ctx: UserContext = Depends(get_user_context),
db: Session = Depends(get_db)
):
"""
List all SDSs in company's library with filtering and sorting.
Returns both standalone SDSs and inventory-attached SDSs.
"""
New Response Schema Fields:
- `is_standalone` - Boolean indicating whether the SDS has no inventory attachments
- `inventory_attachment_count` - Number of inventory items using this SDS
- `review_status` - Company-specific review status
- `first_mapped_at` - When the SDS was added to the company's library
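The `source_filter` parameter reduces to a single condition on the mapping table; a sketch, using the same assumed `CompanySDSMapping` model as above and applied inside the repository query:

def apply_source_filter(query, source_filter):
    """Translate the source_filter query param into a WHERE clause (sketch)."""
    if source_filter == "standalone":
        return query.filter(CompanySDSMapping.mapped_to_inventory_count == 0)
    if source_filter == "inventory_attached":
        return query.filter(CompanySDSMapping.mapped_to_inventory_count > 0)
    return query  # "all" or None: no extra condition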
1.2 Bulk Upload Endpoints
For large SDS libraries (500+ files), we provide two upload strategies:
1.2.1 Chunked Batch Upload (Frontend-driven, up to 25 files per request)
File: tellus-ehs-hazcom-service/app/api/v1/chemiq/sds.py
@router.post("/bulk-upload", response_model=SDSBulkUploadResponse)
async def bulk_upload_sds(
files: List[UploadFile] = File(..., description="Multiple PDF files (max 25 per request)"),
ctx: UserContext = Depends(get_user_context),
db: Session = Depends(get_db)
):
"""
Bulk upload multiple SDS documents (chunked approach).
- Accepts up to 25 PDF files per request
- Frontend chunks large uploads into multiple requests
- Each file is validated and uploaded to S3
- Creates SDS records with 'pending' review status
- Queues all files for background parsing
- Returns summary of successful/failed uploads
"""
Response includes:
- `total_files` - Number of files submitted in this batch
- `successful_uploads` - List of successfully uploaded SDS IDs
- `failed_uploads` - List of failures with error messages
- `duplicates_found` - List of files that were duplicates (by checksum)
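Duplicate detection keys on a content checksum. A minimal sketch of the hashing step (standard library only; `get_by_checksum` is the repository lookup assumed later in this plan):

import hashlib

def file_checksum(content: bytes) -> str:
    """SHA-256 hex digest used as the dedup key for uploaded PDFs."""
    return hashlib.sha256(content).hexdigest()

# Sketch of use inside the per-file upload loop:
#   content = await upload_file.read()
#   if sds_repo.get_by_checksum(file_checksum(content)):
#       record the file as a duplicate instead of creating a new SDS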
1.2.2 ZIP Upload with Background Processing (For 500+ files)
File: tellus-ehs-hazcom-service/app/api/v1/chemiq/sds.py
@router.post("/bulk-upload-zip", response_model=SDSBulkUploadJobResponse)
async def bulk_upload_sds_zip(
file: UploadFile = File(..., description="ZIP file containing PDF SDSs"),
ctx: UserContext = Depends(get_user_context),
db: Session = Depends(get_db)
):
"""
Upload ZIP file containing multiple SDS PDFs for background processing.
- Accepts ZIP file up to 500MB
- Uploads ZIP to S3 immediately
- Creates background job for extraction and processing
- Returns job_id for status polling
- User can close browser - processing continues
"""
@router.get("/bulk-upload-status/{job_id}", response_model=SDSBulkUploadJobStatus)
async def get_bulk_upload_status(
job_id: UUID,
ctx: UserContext = Depends(get_user_context),
db: Session = Depends(get_db)
):
"""
Get status of bulk upload job.
Returns:
- job_status: pending, processing, completed, failed
- total_files: Total PDFs found in ZIP
- processed_files: Number processed so far
- successful_uploads: Count of successful uploads
- failed_uploads: Count of failures
- errors: List of error details
"""
Upload Strategy Decision Matrix
| Scenario | Recommended Approach |
|---|---|
| < 50 files | Single /bulk-upload request |
| 50-500 files | Chunked uploads via frontend (25 files/batch) |
| 500+ files | ZIP upload with background processing |
New Database Table: chemiq_sds_bulk_upload_jobs
CREATE TABLE chemiq_sds_bulk_upload_jobs (
    job_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    company_id UUID NOT NULL REFERENCES core_data_companies(company_id),
    uploaded_by_user_id UUID REFERENCES core_data_users(user_id),

    -- File info
    s3_bucket VARCHAR(100) NOT NULL,
    s3_key TEXT NOT NULL,
    file_name VARCHAR(255) NOT NULL,
    file_size BIGINT,

    -- Job status
    job_status VARCHAR(20) NOT NULL DEFAULT 'pending', -- pending, extracting, processing, completed, failed

    -- Progress tracking
    total_files INTEGER DEFAULT 0,
    processed_files INTEGER DEFAULT 0,
    successful_uploads INTEGER DEFAULT 0,
    failed_uploads INTEGER DEFAULT 0,
    duplicates_found INTEGER DEFAULT 0,

    -- Results
    results JSONB, -- detailed per-file results
    error_message TEXT,

    -- Timestamps
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    started_at TIMESTAMP WITH TIME ZONE,
    completed_at TIMESTAMP WITH TIME ZONE
);

CREATE INDEX idx_bulk_upload_jobs_company ON chemiq_sds_bulk_upload_jobs(company_id);
CREATE INDEX idx_bulk_upload_jobs_status ON chemiq_sds_bulk_upload_jobs(job_status);
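The matching ORM model, a sketch of the `SDSBulkUploadJob` model listed in the file summary (columns mirror the DDL above; the foreign-key declarations and `Base` import path are assumptions):

import uuid
from sqlalchemy import BigInteger, Column, DateTime, Integer, String, Text, func
from sqlalchemy.dialects.postgresql import JSONB, UUID
from app.db.base import Base  # assumed declarative base; FKs omitted for brevity

class SDSBulkUploadJob(Base):
    __tablename__ = "chemiq_sds_bulk_upload_jobs"

    job_id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    company_id = Column(UUID(as_uuid=True), nullable=False)
    uploaded_by_user_id = Column(UUID(as_uuid=True))
    s3_bucket = Column(String(100), nullable=False)
    s3_key = Column(Text, nullable=False)
    file_name = Column(String(255), nullable=False)
    file_size = Column(BigInteger)
    job_status = Column(String(20), nullable=False, default="pending")
    total_files = Column(Integer, default=0)
    processed_files = Column(Integer, default=0)
    successful_uploads = Column(Integer, default=0)
    failed_uploads = Column(Integer, default=0)
    duplicates_found = Column(Integer, default=0)
    results = Column(JSONB)
    error_message = Column(Text)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    started_at = Column(DateTime(timezone=True))
    completed_at = Column(DateTime(timezone=True))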
1.3 Upload New SDS Version Endpoint
File: tellus-ehs-hazcom-service/app/api/v1/chemiq/sds.py
@router.post("/{sds_id}/new-version", response_model=SDSDocumentResponse)
async def upload_new_sds_version(
sds_id: UUID,
file: UploadFile = File(..., description="New version PDF"),
revision_date: date = Form(..., description="New revision date"),
revision_number: Optional[str] = Form(None),
ctx: UserContext = Depends(get_user_context),
db: Session = Depends(get_db)
):
"""
Upload a new version of an existing SDS document.
- Links new version to previous via version tracking
- Updates all inventory items using old SDS to use new version
- Marks old SDS as superseded (is_current=False)
- Creates change log entry
"""
1.4 SDS Detail Endpoint Enhancement
File: tellus-ehs-hazcom-service/app/api/v1/chemiq/sds.py
Add a `GET /{sds_id}/details` endpoint that extends the existing `GET /{sds_id}` response with:
- Version history (previous versions list)
- Inventory items using this SDS
- Company-specific metadata (review status, notes)
@router.get("/{sds_id}/details", response_model=SDSLibraryDetailResponse)
async def get_sds_library_details(
sds_id: UUID,
ctx: UserContext = Depends(get_user_context),
db: Session = Depends(get_db)
):
"""
Get comprehensive SDS details for library view.
Includes:
- Full SDS metadata
- Hazard info (Section 2)
- Composition (Section 3)
- Version history
- Inventory items using this SDS
- Company-specific review status and notes
"""
Phase 2: Repository Layer Updates
2.1 SDS Repository Enhancements
File: tellus-ehs-hazcom-service/app/db/repositories/chemiq/sds_repository.py
Add new methods to ChemIQSDSRepository:
def list_library(
    self,
    company_id: UUID,
    search: Optional[str] = None,
    source_filter: Optional[str] = None,  # 'all', 'standalone', 'inventory_attached'
    review_status: Optional[str] = None,
    sort_by: str = "created_at",
    sort_order: str = "desc",
    page: int = 1,
    page_size: int = 25,
) -> Tuple[List[SDSLibraryItem], int]:
    """List all company SDSs with filtering and sorting."""

def get_version_history(self, sds_id: UUID) -> List[SDSDocument]:
    """Get all versions of an SDS document."""

def get_inventory_using_sds(
    self,
    sds_id: UUID,
    company_id: UUID,
) -> List[ChemIQInventory]:
    """Get all inventory items using this SDS."""
2.2 Company SDS Mapping Repository Updates
Add methods to maintain `mapped_to_inventory_count`:

def increment_inventory_count(self, company_id: UUID, sds_id: UUID) -> None:
    """Increment mapped_to_inventory_count when an SDS is attached to inventory."""

def decrement_inventory_count(self, company_id: UUID, sds_id: UUID) -> None:
    """Decrement mapped_to_inventory_count when an SDS is detached from inventory."""
Phase 3: Service Layer Updates
3.1 SDS Service Enhancements
File: tellus-ehs-hazcom-service/app/services/chemiq/sds_service.py
Add new methods:
async def bulk_upload_sds(
    self,
    company_id: UUID,
    user_id: UUID,
    files: List[UploadFile],
) -> SDSBulkUploadResult:
    """
    Process a bulk SDS upload. For each file:
    1. Validate file type and size
    2. Calculate checksum for deduplication
    3. Upload to S3
    4. Create the SDS document record
    5. Create the company mapping (standalone - mapped_to_inventory_count=0)
    6. Queue for parsing
    """

async def upload_new_version(
    self,
    company_id: UUID,
    user_id: UUID,
    sds_id: UUID,
    file: UploadFile,
    revision_date: date,
    revision_number: Optional[str],
) -> SDSDocument:
    """
    Upload a new version of an existing SDS.
    1. Validate the file
    2. Upload to S3
    3. Create a new SDS document with previous_version_id
    4. Update the old SDS with superseded_by_id and is_current=False
    5. Update all CompanyProductCatalog entries pointing to the old SDS
    6. Create a change log entry
    7. Queue the new version for parsing
    """

def get_library_details(
    self,
    company_id: UUID,
    sds_id: UUID,
) -> SDSLibraryDetails:
    """Get comprehensive SDS details including version history and usage."""
Phase 4: Schema Updates
4.1 New Response Schemas
File: tellus-ehs-hazcom-service/app/schemas/chemiq/sds.py
class SDSLibraryListItem(BaseModel):
    """SDS Library list item with inventory attachment info."""
    sds_id: UUID
    product_name: str
    manufacturer: str
    revision_date: date
    revision_number: Optional[str]
    document_language: str
    sds_parsed: bool
    parse_confidence: Optional[float]
    # Library-specific fields
    is_standalone: bool
    inventory_attachment_count: int
    review_status: str
    first_mapped_at: datetime
    signal_word: Optional[str]  # from hazard info
    pictograms: Optional[List[str]]  # from hazard info

class SDSLibraryListResponse(BaseModel):
    """Paginated SDS library response."""
    items: List[SDSLibraryListItem]
    total: int
    page: int
    page_size: int
    total_pages: int

class SDSVersionHistoryItem(BaseModel):
    """SDS version history item."""
    sds_id: UUID
    revision_date: date
    revision_number: Optional[str]
    is_current: bool
    created_at: datetime

class SDSLibraryDetailResponse(SDSDocumentDetailResponse):
    """Extended SDS details for the library view."""
    is_standalone: bool
    inventory_attachment_count: int
    inventory_items: List[InventoryAttachmentItem]
    version_history: List[SDSVersionHistoryItem]
    review_status: str
    internal_notes: Optional[str]
    reviewed_by: Optional[str]
    reviewed_at: Optional[datetime]

class SDSBulkUploadResult(BaseModel):
    """Single file upload result."""
    filename: str
    success: bool
    sds_id: Optional[UUID]
    error_message: Optional[str]
    is_duplicate: bool = False

class SDSBulkUploadResponse(BaseModel):
    """Bulk upload response."""
    total_files: int
    successful_uploads: int
    failed_uploads: int
    duplicates_found: int
    results: List[SDSBulkUploadResult]
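`total_pages` is derived rather than stored; for example (helper name hypothetical):

import math

def page_meta(total: int, page: int, page_size: int) -> dict:
    """Derive the pagination fields of SDSLibraryListResponse (sketch)."""
    return {
        "total": total,
        "page": page,
        "page_size": page_size,
        "total_pages": math.ceil(total / page_size) if page_size else 0,
    }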
Phase 5: Frontend Implementation
5.1 SDS Library Page
File: tellus-ehs-hazcom-ui/src/pages/chemiq/sds-library/index.tsx
Main SDS Library page with:
- Header section: Title, bulk upload button
- Filters bar: Search, source filter (All/Standalone/Inventory), review status
- Data table: Sortable columns (Product, Manufacturer, Revision Date, Status, Inventory Count)
- Pagination: Standard pagination controls
// Route: /chemiq/sds-library
export function SDSLibraryPage() {
  // State for filters, pagination, and sorting
  // Fetch SDS library data
  // Render the header, filters, table, and pagination
}
5.2 SDS Library Components
Files:
- `SDSLibraryFilters.tsx` - Filter bar with search and dropdowns
- `SDSLibraryTable.tsx` - Data table with sortable headers
- `SDSBulkUploadModal.tsx` - Modal for bulk upload with progress
- `SDSVersionUploadModal.tsx` - Modal for uploading a new version
5.2.1 Bulk Upload Modal with Chunked Upload & Progress
// SDSBulkUploadModal.tsx - Key implementation details
interface UploadProgress {
  totalFiles: number;
  uploadedFiles: number;
  successCount: number;
  failCount: number;
  duplicateCount: number;
  currentBatch: number;
  totalBatches: number;
  status: 'idle' | 'uploading' | 'completed' | 'error';
  errors: Array<{ filename: string; error: string }>;
}
const BATCH_SIZE = 25; // files per API request

// Split the selected files into batches of at most `size` items.
const chunkArray = <T,>(items: T[], size: number): T[][] => {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
};

const handleBulkUpload = async (files: File[]) => {
  const batches = chunkArray(files, BATCH_SIZE);
  setProgress({
    totalFiles: files.length,
    uploadedFiles: 0,
    successCount: 0,
    failCount: 0,
    duplicateCount: 0,
    currentBatch: 0,
    totalBatches: batches.length,
    status: 'uploading',
    errors: []
  });

  for (let i = 0; i < batches.length; i++) {
    const batch = batches[i];
    setProgress(prev => ({ ...prev, currentBatch: i + 1 }));
    try {
      const result = await sdsLibraryApi.bulkUpload(batch);
      setProgress(prev => ({
        ...prev,
        uploadedFiles: prev.uploadedFiles + batch.length,
        successCount: prev.successCount + result.successful_uploads,
        failCount: prev.failCount + result.failed_uploads,
        duplicateCount: prev.duplicateCount + result.duplicates_found,
        errors: [
          ...prev.errors,
          ...result.results
            .filter(r => !r.success)
            .map(r => ({ filename: r.filename, error: r.error_message ?? 'Upload failed' }))
        ]
      }));
    } catch (error) {
      // Mark the entire batch as failed and continue with the next one.
      setProgress(prev => ({
        ...prev,
        uploadedFiles: prev.uploadedFiles + batch.length,
        failCount: prev.failCount + batch.length,
        errors: [...prev.errors, ...batch.map(f => ({ filename: f.name, error: 'Batch upload failed' }))]
      }));
    }
  }

  setProgress(prev => ({ ...prev, status: 'completed' }));
};
UI Features:
- Drag-and-drop zone for file selection
- File count display before upload starts
- Progress bar showing overall progress
- Batch progress indicator ("Batch 3 of 20")
- Real-time success/fail/duplicate counters
- Error list with filename and error message
- "Upload More" button after completion
- Cancel button (stops after current batch)
5.2.2 ZIP Upload Option (for 500+ files)
// For very large uploads, offer a ZIP option.
// Assumes the UploadProgress status union is extended with
// 'uploading-zip' and 'processing-background' for this flow.
const handleZipUpload = async (zipFile: File) => {
  setProgress(prev => ({ ...prev, status: 'uploading-zip' }));
  const { job_id } = await sdsLibraryApi.bulkUploadZip(zipFile);

  // Switch to polling mode.
  setJobId(job_id);
  setProgress(prev => ({ ...prev, status: 'processing-background' }));

  // Poll for status every 5 seconds.
  const pollInterval = setInterval(async () => {
    const status = await sdsLibraryApi.getBulkUploadStatus(job_id);
    setProgress(prev => ({
      ...prev,
      totalFiles: status.total_files,
      uploadedFiles: status.processed_files,
      successCount: status.successful_uploads,
      failCount: status.failed_uploads,
      duplicateCount: status.duplicates_found,
      status: status.job_status === 'completed' ? 'completed' : 'processing-background'
    }));

    if (status.job_status === 'completed' || status.job_status === 'failed') {
      clearInterval(pollInterval);
    }
  }, 5000);
};
5.3 SDS Detail Page Enhancement
File: tellus-ehs-hazcom-ui/src/pages/chemiq/sds-library/SDSDetailPage.tsx
SDS detail page showing:
- Info Card: Product name, manufacturer, revision info
- Hazard Card: Signal word, pictograms, H-codes, P-codes
- Composition Card: Ingredients table (from Section 3)
- Version History Card: List of all versions with download links
- Inventory Usage Card: List of inventory items using this SDS
- Actions Panel: Download PDF, Upload New Version, Update Review Status
5.4 Navigation Update
File: tellus-ehs-hazcom-ui/src/components/layout/Sidebar.tsx
Add SDS Library to ChemIQ navigation:
{
  name: 'SDS Library',
  href: '/chemiq/sds-library',
  icon: FileText, // or a Library icon
}
Phase 6: Type Definitions
6.1 Frontend Types
File: tellus-ehs-hazcom-ui/src/types/index.ts
// SDS Library Types
export interface SDSLibraryItem {
  sds_id: string;
  product_name: string;
  manufacturer: string;
  revision_date: string;
  revision_number?: string;
  document_language: string;
  sds_parsed: boolean;
  parse_confidence?: number;
  is_standalone: boolean;
  inventory_attachment_count: number;
  review_status: 'pending' | 'reviewed' | 'approved' | 'flagged';
  first_mapped_at: string;
  signal_word?: string;
  pictograms?: string[];
}

export interface SDSLibraryFilters {
  search?: string;
  source_filter?: 'all' | 'standalone' | 'inventory_attached';
  review_status?: string;
  sort_by?: string;
  sort_order?: 'asc' | 'desc';
  page: number;
  page_size: number;
}

export interface SDSLibraryListResponse {
  items: SDSLibraryItem[];
  total: number;
  page: number;
  page_size: number;
  total_pages: number;
}

export interface SDSBulkUploadResult {
  filename: string;
  success: boolean;
  sds_id?: string;
  error_message?: string;
  is_duplicate: boolean;
}

export interface SDSBulkUploadResponse {
  total_files: number;
  successful_uploads: number;
  failed_uploads: number;
  duplicates_found: number;
  results: SDSBulkUploadResult[];
}
Phase 7: API Service Layer (Frontend)
File: tellus-ehs-hazcom-ui/src/services/api/sds-library.api.ts
// Client surface for the Phase 1 endpoints; the concrete sdsLibraryApi
// object implements this interface against the shared HTTP client.
export interface SDSLibraryApi {
  // List the SDS library with filters
  listLibrary(filters: SDSLibraryFilters): Promise<SDSLibraryListResponse>;
  // Get SDS details
  getDetails(sdsId: string): Promise<SDSLibraryDetailResponse>;
  // Bulk upload SDSs
  bulkUpload(files: File[]): Promise<SDSBulkUploadResponse>;
  // Upload a new version
  uploadNewVersion(sdsId: string, file: File, revisionDate: string, revisionNumber?: string): Promise<SDSDocumentResponse>;
  // Get a presigned download URL
  getDownloadUrl(sdsId: string): Promise<{ download_url: string }>;
  // Update the review status
  updateReviewStatus(sdsId: string, status: string, notes?: string): Promise<void>;
}
Phase 8: Background Worker for ZIP Processing
File: tellus-ehs-background-service/app/workers/sds_bulk_upload_worker.py
The background service needs a worker to process ZIP uploads asynchronously.
8.1 ZIP Processing Worker
import hashlib
from pathlib import Path
from uuid import UUID

class SDSBulkUploadWorker:
    """
    Background worker for processing bulk SDS ZIP uploads.

    Workflow:
    1. Poll for pending jobs from chemiq_sds_bulk_upload_jobs
    2. Download the ZIP from S3
    3. Extract PDFs to a temp directory
    4. Process each PDF (validate, upload to S3, create records)
    5. Create parse jobs for each SDS
    6. Update job status and progress
    """

    async def process_job(self, job_id: UUID):
        job = self.get_job(job_id)
        self.update_status(job_id, 'extracting')

        # Download and extract the ZIP
        zip_path = await self.download_from_s3(job.s3_bucket, job.s3_key)
        pdf_files = self.extract_zip(zip_path)
        self.update_job(job_id, total_files=len(pdf_files), job_status='processing')

        results = []
        for i, pdf_path in enumerate(pdf_files):
            try:
                # Process a single SDS
                sds_id = await self.process_single_sds(pdf_path, job.company_id)
                results.append({'filename': pdf_path.name, 'success': True, 'sds_id': str(sds_id)})
                # Create a parse job
                self.create_parse_job(sds_id, priority=3)
            except DuplicateSDSError:
                results.append({'filename': pdf_path.name, 'success': False, 'is_duplicate': True})
            except Exception as e:
                results.append({'filename': pdf_path.name, 'success': False, 'error': str(e)})

            # Update progress every 10 files
            if i % 10 == 0:
                self.update_progress(job_id, processed_files=i + 1)

        # Finalize the job
        self.finalize_job(job_id, results)

    async def process_single_sds(self, pdf_path: Path, company_id: UUID) -> UUID:
        """Process a single PDF file."""
        # 1. Read the file and calculate its checksum
        content = pdf_path.read_bytes()
        checksum = hashlib.sha256(content).hexdigest()

        # 2. Check for a duplicate
        existing = self.sds_repo.get_by_checksum(checksum)
        if existing:
            # The SDS already exists globally; just ensure a company mapping
            self.mapping_repo.ensure_mapping(company_id, existing.sds_id)
            raise DuplicateSDSError(existing.sds_id)

        # 3. Extract metadata from the filename (product_manufacturer_date.pdf)
        product_name, manufacturer, revision_date = self.parse_filename(pdf_path.name)

        # 4. Upload to S3
        s3_key = f"sds/{company_id}/{checksum}.pdf"
        self.s3_client.upload_file(pdf_path, s3_key)

        # 5. Create the SDS record
        sds = SDSDocument(
            product_name=product_name,
            manufacturer=manufacturer,
            revision_date=revision_date,
            s3_bucket=self.s3_bucket,
            s3_key=s3_key,
            file_checksum=checksum,
            file_size=len(content),
            source_type='bulk_upload',
            contributed_by_company_id=company_id,
        )
        self.db.add(sds)
        self.db.flush()  # ensure sds.sds_id is assigned before creating the mapping

        # 6. Create the company mapping (standalone - count=0)
        mapping = CompanySDSMapping(
            company_id=company_id,
            sds_id=sds.sds_id,
            mapped_to_inventory_count=0,
            review_status='pending',
        )
        self.db.add(mapping)
        self.db.commit()
        return sds.sds_id
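The worker references a `parse_filename` helper that this plan does not define; a minimal sketch, assuming the `product_manufacturer_date.pdf` convention from step 3 (non-matching names fall back to the bare stem, with metadata filled in later by parsing):

from datetime import date, datetime
from typing import Optional, Tuple

def parse_filename(name: str) -> Tuple[str, Optional[str], Optional[date]]:
    """Best-effort parse of 'product_manufacturer_YYYY-MM-DD.pdf' (sketch)."""
    stem = name.rsplit(".", 1)[0]
    parts = stem.split("_")
    if len(parts) >= 3:
        try:
            revision = datetime.strptime(parts[-1], "%Y-%m-%d").date()
            return "_".join(parts[:-2]), parts[-2], revision
        except ValueError:
            pass  # last segment is not a date; fall through to the fallback
    return stem, None, None  # fallback: whole stem as the product name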
8.2 Worker Scheduling
The worker should run continuously or on a schedule:
# Main worker loop
import asyncio

async def run_bulk_upload_worker():
    while True:
        # Check for pending jobs
        pending_jobs = get_pending_bulk_upload_jobs(limit=1)
        if pending_jobs:
            for job in pending_jobs:
                await worker.process_job(job.job_id)
        else:
            # No pending jobs; wait before checking again
            await asyncio.sleep(30)
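If more than one worker instance runs, jobs should be claimed atomically so two workers never pick up the same ZIP; a sketch using PostgreSQL's FOR UPDATE SKIP LOCKED (raw SQL for clarity; table and column names from the DDL above, helper name hypothetical):

from sqlalchemy import text

CLAIM_JOB_SQL = text("""
    UPDATE chemiq_sds_bulk_upload_jobs
    SET job_status = 'extracting', started_at = NOW()
    WHERE job_id = (
        SELECT job_id
        FROM chemiq_sds_bulk_upload_jobs
        WHERE job_status = 'pending'
        ORDER BY created_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING job_id
""")

def claim_next_job(db):
    """Atomically claim one pending job; returns its job_id or None (sketch)."""
    row = db.execute(CLAIM_JOB_SQL).first()
    db.commit()
    return row.job_id if row else None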
Audit: mapped_to_inventory_count Maintenance
Current State Analysis
The mapped_to_inventory_count field on CompanySDSMapping needs to be properly maintained when:
- SDS is attached to inventory - Increment count
- SDS is detached from inventory - Decrement count
- Inventory item is deleted - Decrement count
- SDS version is replaced - Transfer counts appropriately
Required Code Audit
Check these files for proper count maintenance:
- `ChemIQService.attach_sds_to_chemical()` - Should increment the count
- `ChemIQService.update_chemical()` - Should handle SDS changes
- `ChemIQService.delete_chemical()` - Should decrement the count
Implementation Checklist
- Audit `attach_sds_to_chemical` in `ChemIQService`
- Add a count increment when an SDS is attached
- Add a count decrement when an SDS is detached or its inventory item is deleted
- Handle SDS version replacement (the new version inherits the count)
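As a safety net after the audit, a reconciliation pass can recompute the counters from the catalog; a sketch in raw SQL (assumes `current_sds_id` on `chemiq_company_product_catalog` is the inventory-to-SDS link, as described in the data model):

from sqlalchemy import text

RECONCILE_COUNTS_SQL = text("""
    UPDATE chemiq_company_sds_mappings m
    SET mapped_to_inventory_count = c.actual
    FROM (
        SELECT company_id, current_sds_id AS sds_id, COUNT(*) AS actual
        FROM chemiq_company_product_catalog
        WHERE current_sds_id IS NOT NULL
        GROUP BY company_id, current_sds_id
    ) c
    WHERE m.company_id = c.company_id
      AND m.sds_id = c.sds_id
      AND m.mapped_to_inventory_count <> c.actual
""")

# A companion statement (not shown) zeroes mappings with no catalog rows;
# the pass can run periodically or once after the audit fixes land.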
File Summary
Backend Files to Create/Modify
| File | Action | Description |
|---|---|---|
| `app/api/v1/chemiq/sds.py` | Modify | Add library, bulk-upload, bulk-upload-zip, and new-version endpoints |
| `app/schemas/chemiq/sds.py` | Modify | Add library-specific schemas |
| `app/services/chemiq/sds_service.py` | Modify | Add bulk upload, new version, and library details methods |
| `app/db/repositories/chemiq/sds_repository.py` | Modify | Add list_library and version history methods |
| `app/db/models/chemiq_sds.py` | Modify | Add SDSBulkUploadJob model |
| `alembic/versions/xxx_add_sds_bulk_upload_jobs.py` | Create | Migration for the bulk upload jobs table |
Background Service Files
| File | Action | Description |
|---|---|---|
| `app/workers/sds_bulk_upload_worker.py` | Create | Worker for processing ZIP uploads |
| `app/db/repositories/sds_bulk_upload_repository.py` | Create | Repository for bulk upload jobs |
Frontend Files to Create/Modify
| File | Action | Description |
|---|---|---|
| `src/pages/chemiq/sds-library/index.tsx` | Create | Main SDS Library page |
| `src/pages/chemiq/sds-library/SDSDetailPage.tsx` | Create | SDS detail page with versions |
| `src/pages/chemiq/sds-library/components/SDSLibraryFilters.tsx` | Create | Filter bar |
| `src/pages/chemiq/sds-library/components/SDSLibraryTable.tsx` | Create | Data table |
| `src/pages/chemiq/sds-library/components/SDSBulkUploadModal.tsx` | Create | Bulk upload modal with progress |
| `src/pages/chemiq/sds-library/components/SDSVersionUploadModal.tsx` | Create | Version upload modal |
| `src/services/api/sds-library.api.ts` | Create | API service layer |
| `src/types/index.ts` | Modify | Add SDS Library types |
| `src/components/layout/Sidebar.tsx` | Modify | Add navigation link |
| `src/App.tsx` (or routes file) | Modify | Add routes |
Testing Checklist
Backend Tests
- Bulk upload with valid PDFs
- Bulk upload with mixed valid/invalid files
- Bulk upload duplicate detection
- New version upload links versions correctly
- New version updates inventory items
- Library listing with filters
- Library listing pagination and sorting
- mapped_to_inventory_count accuracy
Frontend Tests
- SDS Library page renders
- Filters work correctly
- Sorting works on all columns
- Pagination works
- Bulk upload modal flow
- Version upload modal flow
- SDS detail page shows all sections
- Download PDF works
Implementation Order
Iteration 1: Core SDS Library View (MVP)
- Backend - Enhanced listing endpoint (`/library`)
- Backend - Repository `list_library()` method
- Backend - Schema updates for library responses
- Frontend - Main SDS Library page with table, filters, sorting, and pagination
- Frontend - Navigation link in the sidebar
Iteration 2: Bulk Upload (Chunked)
- Backend - `/bulk-upload` endpoint (25 files/batch)
- Backend - Bulk upload service method
- Frontend - Bulk upload modal with progress tracking
- Backend - Audit and fix `mapped_to_inventory_count` maintenance
Iteration 3: SDS Detail & Version Management
- Backend - `/{sds_id}/details` endpoint
- Backend - `/{sds_id}/new-version` endpoint
- Frontend - SDS detail page with version history
- Frontend - Version upload modal
Iteration 4: ZIP Upload for Large Libraries (Optional)
- Backend - Create the `chemiq_sds_bulk_upload_jobs` table + migration
- Backend - `/bulk-upload-zip` and `/bulk-upload-status/{job_id}` endpoints
- Background Service - ZIP processing worker
- Frontend - ZIP upload option in the bulk upload modal
Iteration 5: Testing & Polish
- Testing - End-to-end testing of all flows
- Polish - Error handling, edge cases, UX improvements