Support for large file import, display upload progress #1812

gpapp wants to merge 44 commits into docmost:main
Conversation
- Changed page selection logic
- content/url and multiple combinations
- Fixed coderabbit recommended TS promise
- …ion instead of EDIT.
📝 Walkthrough

Replaces DATABASE_URL with explicit POSTGRES_* env vars, moves to a pnpm monorepo Docker build, adds a Web Share Target handler plus service worker, implements upload/import progress (client + server), enables streaming uploads across storage drivers, and adds a `file_tasks.progress` DB column with a migration.
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Client
    participant ServiceWorker as SW
    participant API as UploadAPI
    participant Server
    participant Storage
    participant DB
    User->>Client: Share file or choose upload
    alt Web Share POST
        User->>SW: POST /share-target (form-data)
        SW->>SW: store JSON in Cache 'share-target'
        SW-->>Client: redirect to /share-target (303)
        Client->>Client: read cached shared content
        Client->>API: POST /pages/import (composed content)
    else Direct Upload
        Client->>Client: show "Uploading..." notification
        Client->>API: uploadFile(file, onProgress)
    end
    API->>Server: receive multipart upload (stream)
    Server->>DB: create/update file_task (progress)
    loop upload in flight
        Server->>Client: progress events via response / websocket
        Client->>Client: update Progress UI
    end
    Server->>Storage: upload(filePath, fileStream)
    Storage->>Storage: stream to disk or S3
    Storage-->>Server: upload complete
    Server->>DB: update file_task (progress=100,status=completed)
    Server-->>API: attachment metadata
    API-->>Client: success
    Client->>Client: hide progress, show success
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/client/src/features/page/components/page-import-modal.tsx (2)
156-168: Missing optional chaining could cause a runtime error. On line 162, `err?.response.data.message` uses optional chaining on `response` but not on `data`. If `response` exists but `data` is undefined, this will throw.

🔎 Proposed fix

```diff
- message: err?.response.data.message,
+ message: err?.response?.data?.message,
```
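The fully chained access can be factored into a tiny helper. This is a sketch with a hypothetical name, not code from the PR; the `err` shape mirrors an Axios-style error object.

```typescript
// Hypothetical helper: safely extract a server-provided error message,
// falling back when any part of the response chain is missing.
function extractErrorMessage(err: any, fallback = "Upload failed"): string {
  return err?.response?.data?.message ?? fallback;
}
```

Centralizing this also keeps the two error handlers flagged in this review from drifting apart.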
242-261: Same optional chaining issue in the error handler. Line 250 has the same issue: `err.response?.data.message` should use optional chaining on `data` as well.

🔎 Proposed fix

```diff
- reason: err.response?.data.message,
+ reason: err.response?.data?.message,
```

apps/server/src/integrations/import/services/file-import-task.service.ts (2)
116-136: Task status not updated when the Confluence module is missing. When the Confluence EE module fails to load (lines 120-126), the method returns early without updating the task status to `Failed`. The user will see the task stuck in a pending/processing state.

🔎 Proposed fix

```diff
 try {
   // eslint-disable-next-line @typescript-eslint/no-require-imports
   ConfluenceModule = require('./../../../ee/confluence-import/confluence-import.service');
 } catch (err) {
   this.logger.error(
     'Confluence import requested but EE module not bundled in this build',
   );
+  await this.updateTaskStatus(
+    fileTaskId,
+    FileTaskStatus.Failed,
+    'Confluence import is not available in this build',
+  );
   return;
 }
```
381-467: Race condition on the shared counter `totalPagesProcessed`. The variable `totalPagesProcessed` is incremented concurrently by up to 5 tasks (line 453) without synchronization. While JavaScript is single-threaded, the `await` points allow interleaving, and the progress calculation (lines 455-461) may read stale or inconsistent values. This won't cause crashes, but it will produce inaccurate progress reports.

🔎 Proposed fix - track progress per level instead

```diff
+ let totalPagesProcessed = 0;
+
  for (const level of sortedLevels) {
    const levelPages = pagesByLevel.get(level)!;
    const levelPreparedPages: InsertablePage[] = [];

    const tasks = levelPages.map(([filePath, page]) =>
      processingLimit(async () => {
        // ... processing logic ...
        levelPreparedPages.push(insertablePage);
        allBacklinks.push(...backlinks);
-       totalPagesProcessed++;
-
-       const totalPages = pagesMap.size;
-       // Update DB progress (mapping 10% - 95% for page processing)
-       const progress = Math.min(
-         10 + Math.round((totalPagesProcessed / totalPages) * 85),
-         95,
-       );
-       await this.updateTaskProgress(fileTask.id, progress);
      }),
    );

    await Promise.all(tasks);
    preparedPagesByLevel.set(level, levelPreparedPages);
+
+   // Update progress after each level completes
+   totalPagesProcessed += levelPreparedPages.length;
+   const progress = Math.min(
+     10 + Math.round((totalPagesProcessed / pagesMap.size) * 85),
+     95,
+   );
+   await this.updateTaskProgress(fileTask.id, progress);
  }
```

This also reduces database writes from once per page to once per level.
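The 10%–95% mapping used in this progress calculation is a pure function, so it can be sanity-checked in isolation. The function name below is illustrative, not from the PR:

```typescript
// Map page-processing progress into the 10%-95% band: 10% is reserved for
// extraction before processing starts, and 95% caps the band so the final
// commit step can take it to 100%.
function importProgress(pagesProcessed: number, totalPages: number): number {
  if (totalPages <= 0) return 10; // nothing to process: stay at the floor
  return Math.min(10 + Math.round((pagesProcessed / totalPages) * 85), 95);
}
```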
🧹 Nitpick comments (12)
.env.example (2)
11-14: Remove quotes from example values to avoid confusion. The quotes around these values will be included literally when the file is sourced or parsed by some tools. In `.env` files, values typically don't need quotes unless they contain spaces.

🔎 Suggested fix

```diff
-POSTGRES_DB="docmost"
-POSTGRES_USER="docmost"
-POSTGRES_PASSWORD="docmost"
-POSTGRES_PASSWORD_URLENCODE="docmost"
+POSTGRES_DB=docmost
+POSTGRES_USER=docmost
+POSTGRES_PASSWORD=docmost
+POSTGRES_PASSWORD_URLENCODE=docmost
```

Based on static analysis hints from dotenv-linter.
14-14: Consider adding a comment explaining `POSTGRES_PASSWORD_URLENCODE`. This variable's purpose (a URL-encoded version of the password for DATABASE_URL construction) may not be immediately clear to users. A brief comment would improve documentation.

```diff
+# URL-encoded password for DATABASE_URL (needed if password contains special characters)
 POSTGRES_PASSWORD_URLENCODE="docmost"
```

docker-compose.yml (1)
17-25: Add defaults for MAIL/SMTP variables to prevent empty values. These environment variables reference `.env` vars without defaults. If the `.env` file doesn't define them, the container will receive empty strings, which could cause issues.

🔎 Suggested fix

```diff
-      MAIL_DRIVER: ${MAIL_DRIVER}
-      MAIL_FROM_ADDRESS: ${MAIL_FROM_ADDRESS}
-      MAIL_FROM_NAME: ${MAIL_FROM_NAME}
-
-      SMTP_HOST: ${SMTP_HOST}
-      SMTP_PORT: ${SMTP_PORT}
-      SMTP_USERNAME: ${SMTP_USERNAME}
-      SMTP_PASSWORD: ${SMTP_PASSWORD}
-      SMTP_SECURE: ${SMTP_SECURE}
+      MAIL_DRIVER: ${MAIL_DRIVER:-smtp}
+      MAIL_FROM_ADDRESS: ${MAIL_FROM_ADDRESS:-}
+      MAIL_FROM_NAME: ${MAIL_FROM_NAME:-Docmost}
+
+      SMTP_HOST: ${SMTP_HOST:-}
+      SMTP_PORT: ${SMTP_PORT:-587}
+      SMTP_USERNAME: ${SMTP_USERNAME:-}
+      SMTP_PASSWORD: ${SMTP_PASSWORD:-}
+      SMTP_SECURE: ${SMTP_SECURE:-false}
```

apps/server/src/database/migrations/20251228T184500-add_progress_to_file_tasks.ts (1)
3-8: Consider adding a NOT NULL constraint to the progress column. The progress column is added as nullable with a default of 0. Since progress tracking typically has a known state (0 for not started, 0-100 for in-progress), consider whether the column should be NOT NULL to enforce that progress always has a defined value.

🔎 Proposed change to add NOT NULL constraint

```diff
 export async function up(db: Kysely<any>): Promise<void> {
   await db.schema
     .alterTable('file_tasks')
-    .addColumn('progress', 'int4', (col) => col.defaultTo(0))
+    .addColumn('progress', 'int4', (col) => col.defaultTo(0).notNull())
     .execute();
 }
```

Note: If you make this change, also update the FileTasks type in `apps/server/src/database/types/db.d.ts` from `progress: Generated<number | null>` to `progress: Generated<number>` and regenerate types.

apps/server/src/core/attachment/services/attachment.service.ts (1)
53-85: Add a null safety check for `multiPartFile` access. Line 85 accesses `preparedFile.multiPartFile.file` without checking if `multiPartFile` is defined. While `prepareFile` should always return `multiPartFile`, the optional typing (`multiPartFile?:`) suggests it could be undefined in some contexts.

🔎 Proposed defensive check

```diff
 const preparedFile: PreparedFile = await prepareFile(filePromise, {
   skipBuffer: true,
 });
+
+if (!preparedFile.multiPartFile?.file) {
+  throw new BadRequestException('File stream not available for upload');
+}

 let isUpdate = false;
```

apps/client/src/features/page/services/page-service.ts (1)
131-131: Move the import to the top of the file. The `AxiosProgressEvent` import is placed mid-file, which breaks the conventional import organization. Moving it to the top with the other imports improves readability.

🔎 Suggested fix

Add this import at the top of the file with the other imports:

```ts
import { AxiosProgressEvent } from "axios";
```

Then remove the mid-file import on line 131.
apps/client/src/features/editor/components/video/upload-video-action.tsx (1)
10-64: Implementation is correct, but consider extracting shared logic. The video upload follows the same pattern as the image and attachment uploads. The upload notification flow (lines 12-64) is nearly identical across all three files. Consider extracting a shared utility like `createUploadWithProgress` that encapsulates the notification lifecycle, reducing duplication.

apps/server/src/integrations/storage/drivers/s3.driver.ts (1)
8-8: The `PutObjectCommand` import is now unused. After switching to `Upload` from `@aws-sdk/lib-storage`, the `PutObjectCommand` import is no longer used in this file and can be removed.

🔎 Proposed fix

```diff
 import {
   CopyObjectCommand,
   DeleteObjectCommand,
   GetObjectCommand,
   HeadObjectCommand,
   NoSuchKey,
-  PutObjectCommand,
   S3Client,
 } from '@aws-sdk/client-s3';
```

apps/server/src/integrations/storage/drivers/local.driver.ts (1)
23-35: Correct implementation for Buffer/Readable handling, but consider code reuse. The dual-type handling is implemented correctly. However, the `Readable` branch duplicates the logic in `uploadStream` (lines 37-45). Consider extracting the streaming logic to a private method or delegating to `uploadStream` for the `Readable` case.

🔎 Optional: Reduce duplication by delegating to uploadStream

```diff
 async upload(filePath: string, file: Buffer | Readable): Promise<void> {
   try {
     const fullPath = this._fullPath(filePath);
     if (file instanceof Buffer) {
       await fs.outputFile(fullPath, file);
     } else {
-      await fs.mkdir(dirname(fullPath), { recursive: true });
-      await pipeline(file, createWriteStream(fullPath));
+      await this.uploadStream(filePath, file);
     }
   } catch (err) {
     throw new Error(`Failed to upload file: ${(err as Error).message}`);
   }
 }
```

apps/client/public/sw.js (1)
1-9: The VERSION constant is unused for cache management. The `VERSION` constant suggests cache-versioning intent, but there's no logic in the `activate` handler to clean up old caches when the version changes. If you intend to use versioning, consider adding cache cleanup:

🔎 Optional: Add cache cleanup on version update

```diff
+const CACHE_NAME = `share-target-${VERSION}`;
+
 self.addEventListener('activate', (event) => {
-  event.waitUntil(clients.claim());
+  event.waitUntil(
+    Promise.all([
+      clients.claim(),
+      caches.keys().then((keys) =>
+        Promise.all(
+          keys
+            .filter((key) => key.startsWith('share-target') && key !== CACHE_NAME)
+            .map((key) => caches.delete(key))
+        )
+      ),
+    ])
+  );
 });
```

Then update line 28 to use `CACHE_NAME` instead of the hardcoded `'share-target'`.

apps/client/src/pages/share-target/share-target.tsx (1)
89-97: Query will always run even when `currentNavParentId` is null. The `useGetSidebarPagesQuery` is called with `pageId: currentNavParentId || undefined`, but since `spaceId` is always provided (when `selectedSpace` is set), the query's `enabled` condition (`!!data?.pageId || !!data?.spaceId`) will be true. This means that when navigating at root level, both the `rootPagesData` and `subPagesData` queries run simultaneously. Consider conditionally enabling the subpages query only when a parent is selected:

🔎 Proposed optimization

```diff
 // Query for SUB pages (used when currentNavParentId is set)
-const { data: subPagesData, isLoading: isLoadingSub } = useGetSidebarPagesQuery({
-  spaceId: selectedSpace || "",
-  pageId: currentNavParentId || undefined,
-});
+const { data: subPagesData, isLoading: isLoadingSub } = useGetSidebarPagesQuery(
+  currentNavParentId ? {
+    spaceId: selectedSpace || "",
+    pageId: currentNavParentId,
+  } : null
+);
```

apps/server/src/integrations/import/services/file-import-task.service.ts (1)
506-511: Event emitted before the transaction commits. The `PAGE_CREATED` event is emitted inside the transaction callback. If the transaction fails to commit after the callback returns, downstream handlers (e.g., search indexing) will process pages that don't exist. Consider moving the event emission outside the transaction, after `executeTx` returns successfully.

🔎 Proposed refactor

```diff
-  await executeTx(this.db, async (trx) => {
+  const createdPageIds = await executeTx(this.db, async (trx) => {
     // Insert prepared pages level by level within the transaction
     for (const level of sortedLevels) {
       // ... batch insert logic ...
     }

     // ... backlink insertion ...

-    if (validPageIds.size > 0) {
-      this.eventEmitter.emit(EventName.PAGE_CREATED, {
-        pageIds: Array.from(validPageIds),
-        workspaceId: fileTask.workspaceId,
-      });
-    }

     this.logger.log(
       `Successfully imported ${totalPagesProcessed} pages with ${filteredBacklinks.length} backlinks`,
     );
+
+    return Array.from(validPageIds);
   });
+
+  if (createdPageIds.length > 0) {
+    this.eventEmitter.emit(EventName.PAGE_CREATED, {
+      pageIds: createdPageIds,
+      workspaceId: fileTask.workspaceId,
+    });
+  }
```
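The emit-after-commit shape is independent of Kysely and can be sketched generically. Everything below (`executeTx`, `Emitter`, the placeholder ids) is a stand-in for illustration, not the app's actual helpers:

```typescript
// Sketch only: the transaction callback returns the ids it created, and the
// event fires strictly after the transaction helper has resolved (committed).
type Emitter = { emit: (event: string, payload: unknown) => void };

async function importAndNotify(
  executeTx: <T>(fn: () => Promise<T>) => Promise<T>,
  emitter: Emitter,
): Promise<string[]> {
  const createdPageIds = await executeTx(async () => {
    // ... inserts would happen here, inside the transaction ...
    return ["page-1", "page-2"]; // placeholder ids
  });
  // Only reached if the transaction committed
  if (createdPageIds.length > 0) {
    emitter.emit("PAGE_CREATED", { pageIds: createdPageIds });
  }
  return createdPageIds;
}
```

If `executeTx` rejects (rollback), the emit never runs, which is exactly the property the review is asking for.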
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (34)
- .env.example
- Dockerfile
- apps/client/public/manifest.json
- apps/client/public/sw.js
- apps/client/src/App.tsx
- apps/client/src/features/editor/components/attachment/upload-attachment-action.tsx
- apps/client/src/features/editor/components/image/upload-image-action.tsx
- apps/client/src/features/editor/components/video/upload-video-action.tsx
- apps/client/src/features/file-task/types/file-task.types.ts
- apps/client/src/features/page/components/page-import-modal.tsx
- apps/client/src/features/page/services/page-service.ts
- apps/client/src/features/space/services/space-service.ts
- apps/client/src/main.tsx
- apps/client/src/pages/share-target/share-target.tsx
- apps/server/src/core/attachment/attachment.utils.ts
- apps/server/src/core/attachment/services/attachment.service.ts
- apps/server/src/core/search/search.service.ts
- apps/server/src/database/migrations/20251228T184500-add_progress_to_file_tasks.ts
- apps/server/src/database/types/db.d.ts
- apps/server/src/integrations/export/utils.ts
- apps/server/src/integrations/import/import.controller.ts
- apps/server/src/integrations/import/processors/file-task.processor.ts
- apps/server/src/integrations/import/services/file-import-task.service.ts
- apps/server/src/integrations/import/services/import-attachment.service.ts
- apps/server/src/integrations/import/services/import.service.ts
- apps/server/src/integrations/import/utils/import-formatter.ts
- apps/server/src/integrations/import/utils/import.utils.ts
- apps/server/src/integrations/storage/drivers/local.driver.ts
- apps/server/src/integrations/storage/drivers/s3.driver.ts
- apps/server/src/integrations/storage/interfaces/storage-driver.interface.ts
- apps/server/src/integrations/storage/storage.service.ts
- docker-compose.yml
- package.json
- packages/editor-ext/package.json
🧰 Additional context used
🧬 Code graph analysis (10)
apps/client/src/features/editor/components/attachment/upload-attachment-action.tsx (3)
- packages/editor-ext/src/lib/attachment/attachment-upload.ts (1)
  - `handleAttachmentUpload` (70-126)
- apps/client/src/features/page/services/page-service.ts (1)
  - `uploadFile` (169-190)
- apps/server/src/core/attachment/attachment.controller.ts (1)
  - `uploadFile` (76-149)

apps/client/src/features/editor/components/video/upload-video-action.tsx (2)
- packages/editor-ext/src/lib/video/video-upload.ts (1)
  - `handleVideoUpload` (66-132)
- apps/client/src/features/page/services/page-service.ts (1)
  - `uploadFile` (169-190)

apps/client/src/pages/share-target/share-target.tsx (4)
- apps/client/src/features/search/types/search.types.ts (1)
  - `IPageSearch` (6-18)
- apps/client/src/features/space/queries/space-query.ts (1)
  - `useGetSpacesQuery` (34-43)
- apps/client/src/features/page/queries/page-query.ts (2)
  - `useGetRootSidebarPagesQuery` (265-277)
  - `useGetSidebarPagesQuery` (250-263)
- apps/server/src/core/search/search.service.ts (1)
  - `searchPage` (23-134)

apps/server/src/integrations/import/services/import-attachment.service.ts (1)
- apps/server/src/database/types/kysely.types.ts (1)
  - `KyselyDB` (4-4)

apps/server/src/core/attachment/services/attachment.service.ts (1)
- apps/server/src/core/attachment/attachment.utils.ts (2)
  - `PreparedFile` (7-14)
  - `prepareFile` (16-50)

apps/server/src/integrations/import/services/import.service.ts (2)
- apps/server/src/common/helpers/utils.ts (1)
  - `sanitizeFileName` (74-79)
- apps/server/src/integrations/import/utils/file.utils.ts (1)
  - `getFileTaskFolderPath` (22-32)

apps/client/src/features/page/components/page-import-modal.tsx (2)
- apps/client/src/features/page/services/page-service.ts (2)
  - `importZip` (148-167)
  - `importPage` (133-146)
- apps/server/src/integrations/import/services/import.service.ts (2)
  - `importZip` (220-268)
  - `importPage` (41-127)

apps/client/src/features/editor/components/image/upload-image-action.tsx (2)
- packages/editor-ext/src/lib/image/image-upload.ts (1)
  - `handleImageUpload` (62-127)
- apps/client/src/features/page/services/page-service.ts (1)
  - `uploadFile` (169-190)

apps/client/src/App.tsx (1)
- apps/client/src/pages/share-target/share-target.tsx (1)
  - `ShareTarget` (33-496)

apps/server/src/integrations/import/services/file-import-task.service.ts (3)
- apps/server/src/database/types/entity.types.ts (2)
  - `FileTask` (116-116)
  - `InsertablePage` (67-67)
- apps/server/src/integrations/import/utils/file.utils.ts (1)
  - `extractZip` (37-42)
- apps/server/src/database/utils.ts (1)
  - `executeTx` (8-18)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 3-3: [UnorderedKey] The APP_PORT key should go before the APP_URL key
(UnorderedKey)
[warning] 10-10: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 11-11: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 12-12: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 13-13: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 13-13: [UnorderedKey] The POSTGRES_PASSWORD key should go before the POSTGRES_USER key
(UnorderedKey)
[warning] 14-14: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 14-14: [UnorderedKey] The POSTGRES_PASSWORD_URLENCODE key should go before the POSTGRES_USER key
(UnorderedKey)
🔇 Additional comments (43)
apps/server/src/core/search/search.service.ts (1)
65-65: The code at line 65 already uses logical OR (`||`), not bitwise OR. No bitwise OR patterns with default numeric values were found in the codebase. The claim that this fixes a bug from bitwise OR is not supported by the actual file content. Likely an incorrect or invalid review comment.
package.json (1)
88-89: LGTM! Adding explicit TypeScript to devDependencies ensures a consistent TypeScript version across the project, which complements the broader TypeScript-related improvements in this PR.
packages/editor-ext/package.json (1)
1-12: No functional changes. This file only has a trailing-newline formatting change. No concerns.
docker-compose.yml (3)
11-11: Clarify the `X_APP_URL` naming or revert to `APP_URL`. The `X_APP_URL` prefix is unconventional. If this is intentional (e.g., to avoid conflicts), please document the reasoning. Otherwise, consider using the standard `APP_URL` to maintain consistency with `.env.example`.
41-46: Good addition of a database healthcheck. The healthcheck with an appropriate `start_period` ensures the application waits for PostgreSQL to be ready. The 90-second start period is generous enough for initial database setup on slower systems.
27-28: Good configuration for large file support. The `FILE_IMPORT_SIZE_LIMIT: '5gb'` aligns with the PR objective of supporting large file imports (>2GB). Ensure the application and any reverse proxies (nginx, etc.) are also configured to handle these sizes.

Dockerfile (4)
10-17: Excellent layer caching optimization. Copying package manifests and lockfiles before the full source enables Docker to cache the `pnpm install` layer. This significantly speeds up rebuilds when only source code changes.
28-28: Verify directory ownership propagates correctly. The `chown -R node:node /app` runs before copying files. Subsequent COPY commands with `--chown=node:node` handle the copied files, but ensure the `/app/data/storage` directory remains writable at runtime.
46-48: Good security practice: non-root user. Running as the `node` user instead of root follows container security best practices. The production install with the `--prod` flag correctly excludes devDependencies.
3-6: Good PNPM configuration with corepack. The setup correctly pins pnpm version 10.4.0, matching the `packageManager` field in `package.json`. Using corepack with Node.js 22 ensures consistent tooling across environments, with verified compatibility for pnpm v10.4.0.

apps/server/src/database/types/db.d.ts (2)
172-172: LGTM! The progress field aligns with the migration. The `progress: Generated<number | null>` field correctly represents the nullable int4 column added in the migration with a default value of 0. The type properly reflects that the column can contain null or number values.
1-11: The file is correctly auto-generated by kysely-codegen and does not contain manual edits. The `Generated<T>` type definition shown in lines 8-11 is part of the standard kysely-codegen output, not a manual modification. This utility type is extensively used throughout db.d.ts for typing database columns (createdAt, id, updatedAt, etc.) and is the expected generated content. The migration:codegen script is properly configured, the file shows no uncommitted changes, and there is no evidence of manual edits conflicting with generated code. Likely an incorrect or invalid review comment.
apps/server/src/database/migrations/20251228T184500-add_progress_to_file_tasks.ts (1)
10-12: LGTM! The down migration properly reverses the schema change by dropping the progress column, providing a clean rollback path.
apps/server/src/integrations/import/processors/file-task.processor.ts (1)
24-24: LGTM! Typo fix for the method name. The correction from `processZIpImport` to `processZipImport` properly follows camelCase convention.

apps/client/src/features/space/services/space-service.ts (1)
72-79: LGTM! Defensive URI decoding with fallback. The try/catch guard around `decodeURIComponent` prevents runtime errors when the Content-Disposition filename contains malformed percent-encoding. The fallback to the raw filename ensures the export continues successfully.

apps/server/src/integrations/import/utils/import.utils.ts (1)
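The decode-with-fallback pattern these comments approve of is small enough to show standalone. The helper name is hypothetical, not from the codebase:

```typescript
// Hypothetical helper: decode a percent-encoded filename, but fall back to
// the raw value when the encoding is malformed (decodeURIComponent throws
// a URIError on inputs like a lone "%").
function safeDecodeFileName(raw: string): string {
  try {
    return decodeURIComponent(raw);
  } catch {
    return raw;
  }
}
```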
34-40: LGTM! Defensive URI decoding with logging. The try/catch guard around `decodeURIComponent` prevents failures from malformed attachment paths. The warning provides visibility into problematic URIs while the fallback ensures processing continues.

apps/server/src/integrations/import/utils/import-formatter.ts (1)
284-292: LGTM! Consistent URI decoding pattern with contextual logging. The try/catch guard follows the same defensive pattern used across the codebase. Including `currentFilePath` in the warning message aids debugging.

apps/client/public/manifest.json (2)
22-22: LGTM! Icon size correction. Removing the incorrect "180x180" size correctly reflects the actual dimensions of the `app-icon-192x192.png` file.
29-39: LGTM! Web Share Target API configuration. The `share_target` configuration properly implements the Web Share Target API, enabling the PWA to receive shared content from other applications. The parameter mapping for title, text, and url follows the standard specification.

apps/server/src/core/attachment/attachment.utils.ts (1)
16-46: LGTM! Streaming upload support with the skipBuffer option. The addition of the `skipBuffer` option enables memory-efficient uploads for large files by exposing the `multiPartFile` stream instead of buffering the entire file. When `skipBuffer` is true, the initial `fileSize` of 0 is expected to be updated by consumers after the upload completes.

apps/server/src/integrations/export/utils.ts (1)
103-116: LGTM! Defensive URI decoding with an optional context parameter. The optional `currentFilePath` parameter enables contextual logging while maintaining backward compatibility. The try/catch guard follows the consistent pattern used across the codebase.

apps/server/src/core/attachment/services/attachment.service.ts (1)
239-241: LGTM! Streaming-aware upload signature. The updated signature accepting `Buffer | Readable` enables memory-efficient streaming uploads for large files while maintaining backward compatibility with buffer-based uploads (e.g., for image compression).

apps/client/src/features/editor/components/image/upload-image-action.tsx (1)
10-64: Well-implemented upload progress UI. The implementation correctly:
- Uses a unique notification ID per upload to avoid conflicts
- Falls back to `file.size` when `progressEvent.total` is undefined
- Updates the notification in-place with a progress percentage and animated bar
- Properly handles errors by updating the notification and re-throwing

This pattern is consistent with the attachment and video upload implementations.
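The fallback logic called out above reduces to a small pure function. This is an illustration of the pattern, not the component's literal code:

```typescript
// Compute an upload percentage: prefer the progress event's total, fall back
// to the known file size, and clamp the result to the 0-100 range.
function uploadPercent(
  loaded: number,
  eventTotal: number | undefined,
  fileSize: number,
): number {
  const total = eventTotal ?? fileSize;
  if (total <= 0) return 0; // avoid division by zero for unknown sizes
  return Math.min(100, Math.round((loaded / total) * 100));
}
```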
apps/client/src/features/page/services/page-service.ts (1)
121-128: Good defensive handling for malformed URIs. The try/catch around `decodeURIComponent` properly handles cases where the filename contains invalid URI encoding, falling back to the raw filename. This prevents crashes on malformed `Content-Disposition` headers.

apps/client/src/features/editor/components/attachment/upload-attachment-action.tsx (1)
10-64: Consistent implementation with the image upload pattern. The attachment upload progress implementation correctly mirrors the image upload flow with proper notification lifecycle management and error handling.
apps/client/src/features/page/components/page-import-modal.tsx (1)
113-133: Progress-aware import flow looks good. The implementation correctly threads progress callbacks through both ZIP and file imports, providing real-time upload feedback to users. The UI updates are well-structured with progress bars and percentage displays.
Also applies to: 284-309
apps/server/src/integrations/import/services/import-attachment.service.ts (1)
38-38: Increased concurrency from 3 to 10 - verify resource impact. Tripling the concurrent upload limit improves throughput for large imports but increases memory pressure and connection pool usage. This is likely fine for typical workloads, but consider:
- S3/storage backend connection limits
- Memory usage with 10 simultaneous file streams
- Server load during large imports
The change aligns with the PR goal of supporting large file imports.
apps/client/src/App.tsx (1)
63-64: New `/share-target` route for the Web Share Target API. The route is correctly placed outside the `Layout` component, providing a clean full-page UI for handling shared content. The `ShareTarget` component handles its own authentication state via the API queries.

apps/server/src/integrations/storage/drivers/s3.driver.ts (1)
26-44: Good migration to `@aws-sdk/lib-storage` for large file support. Using the `Upload` class from `@aws-sdk/lib-storage` enables automatic multipart uploads for files larger than the part-size threshold (default 5MB). This correctly supports the PR goal of handling large files (>2GB).

apps/server/src/integrations/import/import.controller.ts (3)
77-81: LGTM! Clean input sanitization for the optional parentPageId. The type check and trimming logic correctly handles edge cases like empty strings and whitespace-only values, converting them to `undefined` rather than passing invalid values downstream.

153-154: Consistent authorization change for zip imports. Good that both import endpoints now use the same `Create` permission check, maintaining consistency across the import API.

88-98: The authorization change from Edit to Create is correct. The switch to `SpaceCaslAction.Create` is appropriate for page imports since importing creates new pages rather than editing existing ones. This aligns with the normal page-creation authorization pattern used in `page.controller.ts`. The `parentPageId` parameter is correctly extracted from request fields, passed to the import service, and properly used in the page creation logic.

apps/server/src/integrations/storage/storage.service.ts (1)
13-16: Good type-safety improvement. Narrowing the parameter type from `Buffer | any` to `Buffer | Readable` provides better compile-time type checking and aligns with the updated `StorageDriver` interface.

apps/client/public/sw.js (1)
11-40: Share target fetch handler implementation looks correct. The handler properly intercepts POST requests to `/share-target`, extracts form data with graceful error handling, stores it in the Cache API, and redirects with a 303 status. The client-side code in `share-target.tsx` correctly deletes the cached content after reading it (line 141).

apps/server/src/integrations/storage/interfaces/storage-driver.interface.ts (1)
1-6: Interface correctly updated to support streaming uploads. The `upload` method now accepts both `Buffer` and `Readable`, while `uploadStream` remains available for cases requiring additional options like `recreateClient`. This provides flexibility for different upload scenarios.

apps/client/src/pages/share-target/share-target.tsx (3)
223-268: Import handler implementation looks solid. The handler correctly:
- Guards against missing space or data
- Sanitizes the filename (removing invalid characters, limiting length)
- Builds FormData with optional parentPageId
- Navigates to the new page on success or shows error notification
174-179: Loose equality checks are intentional here. Using `url == null` correctly catches both `null` and `undefined`, which is appropriate for URL parameter handling where either value indicates absence. This is a valid pattern when you want to treat both falsy values the same way.
312-316: The data structure access is correct. `useGetRootSidebarPagesQuery` and `useGetSidebarPagesQuery` return `InfiniteData<IPagination<IPage>>`, where the `pages` array contains paginated response objects, each with an `items` property containing the actual page items. The code correctly accesses `dataStats.pages.flatMap(page => page.items)` to extract all items from all pages, which is the standard pattern used elsewhere in the codebase (e.g., in `fetchAllAncestorChildren`).

apps/server/src/integrations/import/services/import.service.ts (2)
241-243: Good refactoring to stream large files instead of buffering. This change correctly avoids the `RangeError` for files larger than the maximum Buffer size (~2GB) by streaming directly to storage. Accessing `bytesRead` after the stream completes is the correct approach to get the final file size.
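The buffer-versus-stream handling this PR introduces can be sketched as a small helper. This is illustrative only (the helper name and sink are assumptions); `Readable.from` wraps the buffer so both cases share one backpressure-aware `pipeline` path:

```typescript
import { Readable, Writable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Illustrative sketch (not the PR's actual driver code): funnel both buffers
// and streams through pipeline() so backpressure is handled uniformly.
async function writeToSink(file: Buffer | Readable, sink: Writable): Promise<void> {
  // Buffer.isBuffer is safer than `instanceof Buffer` across module realms
  const source = Buffer.isBuffer(file) ? Readable.from([file]) : file;
  await pipeline(source, sink);
}
```

Streaming this way keeps memory usage bounded regardless of file size, which is the property that makes >2GB imports feasible.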
194-218: getNewPagePosition correctly handles the parent hierarchy. The query now properly filters by `parentPageId` when provided, or by `parentPageId IS NULL` for root-level pages. This ensures new pages are positioned correctly within their parent's children list rather than globally.

apps/server/src/integrations/import/services/file-import-task.service.ts (3)
54-60: LGTM! Clean helper method for fetching file tasks.

523-542: LGTM! The status update logic correctly sets progress to 100% only on success while preserving the existing progress value for other statuses.

544-554: LGTM! Simple and consistent progress update implementation.
```ts
  type: "import" | "export";
  source: string;
  status: string;
  progress?: number;
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's verify the server-side type definition
cat apps/server/src/database/types/db.d.ts | head -100
```

Repository: docmost/docmost

Length of output: 2566

🏁 Script executed:

```shell
# Check the client-side type definition
cat apps/client/src/features/file-task/types/file-task.types.ts
```

Repository: docmost/docmost

Length of output: 432

🏁 Script executed:

```shell
# Search for progress field usage in client code
rg -n -A3 -B3 "\.progress\b" --type=ts --type=tsx -g 'apps/client/**'
```

Repository: docmost/docmost

Length of output: 86

🏁 Script executed:

```shell
# Search for progress field assignments or transformations
rg -n -A3 -B3 "progress:" --type=ts --type=tsx -g 'apps/client/**'
```

Repository: docmost/docmost

Length of output: 86

🏁 Script executed:

```shell
# Check if there are any null checks or optional chaining for progress
rg -n "progress\?" --type=ts --type=tsx -g 'apps/client/**'
```

Repository: docmost/docmost

Length of output: 86

🏁 Script executed:

```shell
# Search for FileTasks interface in db.d.ts
rg -n "interface FileTasks" apps/server/src/database/types/db.d.ts -A 20
```

Repository: docmost/docmost

Length of output: 671

🏁 Script executed:

```shell
# Search for progress field usage in client code (correct file type)
rg -n "\.progress" -t ts -g 'apps/client/**' -A 3 -B 3
```

Repository: docmost/docmost

Length of output: 782

🏁 Script executed:

```shell
# Search for progress field in assignments/declarations
rg -n "progress" -t ts -g 'apps/client/**' -A 2 -B 2
```

Repository: docmost/docmost

Length of output: 15046

🏁 Script executed:

```shell
# Check if there's any explicit null handling for progress
rg -n "progress.*null|null.*progress" -t ts -g 'apps/client/**'
```

Repository: docmost/docmost

Length of output: 41
Update type to allow null values from server.
The server-side `FileTasks` type defines `progress: Generated<number | null>` (db.d.ts:172), which can return null values. The client-side `IFileTask` type should reflect this as `progress?: number | null` instead of `progress?: number`. While the code at page-import-modal.tsx:204 already handles this case with `const progress = fileTask.progress || 0`, the type definition should be accurate to prevent type-checking issues in other code paths that consume this interface.
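A minimal sketch of the suggested typing, with the fallback-to-zero handling the review notes in page-import-modal.tsx (the interface shape below is trimmed to the relevant field and is an assumption, not the full `IFileTask` definition):

```typescript
interface IFileTask {
  id: string;
  status: string;
  progress?: number | null; // server may return null before any progress is recorded
}

// Coalesce null/undefined to 0 for display purposes.
function displayProgress(task: IFileTask): number {
  return task.progress ?? 0;
}

console.log(displayProgress({ id: "t1", status: "processing", progress: null })); // 0
console.log(displayProgress({ id: "t2", status: "processing", progress: 42 }));   // 42
```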
🤖 Prompt for AI Agents
In apps/client/src/features/file-task/types/file-task.types.ts around line 6,
the IFileTask progress field is currently declared as "progress?: number" but
the server can return null (Generated<number | null>); update the type to
"progress?: number | null" to reflect nullable responses and run TypeScript
checks to fix any call sites that assume non-null (e.g., coerce or default to 0
where needed).
```typescript
    onUploadProgress,
  });

  return req as unknown as IAttachment;
```
Potential bug: should return `req.data` instead of `req`.
The `api.post` call returns an `AxiosResponse`, so accessing `.data` is needed to get the actual response body. Other functions in this file (e.g., `importPage` at line 145) correctly return `req.data`.
The current cast `req as unknown as IAttachment` masks this issue but would return the entire `AxiosResponse` object instead of the attachment data.
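To illustrate why the double-cast hides the bug, here is a generic sketch with a mocked response wrapper (not the project's `api` client; the `AxiosLikeResponse` and sample values are made up):

```typescript
// Minimal stand-in for axios's response wrapper.
interface AxiosLikeResponse<T> {
  data: T;
  status: number;
}

interface IAttachment {
  id: string;
  fileName: string;
}

// Simulates what api.post resolves to: the wrapper, not the body.
const req: AxiosLikeResponse<IAttachment> = {
  data: { id: "att-1", fileName: "doc.zip" },
  status: 200,
};

// "req as unknown as IAttachment" would type-check but hand callers the
// wrapper object; unwrapping .data returns the actual payload.
const attachment: IAttachment = req.data;
console.log(attachment.fileName); // doc.zip
```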
🔎 Proposed fix

```diff
- return req as unknown as IAttachment;
+ return req.data;
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
return req.data;
```
🤖 Prompt for AI Agents
In apps/client/src/features/page/services/page-service.ts around line 189, the
code currently returns the full AxiosResponse via "return req as unknown as
IAttachment"; change it to return the response body instead by returning
req.data typed as IAttachment (or use api.post<IAttachment>(...).then(r =>
r.data)), removing the unnecessary double-cast so the function returns the
actual attachment payload rather than the AxiosResponse wrapper.
```typescript
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js').then(
      (registration) => {
        console.log('ServiceWorker registration successful with scope: ', registration.scope);

        registration.addEventListener('updatefound', () => {
          const newWorker = registration.installing;
          if (newWorker) {
            newWorker.addEventListener('statechange', () => {
              if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
                // New update available and installed
                // Since we use skipWaiting() in sw.js, it will activate immediately.
                // We can reload the page to use the new version.
                console.log('New version initialized. Reloading...');
                window.location.reload();
              }
            });
          }
        });
      },
      (err) => {
        console.log('ServiceWorker registration failed: ', err);
      }
    );

    // Ensure controller change reloads the page (for instant claim)
    navigator.serviceWorker.addEventListener('controllerchange', () => {
      window.location.reload();
    });
  });
}
```
Potential double page reload when service worker updates.
Both the `updatefound`/`statechange` handler (line 81) and the `controllerchange` handler (line 94) call `window.location.reload()`. Since your service worker uses `skipWaiting()`, both events will fire in quick succession when an update is installed, potentially causing a double reload or race condition.
Consider using a flag to ensure only one reload occurs:
🔎 Proposed fix to prevent double reload

```diff
 if ('serviceWorker' in navigator) {
+  let refreshing = false;
+
   window.addEventListener('load', () => {
     navigator.serviceWorker.register('/sw.js').then(
       (registration) => {
         console.log('ServiceWorker registration successful with scope: ', registration.scope);
         registration.addEventListener('updatefound', () => {
           const newWorker = registration.installing;
           if (newWorker) {
             newWorker.addEventListener('statechange', () => {
               if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
-                // New update available and installed
-                // Since we use skipWaiting() in sw.js, it will activate immediately.
-                // We can reload the page to use the new version.
-                console.log('New version initialized. Reloading...');
-                window.location.reload();
+                console.log('New version installed, waiting for activation...');
               }
             });
           }
         });
       },
       (err) => {
         console.log('ServiceWorker registration failed: ', err);
       }
     );
     // Ensure controller change reloads the page (for instant claim)
     navigator.serviceWorker.addEventListener('controllerchange', () => {
+      if (refreshing) return;
+      refreshing = true;
       window.location.reload();
     });
   });
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if ('serviceWorker' in navigator) {
  let refreshing = false;
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js').then(
      (registration) => {
        console.log('ServiceWorker registration successful with scope: ', registration.scope);
        registration.addEventListener('updatefound', () => {
          const newWorker = registration.installing;
          if (newWorker) {
            newWorker.addEventListener('statechange', () => {
              if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
                console.log('New version installed, waiting for activation...');
              }
            });
          }
        });
      },
      (err) => {
        console.log('ServiceWorker registration failed: ', err);
      }
    );
    // Ensure controller change reloads the page (for instant claim)
    navigator.serviceWorker.addEventListener('controllerchange', () => {
      if (refreshing) return;
      refreshing = true;
      window.location.reload();
    });
  });
}
```
🤖 Prompt for AI Agents
In apps/client/src/main.tsx around lines 66–97 there is a potential double
reload because both the registration.installing.statechange handler and the
navigator.serviceWorker.controllerchange handler call window.location.reload();
add a single boolean flag (e.g., let isReloading = false) in the outer scope of
this block and check it before triggering reload in both places, set it to true
immediately before calling window.location.reload(), and leave the rest of the
logic unchanged so only the first handler to run performs the reload.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
docker-compose.yml (1)
3-3: Previous concern resolved. The image now correctly references the official `docmost/docmost:latest` instead of the local development image that was flagged in the previous review.
🧹 Nitpick comments (1)
docker-compose.yml (1)
26-27: Consider making file size limits configurable. The file size limits are hardcoded. Consider using environment variables with defaults for better flexibility:

```diff
-      FILE_UPLOAD_SIZE_LIMIT: '100mb'
-      FILE_IMPORT_SIZE_LIMIT: '5gb'
+      FILE_UPLOAD_SIZE_LIMIT: '${FILE_UPLOAD_SIZE_LIMIT:-100mb}'
+      FILE_IMPORT_SIZE_LIMIT: '${FILE_IMPORT_SIZE_LIMIT:-5gb}'
```

This allows users to adjust limits without modifying docker-compose.yml.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
docker-compose.yml
🔇 Additional comments (6)
docker-compose.yml (6)
4-8: Improved service startup ordering. Using health-based conditions ensures the docmost service waits for the database to be fully ready, preventing connection errors during startup.

10-10: Verify the X_APP_URL naming convention. The `X_` prefix is unconventional for environment variables. Ensure this rename from `APP_URL` to `X_APP_URL` is intentional and documented, as it may cause confusion for users expecting the standard `APP_URL` variable name.

16-24: Verify graceful handling of missing mail configuration. All mail/SMTP variables have no defaults and will be empty if not set. Verify that the application gracefully handles missing mail configuration (e.g., disables mail features or logs warnings) rather than failing at runtime when mail functionality is attempted.

29-29: Good practice for port configuration. Making the host port configurable via `APP_PORT` with a sensible default improves flexibility for different deployment scenarios.

37-39: Consistent Postgres defaults. The Postgres environment defaults correctly match the values used in the `DATABASE_URL` construction (line 13), ensuring the service can connect to the database out-of-the-box.

40-45: Properly configured healthcheck. The Postgres healthcheck is correctly implemented with `pg_isready`, reasonable timing parameters, and consistent use of environment variable defaults. The extended `start_period` is appropriately documented.
```yaml
      X_APP_URL: '${APP_URL:-http://localhost:3000}'
      APP_SECRET: '${APP_SECRET}'

      DATABASE_URL: 'postgresql://${POSTGRES_USER:-docmost}:${POSTGRES_PASSWORD_URLENCODE:-docmost}@db:5432/${POSTGRES_DB:-docmost}?schema=public'
```
🧩 Analysis chain

Repository: docmost/docmost

🏁 Script executed:

```shell
git ls-files | head -20
```

Length of output: 551

🏁 Script executed:

```shell
# Find docker-compose files
fd "docker-compose" -t f
```

Length of output: 77

🏁 Script executed:

```shell
# Read the docker-compose.yml file to verify the specific lines
cat -n docker-compose.yml | head -50
```

Length of output: 1892

🏁 Script executed:

```shell
# Look for documentation files about environment variables
fd -e md -e txt | grep -iE "(readme|doc|env|setup|config)" | head -20
```

Length of output: 142

🏁 Script executed:

```shell
# Check .env.example to see documented variables
cat -n .env.example
```

Length of output: 1431

🏁 Script executed:

```shell
# Check README.md for documentation about environment variables
cat -n README.md | head -100
```

Length of output: 2423

🏁 Script executed:

```shell
# Search for any mentions of POSTGRES_PASSWORD in documentation
rg -i "postgres_password" --type md --type txt
```

Length of output: 41

🏁 Script executed:

```shell
# Check if there are any comments or explanations about password encoding in the codebase
rg -i "url.?encod|password.*encod" --type md --type yml --type yaml -A 2 -B 2
```

Length of output: 86

🏁 Script executed:

```shell
# Look for any helper scripts or utilities related to password encoding
fd -i "password\|encode" --type f | head -20
```

Length of output: 41

🏁 Script executed:

```shell
# Check the apps/server/README.md for deployment/setup documentation
cat -n apps/server/README.md | grep -i -A 5 -B 5 "password\|environment\|setup" | head -50
```

Length of output: 41

🏁 Script executed:

```shell
# Search for any documentation about docker-compose setup or deployment
rg -i "docker.?compose\|deployment\|self.?host" --type md -A 3 -B 1
```

Length of output: 41

🏁 Script executed:

```shell
# Check if there's a docs folder or deployment guide
fd -type d -name "docs\|deployment\|self-host" | head -10
```

Length of output: 229

🏁 Script executed:

```shell
# Look at the server README for any environment variable documentation
head -100 apps/server/README.md
```

Length of output: 3931

🏁 Script executed:

```shell
# Search for any comments or documentation about POSTGRES_PASSWORD_URLENCODE
rg -i "urlencode|url.?encod" -B 2 -A 2
```

Length of output: 374

🏁 Script executed:

```shell
# Check if there are any setup guides or deployment documentation online references
rg -i "self.?host|deploy|setup" README.md | head -10
```

Length of output: 145

🏁 Script executed:

```shell
rg "POSTGRES_PASSWORD" --context 3
```

Length of output: 938
Add clarifying comments about POSTGRES_PASSWORD_URLENCODE in .env.example.
The docker-compose.yml uses `POSTGRES_PASSWORD_URLENCODE` in the `DATABASE_URL` while the postgres service uses `POSTGRES_PASSWORD`. Both variables are defined in .env.example (lines 13-14), but lack comments explaining their relationship and when URL encoding is necessary.
Add a comment in .env.example clarifying:
- `POSTGRES_PASSWORD` is for the postgres container
- `POSTGRES_PASSWORD_URLENCODE` must be set to the URL-encoded version of the password (required if password contains special characters like `@`, `:`, `/`)
- Link to URL encoding guidance if passwords contain special characters
🤖 Prompt for AI Agents
In docker-compose.yml around line 13, add clarifying comments in the
.env.example (lines ~13-14) explaining that POSTGRES_PASSWORD is the raw
password used by the postgres container, while POSTGRES_PASSWORD_URLENCODE must
contain the URL-encoded form of that password (required when the password
includes special characters like @, :, /), and include a short link to
URL-encoding guidance (e.g. a reference to percent-encoding docs) so users know
how to encode the password.
…ted by new general utility functions.
There was a problem hiding this comment.
Actionable comments posted: 0
🧹 Nitpick comments (3)
apps/server/src/common/helpers/utils.ts (1)
102-102: Move import to the top of the file. The `Readable` and `Transform` imports should be placed with the other imports at the top of the file (after line 4) rather than inline before the function definition.

🔎 Proposed fix

Move this import to line 5 (after other imports):

```diff
 import * as path from 'path';
 import * as bcrypt from 'bcrypt';
 import { sanitize } from 'sanitize-filename-ts';
 import { FastifyRequest } from 'fastify';
+import { Readable, Transform } from 'stream';
```

And remove lines 101-102.
apps/server/src/integrations/import/services/import.service.ts (2)
229-229: Remove commented-out code. These commented lines are remnants of the previous buffer-based implementation. They should be removed to keep the code clean.

🔎 Proposed fix

```diff
   const file = await filePromise;
-  // const fileBuffer = await file.toBuffer();
   const fileExtension = path.extname(file.filename).toLowerCase();
   const fileName = sanitizeFileName(
     path.basename(file.filename, fileExtension),
   );
-  // const fileSize = fileBuffer.length; // Removed to avoid RangeError
```

Also applies to: 234-234
241-245: Remove duplicate comment. Line 244 duplicates the comment from line 241.

🔎 Proposed fix

```diff
   // upload file
   const { stream, getBytesRead } = createByteCountingStream(file.file);
-
-  // upload file
   await this.storageService.upload(filePath, stream);
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- apps/server/src/common/helpers/utils.ts
- apps/server/src/core/attachment/services/attachment.service.ts
- apps/server/src/integrations/import/services/import.service.ts
🧰 Additional context used
🧬 Code graph analysis (2)
apps/server/src/integrations/import/services/import.service.ts (2)
- apps/server/src/common/helpers/utils.ts (2)
  - `sanitizeFileName` (74-79)
  - `createByteCountingStream` (104-117)
- apps/server/src/integrations/import/utils/file.utils.ts (1)
  - `getFileTaskFolderPath` (22-32)

apps/server/src/core/attachment/services/attachment.service.ts (2)
- apps/server/src/core/attachment/attachment.utils.ts (2)
  - `PreparedFile` (7-14)
  - `prepareFile` (16-50)
- apps/server/src/common/helpers/utils.ts (1)
  - `createByteCountingStream` (104-117)
🔇 Additional comments (11)
apps/server/src/common/helpers/utils.ts (1)
104-116: Implementation looks correct for the intended use case. The byte counting logic via Transform stream is appropriate for tracking upload progress. Consider whether bidirectional error propagation (transform → source) is needed for cleanup in case downstream consumers error out, though for upload counting this is likely sufficient.
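For readers unfamiliar with the pattern, a pass-through byte counter looks roughly like this (a simplified sketch of what `createByteCountingStream` is described to do; the exact implementation in `utils.ts` may differ):

```typescript
import { Readable, Transform } from "stream";

// Wrap a source stream in a Transform that forwards chunks unchanged
// while tallying how many bytes have flowed through.
function createByteCountingStream(source: Readable) {
  let bytesRead = 0;
  const counter = new Transform({
    transform(chunk: Buffer, _encoding, callback) {
      bytesRead += chunk.length; // count before forwarding
      callback(null, chunk);     // pass the chunk along untouched
    },
  });
  source.pipe(counter);
  // getBytesRead() is meaningful once the counter stream has ended.
  return { stream: counter, getBytesRead: () => bytesRead };
}

// Usage: consume the wrapped stream, then read the final byte count.
const { stream, getBytesRead } = createByteCountingStream(
  Readable.from([Buffer.from("hello"), Buffer.from("world")]),
);
stream.on("data", () => {});
stream.on("end", () => {
  console.log(getBytesRead()); // 10
});
```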
apps/server/src/core/attachment/services/attachment.service.ts (4)
7-7: Imports are appropriate for the streaming implementation. The `Readable` import and `createByteCountingStream` utility are correctly added to support the new streaming upload flow.

Also applies to: 30-30

54-54: Streaming implementation correctly addresses the file size tracking issue. This change properly resolves the previous review concern where `fileSize` was always 0. The `createByteCountingStream` wrapper now accurately tracks bytes as they flow through the stream, and `getBytesRead()` is called after the upload completes to get the actual file size.

Also applies to: 86-91

149-159: Image upload correctly retains buffer-based approach. The image upload path appropriately uses buffering since `compressAndResizeIcon` requires a Buffer for processing. This is the correct design choice for images, which are typically smaller and require transformation.

242-244: Type signature improvement. The `Buffer | Readable` union type properly documents the two supported upload modes, improving type safety over the previous implicit typing.

apps/server/src/integrations/import/services/import.service.ts (6)
1-1: Imports are appropriate for the new functionality. The `NotFoundException` and `createByteCountingStream` imports correctly support the parent page validation and streaming upload features.

Also applies to: 13-13

41-48: Method signature appropriately extended. The optional `parentPageId` and `title` parameters, along with the explicit return type, cleanly extend the import functionality to support hierarchical page imports.

78-90: Title extraction logic is well-structured. The conditional handling correctly distinguishes between user-provided titles (preserved with original content) and extracted titles (removed from content). Consider whether whitespace-only titles (e.g., " ") should be treated as empty; current logic would use them as-is, which may be intentional.

92-98: Parent page validation correctly placed outside the try block. This addresses the previous review concern about `NotFoundException` being masked. The validation now occurs before the `try` block at line 100, allowing the specific error to propagate correctly to the client.

195-218: Position calculation correctly accounts for page hierarchy. The conditional filtering ensures new pages are positioned correctly, either as the last sibling under a specific parent or as the last top-level page in the space.

242-247: Streaming implementation correctly enables large file imports. The `createByteCountingStream` wrapper allows files larger than 2GB to be uploaded without buffering into memory, addressing the PR objective. The file size is accurately captured after the stream completes.
@coderabbitai generate docstrings

✅ Actions performed
Initiated docstring generation; will generate only if new commits exist.
Note Docstrings generation - SUCCESS
…ogress`

Docstrings generation was requested by @gpapp.

* #1812 (comment)

The following files were modified:

* `apps/client/src/App.tsx`
* `apps/client/src/features/page/components/page-import-modal.tsx`
* `apps/client/src/features/page/services/page-service.ts`
* `apps/client/src/features/space/services/space-service.ts`
* `apps/client/src/pages/share-target/share-target.tsx`
* `apps/server/src/common/helpers/utils.ts`
* `apps/server/src/core/attachment/attachment.utils.ts`
* `apps/server/src/database/migrations/20251228T184500-add_progress_to_file_tasks.ts`
* `apps/server/src/integrations/export/utils.ts`
* `apps/server/src/integrations/import/utils/import-formatter.ts`
* `apps/server/src/integrations/import/utils/import.utils.ts`
Hi @gpapp, thanks for your PR. It includes some interesting changes.
This branch allows uploads of large files (+2GB) with progress bar of both the upload and the import processing.
The docker file is also optimized to allow for a faster build.
Summary by CodeRabbit
New Features
Bug Fixes
Chores
Database
✏️ Tip: You can customize this high-level summary in your review settings.