
Cache Fuse.js instances instead of rebuilding on every search #5342

Open
repparw wants to merge 1 commit into tridactyl:master from repparw:cache-fuse-instances

Conversation


repparw (Contributor) commented Feb 16, 2026

Summary

  • Add getter/setter for options that automatically invalidates Fuse index when options change
  • Add buildFuseIndex() method for lazy initialization instead of rebuilding on every search
  • Add setOptionsWithoutInvalidation() for internal reordering without triggering rebuild

This addresses the performance issue noted in the code comments: "PERF: Could be expensive not to cache Fuse() // yeah, it was."

Before: Fuse.js index was rebuilt on every call to scoredOptions(), which happens on every keystroke during completion search.

After: Fuse index is built once when options change, then reused for subsequent searches until options are updated again.
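The caching pattern described above can be sketched roughly as follows. This is an illustrative sketch, not the actual Tridactyl code: the class name, the `fuseKeys` option shape, and the minimal `Fuse` stand-in (used here only so the example is self-contained) are assumptions; the real implementation uses fuse.js directly.

```js
// Minimal stand-in for fuse.js so the sketch is self-contained; the real
// code constructs `new Fuse(list, fuseOptions)` from the "fuse.js" package.
class Fuse {
    constructor(list) {
        this.list = list
        Fuse.builds = (Fuse.builds || 0) + 1 // count how often an index is built
    }
    search(query) {
        return this.list.filter(e => e.fuseKeys.some(k => k.includes(query)))
    }
}

// Sketch of the PR's caching pattern (names are illustrative).
class CompletionSource {
    constructor(fuseOptions) {
        this._options = []
        this._fuse = undefined // cached Fuse instance
        this.fuseOptions = fuseOptions
    }

    get options() {
        return this._options
    }

    // Setter invalidates the cached index whenever options change.
    set options(newOptions) {
        this._options = newOptions
        this._fuse = undefined
    }

    // Reorder options internally without forcing an index rebuild.
    setOptionsWithoutInvalidation(newOptions) {
        this._options = newOptions
    }

    // Build the index lazily on first use, then reuse it.
    buildFuseIndex() {
        if (this._fuse === undefined) {
            const searchThis = this._options.map((elem, index) => ({
                index,
                fuseKeys: elem.fuseKeys,
            }))
            this._fuse = new Fuse(searchThis, this.fuseOptions)
        }
        return this._fuse
    }

    // Called on every keystroke; no longer rebuilds the index each time.
    scoredOptions(query) {
        return this.buildFuseIndex().search(query)
    }
}
```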

Testing

  • Test completions work correctly (tabs, bookmarks, history, etc.)
  • Verify search results are still accurate
  • Check for any regression in completion performance

repparw force-pushed the cache-fuse-instances branch from b0b6d4a to 162649d on February 16, 2026 at 22:32
- Add getter/setter for options that auto-invalidates Fuse index when options change
- Fuse index is now built lazily once when first searched, then reused until options change
- Direct _options assignment for internal reordering to avoid index invalidation
repparw force-pushed the cache-fuse-instances branch from 162649d to 3bbafc5 on February 16, 2026 at 22:38

repparw commented Feb 16, 2026

Performance Benchmark

Tested with synthetic completion data to simulate real-world usage:

| Options | OLD (create Fuse each search) | NEW (cached) | Speedup |
| --- | --- | --- | --- |
| 500 | 744 ms | 472 ms | 1.58x |
| 1000 | 1476 ms | 991 ms | 1.49x |
| 2000 | 3129 ms | 1925 ms | 1.63x |

Test: 100 iterations × 7 different queries per dataset

Benchmark code:

```js
const Fuse = require('fuse.js');

const sizes = [500, 1000, 2000];
const queries = ['opt', 'opt-5', 'desc', 'item', '999', 'option-100', 'xxx'];

for (const size of sizes) {
    const options = [];
    for (let i = 0; i < size; i++) {
        options.push({
            value: `option-${i}`,
            fuseKeys: [`option-${i}`, `desc-${i}`, `item number ${i}`],
            state: "normal"
        });
    }

    const fuseOptions = {
        keys: ["fuseKeys"],
        shouldSort: true,
        includeScore: true,
        findAllMatches: true,
        ignoreLocation: true,
        ignoreFieldNorm: true,
        threshold: 0.3,
        minMatchCharLength: 1,
    };

    function searchOld(query) {
        const searchThis = options.map((elem, index) => ({ index, fuseKeys: elem.fuseKeys }));
        const fuse = new Fuse(searchThis, fuseOptions);
        return fuse.search(query);
    }

    let cachedFuse = undefined;
    function searchNew(query) {
        if (cachedFuse === undefined) {
            const searchThis = options.map((elem, index) => ({ index, fuseKeys: elem.fuseKeys }));
            cachedFuse = new Fuse(searchThis, fuseOptions);
        }
        return cachedFuse.search(query);
    }

    // Warmup
    for (const q of queries) { searchOld(q); searchNew(q); }

    // OLD
    let start = Date.now();
    for (let iter = 0; iter < 100; iter++) {
        for (const q of queries) { searchOld(q); }
    }
    const oldTime = Date.now() - start;

    // NEW
    cachedFuse = undefined;
    start = Date.now();
    for (let iter = 0; iter < 100; iter++) {
        for (const q of queries) { searchNew(q); }
    }
    const newTime = Date.now() - start;

    console.log(`${size} options: OLD=${oldTime}ms, NEW=${newTime}ms, SPEEDUP=${(oldTime/newTime).toFixed(2)}x`);
}
```


repparw commented Feb 17, 2026

Performance Testing Results

I tested this PR using the perf tooling from the perf-markers-only branch. Here's the data:

Histogram (distribution of search durations, in ms):

```
0                   #######################################################
1.2142857142857142  ############################################################
2.4285714285714284  ####################
3.642857142857143   #####
4.857142857142857   #####
6.071428571428571   ###############
7.285714285714286
8.5
9.714285714285714
10.928571428571429  #####
12.142857142857142
13.357142857142858
14.571428571428571  #####
15.785714285714286  #####
17                  #####
```
Raw performance dump (first few entries):

```json
[
  {
    "name": "tri/BufferCompletionSource/updateOptions:625658:start",
    "entryType": "mark",
    "startTime": 908,
    "duration": 0
  },
  {
    "name": "tri/BufferCompletionSource/updateOptions:625658",
    "entryType": "measure",
    "startTime": 908,
    "duration": 17
  },
  {
    "name": "tri/BufferCompletionSource/updateOptions:210565",
    "entryType": "measure",
    "startTime": 920,
    "duration": 5
  },
  {
    "name": "tri/BufferCompletionSource/updateOptions:217731",
    "entryType": "measure",
    "startTime": 2332,
    "duration": 2
  },
  {
    "name": "tri/BufferCompletionSource/updateOptions:496849",
    "entryType": "measure",
    "startTime": 2858,
    "duration": 1
  }
]
```
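A histogram like the one above can be derived by bucketing the `duration` fields of the dumped `measure` entries. The sketch below is a guess at that bucketing, not the perf-markers-only tooling itself; the bucket count (15, matching the rows shown) and the label format are assumptions.

```js
// Bucket measure durations into a text histogram. The bucket count (15)
// and label format are assumptions modeled on the output shown above.
function histogram(entries, buckets = 15) {
    const durations = entries
        .filter(e => e.entryType === "measure")
        .map(e => e.duration)
    const max = durations.length ? Math.max(...durations) : 0
    const width = max / (buckets - 1) || 1 // bucket width; avoid divide-by-zero
    const counts = new Array(buckets).fill(0)
    for (const d of durations) {
        counts[Math.min(Math.round(d / width), buckets - 1)]++
    }
    return counts
        .map((n, i) => `${i * width}\t${"#".repeat(n)}`)
        .join("\n")
}
```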

Conclusion

The optimization is working as expected:

| Search type | Duration |
| --- | --- |
| First search (creates Fuse instance) | ~17 ms |
| Subsequent searches (cached Fuse) | ~1-5 ms |

This represents a 3-10x speedup for repeated searches during completion filtering. The Fuse instance is now built once and cached, then reused until the options change.
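As a sanity check on the quoted range: dividing the ~17 ms first-search cost by the ~1-5 ms cached-search cost bounds the per-search speedup, and the 3-10x figure is a rounded summary within that bound.

```js
// Bound the speedup using the durations measured above.
const firstSearchMs = 17      // builds the Fuse index, then searches
const cachedSearchMs = [5, 1] // slowest and fastest cached searches
const speedups = cachedSearchMs.map(ms => firstSearchMs / ms)
// speedups[0] = 3.4 (worst case), speedups[1] = 17 (best case)
```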
