Conversation

@moreal (Contributor) commented Jan 17, 2026

I manually reviewed the recent benchmark run and upload actions and wrote the metadata file. I also rendered this metadata on the benchmarks page so that it is possible to identify which commit the results were generated from. It will be displayed as shown below.

https://moreal.github.io/rustpython.github.io/benchmarks
[Screenshot: benchmarks page showing the generation timestamp and RustPython commit link]

See also RustPython/RustPython#6759

Summary by CodeRabbit

Release Notes

  • New Features
    • Benchmarks page now displays generation timestamp and RustPython commit reference for improved transparency.


@coderabbitai bot commented Jan 17, 2026

📝 Walkthrough

Added a new JSON data file to store benchmark generation metadata including timestamp and RustPython commit information, and updated the benchmarks layout template to render this metadata block with generation time and a link to the commit.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Benchmark Metadata: _data/criterion-metadata.json | New JSON file containing generated_at, rustpython_commit, and rustpython_ref fields for tracking benchmark metadata (a sketch of its shape follows below). |
| Benchmarks Layout: _layouts/benchmarks.html | Added a Liquid block to render the generation timestamp and RustPython commit link in the benchmarks template. |
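For reference, a minimal sketch of what _data/criterion-metadata.json could contain. The field names come from this PR; every value below is a placeholder, since the real timestamp, hash, and ref are written by the benchmark upload workflow:

    {
      "generated_at": "2026-01-17T00:00:00Z",
      "rustpython_commit": "0000000000000000000000000000000000000000",
      "rustpython_ref": "refs/heads/main"
    }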

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 A timestamp hops into the page,
With commits linked for every age,
The benchmarks now can tell their tale,
Of when they ran without a fail! ✨

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title "Show when benchmarks run" clearly and accurately summarizes the main change: adding visibility into when benchmarks were executed by displaying the generation timestamp and commit information. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files; the docstring coverage check was skipped. |




@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @moreal, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the transparency and context of the benchmark results displayed on the website. By introducing a dedicated metadata file and integrating its contents into the benchmark layout, users can now easily identify when the benchmarks were run and against which specific version of RustPython, thereby enhancing the reliability and interpretability of the performance data.

Highlights

  • New Benchmark Metadata File: A new JSON file, _data/criterion-metadata.json, has been added to store essential metadata for benchmark runs. This includes the exact timestamp of generation, the specific RustPython commit hash, and the branch reference from which the benchmarks were derived.
  • Benchmark Page Enhancement: The _layouts/benchmarks.html file has been updated to read and display the information from the new metadata file. This enhancement ensures that the benchmark page now clearly shows when the results were generated and provides a direct link to the corresponding RustPython commit, improving traceability and context for the displayed performance data.



Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request adds a useful feature to display benchmark metadata on the benchmarks page. The implementation is sound, but I have a suggestion to improve the code's maintainability and robustness by using a site-wide variable for the GitHub URL and by adding a check to handle cases where the metadata file might be missing. This will make the code cleaner and more resilient.

Comment on lines +25 to +31
    {% assign metadata = site.data["criterion-metadata"] %}
    <div class="text-muted mt-2">
        <small>
            Generated: {{ metadata.generated_at | date: "%Y-%m-%d %H:%M" }} UTC
            | RustPython commit: <a href="https://github.com/RustPython/RustPython/commit/{{ metadata.rustpython_commit }}">{{ metadata.rustpython_commit | truncate: 7, "" }}</a>
        </small>
    </div>
Severity: medium

This block can be improved for better maintainability and robustness:

  1. Hardcoded URL: The GitHub commit URL is hardcoded. It's better to use the site.github variable from _config.yml to make it more maintainable.
  2. Missing Data Check: The code doesn't check if metadata exists before trying to access its properties. If _data/criterion-metadata.json is missing, this will lead to malformed output. It's more robust to wrap the rendering logic in an {% if metadata %} block.

The suggested code below applies both improvements.

    {% assign metadata = site.data["criterion-metadata"] %}
    {% if metadata %}
        <div class="text-muted mt-2">
            <small>
                Generated: {{ metadata.generated_at | date: "%Y-%m-%d %H:%M" }} UTC
                | RustPython commit: <a href="{{ site.github }}commit/{{ metadata.rustpython_commit }}">{{ metadata.rustpython_commit | truncate: 7, "" }}</a>
            </small>
        </div>
    {% endif %}
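As a side note on that first point: for {{ site.github }} to expand to a usable URL prefix, _config.yml would need a matching site-wide value. A minimal sketch, assuming the key is simply named github and points at the upstream repository (both the key name and the URL are assumptions, not part of this PR):

    # _config.yml (hypothetical entry). Jekyll exposes every top-level key here
    # as site.<key> in Liquid, so {{ site.github }}commit/<hash> becomes a full
    # commit URL. The trailing slash matters for the concatenation above.
    github: https://github.com/RustPython/RustPython/

One caveat with this naming: on GitHub Pages, the jekyll-github-metadata plugin also populates site.github with a metadata namespace, so a differently named key may be the safer choice in practice.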

