
Codspeed - performance benchmarks #471

Open · wants to merge 6 commits into master from pytest-codspeed
Conversation

@tony tony (Member) commented Sep 22, 2024

Note

Problem

It's difficult to catch performance regressions or improvements over time or within a pull request.

Changes

Add performance benchmarks

TBD

Set up CodSpeed

Configure the project on the CodSpeed website, set the secret, etc.

py(deps[test]) Add pytest-codspeed

See also:

Summary by Sourcery

Add performance benchmarks using Codspeed. Integrate Codspeed into the CI workflow to automatically run performance tests and report results.

CI:

  • Integrate Codspeed into the CI workflow to trigger performance tests on pull requests and pushes, and on demand via manual dispatch.

Tests:

  • Add pytest-codspeed to enable performance testing.

codecov bot commented Sep 22, 2024

Codecov Report

Attention: Patch coverage is 83.64780% with 26 lines in your changes missing coverage. Please review.

Project coverage is 64.09%. Comparing base (bc6e897) to head (4ac41d3).

Files with missing lines       Patch %   Lines
src/libvcs/pytest_plugin.py    80.30%    16 Missing and 10 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #471      +/-   ##
==========================================
+ Coverage   63.85%   64.09%   +0.24%     
==========================================
  Files          40       40              
  Lines        3591     3724     +133     
  Branches      774      790      +16     
==========================================
+ Hits         2293     2387      +94     
- Misses        772      800      +28     
- Partials      526      537      +11     

☔ View full report in Codecov by Sentry.

@tony tony force-pushed the pytest-codspeed branch 3 times, most recently from 61f4aa8 to 5ec5337 on September 22, 2024 11:24
@tony tony force-pushed the pytest-codspeed branch 5 times, most recently from 9265999 to 99fa170 on October 11, 2024 19:00
@tony tony force-pushed the pytest-codspeed branch 2 times, most recently from f649e7f to d0e1548 on October 12, 2024 14:53
@tony tony force-pushed the master branch 2 times, most recently from 9b797e7 to bc6e897 on October 12, 2024 15:51
tony added a commit that referenced this pull request Oct 12, 2024
# Problem

Git, Mercurial, and Subversion repositories are unnecessarily reinitialized for each test.

- We're not utilizing session-based scoping.
  - A single initial repo could be created, then copied to [`tmp_path`](https://docs.pytest.org/en/8.3.x/how-to/tmp_path.html#the-tmp-path-fixture) using [`shutil.copytree`](https://docs.python.org/3/library/shutil.html#shutil.copytree) ([source](https://github.com/python/cpython/blob/v3.13.0/Lib/shutil.py#L550-L605)).

Issue #471 highlighted this inefficiency: benchmarks showed tens of thousands of redundant function calls.
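
A minimal sketch of the approach (the fixture names below are illustrative, not the actual libvcs fixtures): build one template repository per test session, then copy it into each test's `tmp_path` with `shutil.copytree` instead of re-initializing a repo every time.

```
import pathlib
import shutil
import subprocess

import pytest


@pytest.fixture(scope="session")
def template_git_repo(tmp_path_factory: pytest.TempPathFactory) -> pathlib.Path:
    """Hypothetical session-scoped fixture: initialize the repo only once."""
    repo_path = tmp_path_factory.mktemp("template") / "repo"
    repo_path.mkdir()
    subprocess.run(["git", "init", str(repo_path)], check=True)
    (repo_path / "README").write_text("seed")
    subprocess.run(["git", "-C", str(repo_path), "add", "README"], check=True)
    subprocess.run(
        [
            "git",
            "-C",
            str(repo_path),
            "-c",
            "user.name=test",
            "-c",
            "user.email=test@example.com",
            "commit",
            "-m",
            "Initial commit",
        ],
        check=True,
    )
    return repo_path


@pytest.fixture
def git_repo(template_git_repo: pathlib.Path, tmp_path: pathlib.Path) -> pathlib.Path:
    """Per-test copy of the template; copytree is far cheaper than init + commit."""
    dest = tmp_path / "repo"
    shutil.copytree(template_git_repo, dest)
    return dest
```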

# Improvement

```
❯ hyperfine -L branch master,pytest-plugin-fixture-caching 'git checkout {branch} && py.test'
Benchmark 1: git checkout master && py.test
  Time (mean ± σ):     32.062 s ±  0.869 s    [User: 41.391 s, System: 9.931 s]
  Range (min … max):   30.878 s … 33.583 s    10 runs

Benchmark 2: git checkout pytest-plugin-fixture-caching && py.test
  Time (mean ± σ):     14.659 s ±  0.495 s    [User: 16.351 s, System: 4.433 s]
  Range (min … max):   13.990 s … 15.423 s    10 runs

Summary
  git checkout pytest-plugin-fixture-caching && py.test ran
    2.19 ± 0.09 times faster than git checkout master && py.test
```

# Changes

## Pytest fixtures overhaul

1. Create a base VCS repo.
2. For subsequent tests, copy and modify from this template.
@tony tony force-pushed the master branch 2 times, most recently from 7f69eab to c480bb4 on November 24, 2024 15:12
@tony tony (Member, Author) commented Jan 11, 2025

@sourcery-ai review

Copy link

sourcery-ai bot commented Jan 11, 2025

Reviewer's Guide by Sourcery

This pull request introduces performance benchmarks using CodSpeed. It sets up CodSpeed configuration, adds the pytest-codspeed dependency, and integrates it into the testing workflow. A benchmark is added to the test_repo_git_obtain_initial_commit_repo test function.
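
As a rough illustration of what that looks like, a sketch only: the real test in tests/sync/test_git.py builds the remote repository through the project's pytest fixtures, and the GitSync call shown here is an assumption about the API being benchmarked.

```
import pathlib

from pytest_codspeed import BenchmarkFixture

from libvcs.sync.git import GitSync


def test_repo_git_obtain_initial_commit_repo(
    tmp_path: pathlib.Path,
    benchmark: BenchmarkFixture,
    # ...fixtures that prepare the remote repo are omitted in this sketch...
) -> None:
    remote_url = f"file://{tmp_path / 'remote_repo'}"  # placeholder remote

    def obtain() -> None:
        # The work being measured: obtain the freshly initialized repo
        repo = GitSync(url=remote_url, path=tmp_path / "checkout")
        repo.obtain()

    # pytest-codspeed's benchmark fixture calls the function and records
    # its performance under CodSpeed instrumentation in CI
    benchmark(obtain)
```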

Sequence diagram for benchmark execution flow

sequenceDiagram
    participant Dev as Developer
    participant CI as CI Pipeline
    participant CS as CodSpeed

    Dev->>CI: Push code changes
    activate CI
    CI->>CI: Run tests with pytest
    CI->>CI: Execute benchmarks
    CI->>CS: Send benchmark results
    activate CS
    CS->>CS: Analyze performance
    CS-->>Dev: Report performance changes
    deactivate CS
    deactivate CI

File-Level Changes

Change: Set up CodSpeed to collect performance benchmarks.
Details:
  • Added a CodSpeed test step to the GitHub Actions workflow.
  • Added the pytest-codspeed Python dependency.
  • Added a workflow_dispatch trigger to allow CodSpeed to perform backtests.
  • Added necessary environment variables for CodSpeed integration.
  • Added type overrides for pytest_codspeed in pyproject.toml.
Files: .github/workflows/tests.yml, pyproject.toml

Change: Added a performance benchmark to an existing test.
Details:
  • Added a benchmark to the test_repo_git_obtain_initial_commit_repo function using the BenchmarkFixture.
Files: tests/sync/test_git.py

@sourcery-ai sourcery-ai bot left a comment

Hey @tony - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Remember to add the benchmark annotation/fixture as noted in your TODO. This is needed for proper benchmark function identification.
Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

