
AIP-8056 remove ReadWriteMany volume #292

Merged · 1 commit into feature/aip · Mar 12, 2024

Conversation

talebzeghmi (Collaborator) commented:
- PyTorch distributed data parallel training ended up being a non-scenario in favor of scaling up GPUs, so we can remove this unused feature and, especially, its finicky integration tests!

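For context, ReadWriteMany (RWX) is the Kubernetes PersistentVolumeClaim access mode that lets multiple pods mount the same volume read-write at once, which is what distributed data parallel workers would have needed to share a filesystem across nodes. Below is a minimal sketch of the kind of RWX claim such a feature would provision, written against the official `kubernetes` Python client; the claim name, namespace, and storage size are illustrative placeholders, not values from this repository.

```python
from kubernetes import client, config

# Illustrative ReadWriteMany claim; the name, namespace, and storage
# request are placeholders, not values taken from the removed feature.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-training-scratch"},
    "spec": {
        # ReadWriteMany: multiple pods (e.g. distributed training
        # workers on different nodes) mount the volume read-write,
        # which requires an RWX-capable backend such as NFS or CephFS.
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

config.load_kube_config()  # use load_incluster_config() inside a pod
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```

Scaling up to more GPUs on a single node sidesteps this entirely: all worker processes share the node's local filesystem, so no RWX volume (and none of its storage-backend-dependent integration tests) is needed.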
@cloudw (Collaborator) left a comment:

Thanks for cleaning up!

@talebzeghmi merged commit dc204ab into feature/aip on Mar 12, 2024 (4 checks passed).
@talebzeghmi deleted the tz/AIP-8056-remove-rwmany branch on March 12, 2024 at 17:05.