New pruning type for our ICLR22 paper: Grouped Kernel Pruning — a densely structured pruning granularity with better pruning freedom than filter/channel methods. #35
Comments
Hello, I have received your email! (automatic reply)
@choh Thanks for your comments!
@he-y As far as we know, Grouped Kernel Pruning (within the realm of structured pruning) is a fairly novel pruning granularity, and there is no previous adaptation of GKP within the scope of publishers your repo tracks.

The first close adaptation of GKP we know of is [1], but it requires iterative analysis of intermediate feature maps and a special fine-tuning/retraining process with knowledge distillation, and it doesn't provide any experimental results comparable with modern pruning works (only ZF-Net and VGG on ImageNet). We believe these factors might explain why it didn't gain much traction. There's another work [2] that utilizes grouped convolution + pruning, but its pruned networks are not dense due to unequal group sizes. There's also [3], which is pretty much our ICLR 22 GKP framework without the lottery-driven clustering or the greedy pruning algorithm, but it also came from our co-authors: we wanted to get it out before our ICLR 22 paper for a couple of administrative purposes, but it ended up as a concurrent work. I am happy to contribute these, but none of them fall within your repo's scope of publishers, so I don't know if you'd like me to include them.

At the risk of advertising too actively, we'd humbly argue that our ICLR 22 paper is the first clean adaptation of Grouped Kernel level pruning with SOTA results. There are no new GKP methods after ours yet, but we'd welcome the pruning community to explore more within the GKP scope, as we believe it makes sense to pursue a higher degree of pruning freedom while retaining those densely structured properties (empirical results also suggest that a one-shot adoption of GKP is often better than most iterative filter pruning methods with much higher retraining/fine-tuning budgets). And we feel that, given the mass attention this repo receives, one of the best ways to encourage this is probably to add a new pruning type for GKP.

Let me know what you think, thx!

[1] Niange Yu et al., Accelerating Convolutional Neural Networks by Group-wise 2D-filter Pruning, IJCNN 2017.
@choh Thank you for your detailed explanation of this direction! It's something between weight pruning and filter pruning.
@he-y Thank you, that's fair enough. I have updated my issue title to provide more information, and I will certainly reach out should there be more methods following the GKP direction. Good day :)
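For readers less familiar with the granularity discussed in the exchange above, here is a minimal, hypothetical sketch of what pruning at the grouped kernel level does to a convolutional weight tensor. The helper name, the magnitude criterion, and the group assignment are illustrative assumptions, not the paper's actual lottery-regulated algorithm: filters are split into equally sized groups, and within each group the same number of input-channel kernels is removed, so the surviving weights stack back into the weight of a dense grouped convolution.

```python
# Hypothetical illustration of the grouped-kernel pruning granularity (not the
# ICLR'22 selection algorithm): split filters into equal groups, then drop whole
# input-channel kernels per group, keeping the same count in every group so the
# result stays densely structured.
import torch

def grouped_kernel_prune(weight, num_groups, keep_per_group):
    """weight: [C_out, C_in, kH, kW] -> (grouped-conv weight, kept indices per group)."""
    c_out, c_in, kh, kw = weight.shape
    assert c_out % num_groups == 0
    filters_per_group = c_out // num_groups

    kept_weights, kept_indices = [], []
    for g in range(num_groups):
        group = weight[g * filters_per_group:(g + 1) * filters_per_group]
        # Score each input-channel kernel by its L1 norm summed over the group
        # (a simple magnitude criterion, used here purely for illustration).
        scores = group.abs().sum(dim=(0, 2, 3))            # [C_in]
        keep = torch.topk(scores, keep_per_group).indices  # equal count per group
        kept_indices.append(keep)
        kept_weights.append(group[:, keep])                 # [filters_per_group, keep, kH, kW]

    # Equal group sizes mean the survivors form a valid groups=num_groups conv
    # weight of shape [C_out, keep_per_group, kH, kW] -- dense, no masks needed.
    return torch.cat(kept_weights, dim=0), kept_indices

w = torch.randn(64, 32, 3, 3)
pruned_w, idx = grouped_kernel_prune(w, num_groups=4, keep_per_group=16)
print(pruned_w.shape)  # torch.Size([64, 16, 3, 3])
```

Filter pruning would only allow dropping whole rows of `weight`, while unstructured weight pruning would leave a sparse tensor; grouped kernel pruning sits in between, which is the "between weight pruning and filter pruning" point made above.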
Greetings,
We are the authors of the said paper/code, and we thank you for your inclusion; it has certainly generated some traffic for us.
However, it might be worth noting that our pruning granularity is not `F` (filter level). Our algorithm prunes at a Grouped Kernel level, which is, to the best of our knowledge, the most fine-grained approach under the constraint of outputting a densely structured pruned network, much like channel or filter pruning. Since pushing the pruning freedom further while remaining structured is probably our most important contribution, we'd appreciate a simple fix (and maybe a new type category if you're feeling generous, as we'd certainly welcome more adaptations of the grouped kernel pruning framework). Thanks!
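To illustrate the "densely structured" property claimed above, below is a hedged sketch of how a grouped-kernel-pruned weight can be loaded back into a standard grouped convolution. The module name and the per-group channel gather are assumptions for illustration, not the released code: the only extra operation at inference time is an index-select over input channels, with no sparse kernels or masks.

```python
# Hypothetical reconstruction of a dense module from a grouped-kernel-pruned
# weight of shape [C_out, keep_per_group, kH, kW] (e.g. from the sketch above).
import torch
import torch.nn as nn

class GroupedKernelPrunedConv(nn.Module):
    def __init__(self, pruned_weight, kept_indices, stride=1, padding=1):
        super().__init__()
        c_out, keep_per_group, kh, kw = pruned_weight.shape
        self.num_groups = len(kept_indices)
        # Group g reads only its surviving input channels, gathered up front.
        self.register_buffer("gather_idx", torch.cat(list(kept_indices)))
        self.conv = nn.Conv2d(
            in_channels=self.num_groups * keep_per_group,
            out_channels=c_out,
            kernel_size=(kh, kw),
            stride=stride,
            padding=padding,
            groups=self.num_groups,
            bias=False,
        )
        with torch.no_grad():
            self.conv.weight.copy_(pruned_weight)

    def forward(self, x):
        # Dense grouped convolution over the kept channels; no masking involved.
        return self.conv(x.index_select(1, self.gather_idx))
```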