Search before asking
I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
Hello! May I ask if you could spare a little time to read my message under this project: https://github.com/C1nDeRainBo0M/AGCA/issues/3?
I am trying to improve the YOLOv5 backbone by adding an attention mechanism, to verify whether it helps with small-object detection.
When I tried other plug-and-play attention mechanisms such as CBAM and ECA, I noticed that the model is first initialized on the CPU and only moved to the GPU once training starts. Since those modules operate only on the incoming feature map and carry no extra parameters of their own, they run correctly.
But this module has a parameter, self.A0, that is always created on the GPU, so an error is raised during initialization. If I instead create it on the CPU, initialization succeeds, but once training actually starts the other parameters are moved to the GPU while A0 stays on the CPU, which makes the module unusable in the backbone. I asked the author who proposed the module, and his conclusion was that it is a YOLOv5 training-settings problem, so I would like to ask what you think about this issue.
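Here is a minimal toy reproduction of what I think is going on. This is my own sketch, not the real AGCA code; I am assuming self.A0 is stored as a plain tensor attribute rather than being registered with the module, so model.cuda() / model.to(device) never moves it:

```python
import torch
import torch.nn as nn


class ToyAttention(nn.Module):
    # Toy stand-in for the attention block (NOT the real AGCA code).
    # self.A0 is a plain tensor attribute, so nn.Module.to()/.cuda()
    # never sees it and it stays on whatever device it was created on.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.A0 = torch.eye(channels)  # plain attribute: not a Parameter, not a buffer

    def forward(self, x):
        w = self.conv(x).mean(dim=(2, 3))  # (N, C) channel descriptor
        w = w @ self.A0                    # device mismatch if x is on the GPU
        return x * w.sigmoid().view(-1, x.size(1), 1, 1)


m = ToyAttention(8)
if torch.cuda.is_available():
    m = m.cuda()
    print(next(m.parameters()).device)  # cuda:0 -- conv weights were moved
    print(m.A0.device)                  # cpu    -- A0 was left behind
    # m(torch.randn(1, 8, 16, 16).cuda())  # -> RuntimeError: tensors on different devices
```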
Thank you very much for your patience in reading through my problem description and troubleshooting process. I look forward to hearing from you!
Additional
No response
Thank you for reaching out to us with your question regarding the YOLOv5 project. We appreciate the effort you put into improving the backbone of YOLOv5 by adding an attention mechanism to test its effectiveness on small target objects.
Regarding your issue with the model being initialized on the CPU and then moved to the GPU during training, we understand that the error comes from the self.A0 parameter remaining on the CPU while the rest of the model is moved to the GPU. This device mismatch is what makes the module unusable in the backbone.
While we are unable to provide direct support for modifications made to the YOLOv5 project, we recommend checking that every tensor in your module is registered with it: YOLOv5 builds the model on the CPU and then moves it to the selected device with model.to(device), and only registered parameters and buffers follow that move. Additionally, we encourage you to continue discussing this issue with the author who proposed the attention module, as they may have further insight into how A0 is intended to be handled.
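As a general PyTorch note (we have not reviewed the AGCA code itself, so this is only a sketch of the usual fix, reusing the toy block from your reproduction above): a tensor that should follow the model between devices needs to be registered with its module, either as an nn.Parameter if it is trainable or via register_buffer if it is fixed.

```python
import torch
import torch.nn as nn


class ToyAttentionRegistered(nn.Module):
    # Same toy block, but A0 is registered with the module, so it is moved
    # by .to()/.cuda() together with the rest of the model and is saved in
    # the checkpoint's state_dict.
    def __init__(self, channels, learnable_a0=False):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        A0 = torch.eye(channels)
        if learnable_a0:
            self.A0 = nn.Parameter(A0)      # trainable, shows up in model.parameters()
        else:
            self.register_buffer("A0", A0)  # fixed, but still follows the model's device

    def forward(self, x):
        w = self.conv(x).mean(dim=(2, 3))
        w = w @ self.A0                     # A0 is now on the same device as x
        return x * w.sigmoid().view(-1, x.size(1), 1, 1)
```

A quick workaround is to call self.A0.to(x.device) inside forward, but registering the tensor is the idiomatic fix and also keeps A0 in the saved checkpoint.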
We commend your dedication and enthusiasm in working on this project and hope you find a solution to your issue. If you have any other questions or need further assistance, please don't hesitate to ask.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐