Perp QA #17
**Command:** `/ask It would be great to see a table that summarizes the interactions and can distinguish between issue and pull request`

**Tokens:** `{"prompt_tokens":1031,"completion_tokens":390,"total_tokens":1421}` — SentencePieceProcessorEncoder, Model: SPP > 825 tokens

**Active Fourth:** 1st

**Response:**

> Keyrxng: /ask
> ubqbot[bot]: Currently, UbiquityAI is experiencing technical difficulties in generating context. Please allow time for the service to recover before requesting information. If you need immediate assistance, please contact Ubiquity.
>
> LinkedIssueContext:
> Keyrxng: /agent calculate the labels needed (stick to the predefined options) and once they are set, assign the issue to me
> Keyrxng: /agent I reset everything for testing. Can you recalculate the labels needed (stick to the predefined options) and once they are set, assign the issue to me
> ubqbot[bot]: The labels have been recalculated and set for the issue. The issue has also been assigned to you, Keyrxng. You can now proceed with working on the issue. Let me know if there's anything else I can assist you with.
>
> system: =============== Issue #13: feat: count input + =============== added count input
> Keyrxng: /review
> ubqbot[bot]: As of now, UbiquityAI is still experiencing technical difficulties in generating context. I apologize for any inconvenience. Please allow time for the service to recover before requesting additional information. If you need immediate assistance, please contact Ubiquity.
>
> LinkedPRContext:
> CurrentIssueComments:
> Keyrxng: /ask It would be great to see a table that summarizes the interactions and can distinguish between issue and pull request
> Question: It would be great to see a table that summarizes the interactions and can distinguish between issue and pull request
> Keyrxng: /update
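The `Tokens` lines in these entries appear to come from a SentencePiece-based counter. Below is a minimal sketch of how prompt/completion counts of that shape could be produced; the model file path, sample strings, and function names are assumptions for illustration, not the plugin's actual code.

```python
# Minimal token-accounting sketch, assuming a local SentencePiece model file.
# "tokenizer.model" and the sample strings are hypothetical placeholders.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

def count_tokens(text: str) -> int:
    # encode() returns a list of token ids; its length is the token count
    return len(sp.encode(text))

prompt = "…assembled issue / PR context…"
completion = "…model reply…"

usage = {
    "prompt_tokens": count_tokens(prompt),
    "completion_tokens": count_tokens(completion),
    "total_tokens": count_tokens(prompt) + count_tokens(completion),
}
print(usage)  # mirrors the Tokens{...} shape logged in these entries
```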
**Command:** `/ask return a markdown table that houses all of the context you've been given. Summary and shorten it where possible but try to demonstrate your ability to capture many data points spread across multiple sources`

**Tokens:** `{"prompt_tokens":1580,"completion_tokens":1191,"total_tokens":2771}` — SentencePieceProcessorEncoder, Model: SPP > 1308 tokens

**Active Fourth:** 2nd

**Response:**

> Given context: Issue #13: feat: count input + - resolved #12 Added count input
> Review requests: Keyrxng: /review
> Response: ubqbot[bot]: Keyrxng: /review for the issued #13: feat: count input + (resolved #12)
> Keyrxng: /review *(repeated roughly fifty times)*
>
> Linked PR Context:
> CurrentIssueComments:
> Keyrxng: /ask It would be great to see a table that summarizes the interactions and can distinguish between issue and pull request
> ubqbot[bot]: Currently, UbiquityAI is experiencing technical difficulties in generating context. Please allow time for the service to recover before requesting information. If you need immediate assistance, please contact Ubiquity.
>
> Linked Issue Context:
> Keyrxng: /agent calculate the labels needed (stick to the predefined options) and once they are set, assign the issue to me
> Keyrxng: /agent I reset everything for testing. Can you recalculate the labels needed (stick to the predefined options) and once they are set, assign the issue to me
> ubqbot[bot]: The labels have been recalculated and set for the issue. The issue has also been assigned to you, Keyrxng. You can now proceed with working on the issue. Let me know if there's anything else I can assist you with.
>
> system: =============== Issue #13: feat: count input + =============== added count input
> Keyrxng: /review *(repeated roughly fifty more times before the output cuts off)*
**Command:** `/ask from the linked context, create a table of the title and body of just the issue and/or pull request`

**Tokens:** `{"prompt_tokens":2881,"completion_tokens":69,"total_tokens":2950}` — SentencePieceProcessorEncoder, Model: SPP > 2332 tokens

**Active Fourth:** 3rd

**Response:**

> We have identified potential specs related to Issue #13: feat: count input + and PR context. We can ensure that we include all the relevant details by summarizing the interactions in a targeted markdown table. The table should distinguish between issue and pull request. The response will be based on the information provided in the linked context.
**Command:** `/ask return a markdown table of the context you've been provided with`

**Tokens:** `{"prompt_tokens":453,"completion_tokens":263,"total_tokens":716}` — SentencePieceProcessorEncoder, Model: SPP > 127 tokens

**Response:**

> Note: The list for issue #16 is not provided in the provided context. The relevant context includes only the comments for issue #15 which are:
> volunteer:
> ubqbot:
**Command:** `/ask battle of the bots! Return a markdown table properly formatted which summarizes the interactions and linked context, be succinct and demonstrate your potential.`

**Tokens:** `{"prompt_tokens":546,"completion_tokens":50,"total_tokens":596}` — SentencePieceProcessorEncoder, Model: SPP > 127 tokens

**Perp Response:**

**GPT Response:**

**GPT Tokens:** `{"output":170,"input":592,"total":762}`
Part of #16 & #13
We are determining how much context you can consume from various sources within your own context window restrictions.
If you are able to perform reasonably well independently, that's great. However, it may be best to use the 16k gpt-3.5-turbo context window and let GPT decide which context to feed to you; this way we can guarantee you are fed the correct context while staying within your limits. A rough sketch of that approach follows below.
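A minimal sketch of that flow, under stated assumptions: the gathered issue/PR context is measured with tiktoken, passed straight through when it fits the downstream budget, and otherwise condensed by gpt-3.5-turbo-16k first. The budget value, prompt wording, and function names are illustrative, not the bot's actual implementation.

```python
# Sketch only: pre-select / condense context with the 16k-window model before
# handing it to a smaller downstream model. Budget and prompts are assumptions.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def token_count(text: str) -> int:
    return len(enc.encode(text))

def prepare_context(raw_context: str, budget: int = 800) -> str:
    # If the raw context already fits the downstream window, pass it through unchanged.
    if token_count(raw_context) <= budget:
        return raw_context
    # Otherwise let the 16k-context model decide what to keep and summarize it.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-16k",
        messages=[
            {"role": "system",
             "content": "Condense the linked issue and PR context, keeping every concrete data point."},
            {"role": "user", "content": raw_context},
        ],
    )
    return resp.choices[0].message.content
```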