I think Flathub should implement an official way for apps to indicate whether they use online AI/LLM features, and require all apps that do to disclose which type. This should be displayed prominently, like the verified/unverified badge or the "potentially unsafe" and "community built" labels.
I don't want to install and run an app only to find I can't use it because it's sending my data to companies I consider the most evil in the world.
Apps using AI offline should possibly also be marked. It's similar to the "why have an unverified badge" argument: if an app mentions AI features in its description, or has a name hinting at them, or you've heard rumors from elsewhere, but there's no official info on whether it runs online or offline, that creates uncertainty. And some people might not want to use offline models either.
Thank you for listening, and for all your really great and important work!