Make autoencoders take a y #368
Conversation
Codecov Report
|          | master | #368   | +/-    |
|----------|--------|--------|--------|
| Coverage | 89.32% | 89.49% | +0.16% |
| Files    | 48     | 52     | +4     |
| Lines    | 2211   | 2275   | +64    |
| Hits     | 1975   | 2036   | +61    |
| Misses   | 236    | 239    | +3     |
Continue to review full report at Codecov.
Getting closer!
LGTM!!!!
Previously, `y` was implicitly set as a copy of `X`. This separates that logic so that anomaly calculations are done in a scikit-learn-like manner; the server, client, and other components in the codebase are updated accordingly in this commit (see the sketch below the linked issues).
Will close #364
Will also close #371 (because it will no longer be relevant)
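To illustrate the change in calling convention, here is a minimal, hypothetical sketch. The class name and internals are illustrative stand-ins, not gordo's actual API: previously `fit(X)` implicitly trained against a copy of `X`; now an explicit `y` is honoured, scikit-learn style.

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.linear_model import LinearRegression

class AutoEncoderLike(BaseEstimator):
    """Illustrative stand-in for an autoencoder-style model (not the real class)."""

    def fit(self, X, y=None):
        # Old behaviour: y was always an implicit copy of X, so the model
        # could only learn to reconstruct its own (possibly scaled) input.
        # New behaviour: an explicit y is used when given, scikit-learn style.
        if y is None:
            y = np.copy(X)
        self.model_ = LinearRegression().fit(X, y)
        return self

    def predict(self, X):
        return self.model_.predict(X)
```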
Allows an 'autoencoder' to take `X` (via a Pipeline, for example) but then train itself to output the same unscaled values to match `y`. Models will no longer implicitly train themselves on `X` (which was often scaled via previous steps in the Pipeline). Therefore, the Pipeline/preprocessing steps before the model are part of the encoding process, and the model output is compared directly against `y` (so an inverse-transformed-model-output doesn't make sense anymore). A usage sketch follows below.
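As a concrete (hypothetical) usage sketch, assuming a plain scikit-learn Pipeline with `MLPRegressor` standing in for the autoencoder: the scaler transforms `X` on its way to the model, while `y` passes through untouched, so the model learns to map scaled input back to unscaled targets.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 4)) * 50.0  # raw, unscaled sensor-like data

pipe = Pipeline([
    ("scale", MinMaxScaler()),   # preprocessing is part of the "encoding"
    ("model", MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000)),
])

# Pipeline transformers only touch X; y is handed to the final estimator
# unchanged, so the model trains on scaled-X -> unscaled-y.
pipe.fit(X, y=X)

# Model output lives in the same (unscaled) space as y, so anomaly scores
# are direct residuals; no inverse transform of the output is needed.
anomaly = np.linalg.norm(pipe.predict(X) - X, axis=1)
```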
This PR depends on:
- Defaulting `target_tag_list` in workflow generation if not specified