
Issue 139: MLP multi-class classifier #142

Conversation

Hector-hedb12
Contributor

Resolves #139

@Hector-hedb12
Contributor Author

Hector-hedb12 commented Mar 26, 2019

@csala since softmax is just a value of the last_dense_activation parameter, I was thinking of renaming this primitive from keras.Sequential.MLPSoftmaxClassifier.json to keras.Sequential.MLPClassifier.json. That way we could cover the binary classification case as well. I think it is the same network structure 🤔 . What do you think?

@csala
Contributor

csala commented Mar 29, 2019

@Hector-hedb12 I think that we can keep them separated, as there are a couple of other differences, and they clearly respond to different needs (binary/multi-class).

What I would do, though, is rename this one to MLPMultiClassClassifier, so we can have a Binary/MultiClass pair.
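Under that naming, the two annotations would differ mainly in the final activation (and the matching loss). A rough sketch of the relevant fields for the multi-class variant, with field names and values assumed from the discussion above rather than taken from the actual file:

```json
{
    "name": "keras.Sequential.MLPMultiClassClassifier",
    "hyperparameters": {
        "fixed": {
            "last_dense_activation": {
                "type": "str",
                "default": "softmax"
            },
            "loss": {
                "type": "str",
                "default": "keras.losses.categorical_crossentropy"
            }
        }
    }
}
```

The binary counterpart would presumably use "sigmoid" and a binary cross-entropy loss in the same slots.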

"type": "int",
"default": 64
},
"input_dim": {
Since input_dim represents the number of input features and depends on the fit data, this should be set as a fit argument instead of a hyperparameter.
It should also not have a default, as we want the user to provide it.

When setting this as a fit argument, I would call it features (see #147), and add a description explaining that this is the number of features in X
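Following this suggestion, the fit section of the annotation might look roughly like the sketch below; the exact schema keys are assumed to follow the conventions of the other MLPrimitives annotations:

```json
"fit": {
    "args": [
        {
            "name": "X",
            "type": "ndarray"
        },
        {
            "name": "y",
            "type": "array"
        },
        {
            "name": "features",
            "description": "Number of features in X",
            "type": "int"
        }
    ]
}
```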

"type": "bool",
"default": true
},
"dense_units": {
This should be split into dense_1_units and dense_2_units and set as tunable hyperparameters.
I'm not sure about the ideal range, but in some other cases we have used (1, 500).
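Concretely, the tunable section would then contain two entries like these (the default of 64 comes from the fragment above; the range is the tentative one mentioned in the comment):

```json
"tunable": {
    "dense_1_units": {
        "type": "int",
        "default": 64,
        "range": [1, 500]
    },
    "dense_2_units": {
        "type": "int",
        "default": 64,
        "range": [1, 500]
    }
}
```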

}
},
"tunable": {
"dropout_rate": {
This should be split into dropout_1_rate and dropout_2_rate.
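Mirroring the dense_units split, the two dropout entries could look like this sketch; the default and range here are purely illustrative, not taken from the actual annotation:

```json
"dropout_1_rate": {
    "type": "float",
    "default": 0.0,
    "range": [0.0, 0.75]
},
"dropout_2_rate": {
    "type": "float",
    "default": 0.0,
    "range": [0.0, 0.75]
}
```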

},
"init_params": {
"keras.Sequential.MLPMultiClassClassifier#1": {
"epochs": 1,
Since this example is very quick, let's remove this and let the default (20) be used.
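With the override removed, the pipeline's init_params entry for this primitive can simply be left empty so the annotation default of 20 epochs applies:

```json
"init_params": {
    "keras.Sequential.MLPMultiClassClassifier#1": {}
}
```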

@Hector-hedb12
Contributor Author

Updated @csala

@csala
Contributor

csala commented Apr 1, 2019

Looks good @Hector-hedb12 !

@csala csala merged commit 82e4826 into MLBazaar:master Apr 1, 2019