Hyperparameter tuning #3863
Unanswered
mango-picket asked this question in Help
Hi Everyone,
Context on the problem:
I am training a scikit-learn Pipeline model with custom pre-processing steps and an XGBRegressor, using the SageMaker SKLearn Estimator image in a bring-your-own-training-script setup.
I have a training/validation split of my data that I load inside the training script; I train the pipeline model, make predictions on both the training and validation sets, and log evaluation metrics such as train:mae and validation:mae.
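For concreteness, here is a minimal sketch of what such a training script looks like. The file names, CSV column layout, StandardScaler pre-processing step, and hyperparameter names are placeholders for illustration, not my actual code:

```python
# train.py -- minimal sketch of a bring-your-own-script SageMaker training
# entry point; file names and columns below are assumptions.
import argparse
import os

import joblib
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--max_depth", type=int, default=6)
    parser.add_argument("--eta", type=float, default=0.3)
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--validation", default=os.environ.get("SM_CHANNEL_VALIDATION"))
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR"))
    args = parser.parse_args()

    # Assumed layout: target in the first column, features in the rest.
    train_df = pd.read_csv(os.path.join(args.train, "train.csv"))
    val_df = pd.read_csv(os.path.join(args.validation, "validation.csv"))
    X_train, y_train = train_df.iloc[:, 1:], train_df.iloc[:, 0]
    X_val, y_val = val_df.iloc[:, 1:], val_df.iloc[:, 0]

    # StandardScaler stands in for the custom pre-processing step.
    pipeline = Pipeline([
        ("scaler", StandardScaler()),
        ("model", XGBRegressor(max_depth=args.max_depth, learning_rate=args.eta)),
    ])
    pipeline.fit(X_train, y_train)

    train_mae = mean_absolute_error(y_train, pipeline.predict(X_train))
    val_mae = mean_absolute_error(y_val, pipeline.predict(X_val))

    # Print each metric once, in a stable format, so the tuner's regexes
    # (see the metric definitions further down) match exactly one value.
    print(f"train:mae={train_mae:.6f}")
    print(f"validation:mae={val_mae:.6f}")

    joblib.dump(pipeline, os.path.join(args.model_dir, "model.joblib"))
```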
I am also able to launch a hyperparameter tuning job with my custom metric definitions, and the job finishes successfully. The objective I chose to minimize is validation:mae with the "Bayesian" optimization strategy.
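The launch code looks roughly like the sketch below; the role ARN, instance type, framework version, hyperparameter ranges, and S3 URIs are placeholders, not my real values:

```python
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

estimator = SKLearn(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_type="ml.m5.xlarge",
    instance_count=1,
    framework_version="1.2-1",
)

# Each regex must match the exact string the training script prints;
# the capture group is the number SageMaker records for that metric.
metric_definitions = [
    {"Name": "train:mae", "Regex": r"train:mae=([0-9\.]+)"},
    {"Name": "validation:mae", "Regex": r"validation:mae=([0-9\.]+)"},
]

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:mae",
    objective_type="Minimize",
    strategy="Bayesian",
    metric_definitions=metric_definitions,
    hyperparameter_ranges={
        "max_depth": IntegerParameter(3, 10),
        "eta": ContinuousParameter(0.01, 0.3),
    },
    max_jobs=20,
    max_parallel_jobs=2,
)

tuner.fit({
    "train": "s3://my-bucket/train",            # placeholder URIs
    "validation": "s3://my-bucket/validation",
})
```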
Questions:
Please help me understand what is going on in the SageMaker hyperparameter tuning metric logs and objective tuning. I understand Bayesian optimization, and SageMaker appears to run the optimization itself correctly, but it does not seem to be using the correct metric values when it optimizes.
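My own guess is that the problem could be in how the metric regexes match the job's CloudWatch logs: SageMaker extracts metric values by applying each metric definition's regex to the log stream, and as I understand it, when a metric appears multiple times, the last matched value is what the tuner uses as the final objective. A regex that is too loose can therefore capture an unintended number. Here is how I would sanity-check the regexes locally; the sample log lines are hypothetical:

```python
import re

# Hypothetical log excerpt in the same format the training script prints.
sample_log = """\
epoch 10 done, loss=0.42
train:mae=12.345678
validation:mae=15.432100
"""

metric_definitions = [
    ("train:mae", r"train:mae=([0-9\.]+)"),
    ("validation:mae", r"validation:mae=([0-9\.]+)"),
]

for name, pattern in metric_definitions:
    matches = re.findall(pattern, sample_log)
    # If a metric line is printed more than once, the tuner reportedly
    # takes the last match as the objective value, so inspect all matches.
    print(f"{name}: {matches}")
```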