NeuralHydrology has updated the way it saves the scalers needed to process input and output data. Any new LSTM model trained with NeuralHydrology stores its scaler values in YAML format rather than pickle files. This is an improvement, because it means the BMI LSTM no longer depends on pickle at all. Updated code to read the scalers from YAML is pasted below; the change can also be found in this PR from my branch:
```python
# NeuralHydrology now writes the scaler to train_data/train_data_scaler.yml
scaler_file = os.path.join(self.cfg_train['run_dir'], 'train_data', 'train_data_scaler.yml')
with open(scaler_file, 'r') as f:
    scaler_data = yaml.safe_load(f)
self.train_data_scaler = scaler_data
self.attribute_means = scaler_data.get('attribute_means', {})
self.attribute_stds = scaler_data.get('attribute_stds', {})
self.feature_scale = {k: v['data'] for k, v in scaler_data['xarray_feature_scale']['data_vars'].items()}
self.feature_center = {k: v['data'] for k, v in scaler_data['xarray_feature_center']['data_vars'].items()}
if self.verbose > 1:
    print(self.feature_center)
    print(self.feature_scale)
    print(self.attribute_means)
    print(self.attribute_stds)
```
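For reference, here is a minimal, self-contained sketch of the YAML layout the parsing above assumes. The top-level keys mirror the ones read by the code; the variable names and numeric values are made up for illustration only:

```python
import yaml

# Hypothetical contents of train_data_scaler.yml; names and numbers are
# illustrative, not taken from a real NeuralHydrology run.
scaler_yaml = """
attribute_means:
  elev_mean: 538.2
attribute_stds:
  elev_mean: 120.5
xarray_feature_center:
  data_vars:
    total_precipitation:
      data: 3.1
xarray_feature_scale:
  data_vars:
    total_precipitation:
      data: 6.7
"""

scaler_data = yaml.safe_load(scaler_yaml)
# Same dict comprehensions as in the snippet above
feature_center = {k: v['data'] for k, v in scaler_data['xarray_feature_center']['data_vars'].items()}
feature_scale = {k: v['data'] for k, v in scaler_data['xarray_feature_scale']['data_vars'].items()}
print(feature_center)  # {'total_precipitation': 3.1}
print(feature_scale)   # {'total_precipitation': 6.7}
```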
```python
#------------------------------------------------------------
"""Mean and standard deviation for the inputs and LSTM outputs"""
self.out_mean = self.train_data_scaler['xarray_feature_center']['data_vars'][self.cfg_train['target_variables'][0]]['data']
self.out_std = self.train_data_scaler['xarray_feature_scale']['data_vars'][self.cfg_train['target_variables'][0]]['data']
self.input_mean.extend([self.train_data_scaler['xarray_feature_center']['data_vars'][x]['data'] for x in self.cfg_train['dynamic_inputs']])
self.input_std.extend([self.train_data_scaler['xarray_feature_scale']['data_vars'][x]['data'] for x in self.cfg_train['dynamic_inputs']])
# Append the static-attribute means alongside the stds so that
# input_mean and input_std stay the same length.
self.input_mean.extend([self.train_data_scaler['attribute_means'][x] for x in self.cfg_train['static_attributes']])
self.input_std.extend([self.train_data_scaler['attribute_stds'][x] for x in self.cfg_train['static_attributes']])
self.input_mean = np.array(self.input_mean)
self.input_std = np.array(self.input_std)
```
jmframe#11