Merge pull request #2780 from craigcomstock/ENT-10921/3.18
ENT-10921: Improved failure logging during federated reporting schema import (3.18)
craigcomstock authored Nov 28, 2023
2 parents b6513d7 + 5091b7b commit 0dccf1c
Showing 4 changed files with 47 additions and 7 deletions.
18 changes: 18 additions & 0 deletions MPF.md
@@ -1930,6 +1930,24 @@ config when it notices a change in *policy*.
**History**: Added in 3.11.

### Federated Reporting
#### Debug import process

To get detailed logs about import failures, define the class `default:cfengine_mp_fr_debug_import` on the _superhub_.

For example, to define this class via Augments:

```json
{
"classes": {
"cfengine_mp_fr_debug_import": [ "any::" ]
}
}
```

**History:**

* Added in CFEngine 3.23.0
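Before deploying the Augments snippet above, it is worth checking that the file is valid JSON, since cf-agent silently ignores a malformed def.json. A minimal sketch (the `/tmp/def.json` path is illustrative; the real file lives in your masterfiles directory):

```shell
# Write the candidate Augments snippet to a scratch location.
cat > /tmp/def.json <<'EOF'
{
  "classes": {
    "cfengine_mp_fr_debug_import": [ "any::" ]
  }
}
EOF

# Fail early if the JSON is malformed.
python3 -m json.tool /tmp/def.json > /dev/null && echo "def.json OK"
```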

#### Enable Federated Reporting Distributed Cleanup

Hosts that report to multiple feeders cause duplicate entries and other issues on the superhub. Distributed cleanup mitigates this condition.
3 changes: 3 additions & 0 deletions cfe_internal/enterprise/federation/federation.cf
@@ -456,6 +456,8 @@ bundle agent federation_manage_files
"workdir" data => parsejson('{"workdir":"$(sys.workdir)"}');
"handle_duplicates_value" string => ifelse("default:cfengine_mp_fr_handle_duplicate_hostkeys", "yes", "no");
"handle_duplicates" data => parsejson('{"handle_duplicates":"$(handle_duplicates_value)"}');
"debug_import_value" string => ifelse("default:cfengine_mp_fr_debug_import", "yes", "no");
"debug_import" data => parsejson('{"debug_import":"$(debug_import_value)"}');

files:
enterprise_edition.(policy_server|am_policy_hub)::
@@ -519,6 +521,7 @@ bundle agent federation_manage_files
@(feeder_username),
@(feeder),
parsejson('{"superhub_hostkeys": "$(superhub_hostkeys)"}'),
@(debug_import),
@(this_hostkey),
@(cf_version),
@(handle_duplicates),
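The policy change above maps a class to a string with `ifelse()` and wraps it in a data container for the mustache template. The effect on the rendered config can be sketched in plain shell (the boolean variable here is a stand-in for the `default:cfengine_mp_fr_debug_import` class being defined):

```shell
# Simulate ifelse("default:cfengine_mp_fr_debug_import", "yes", "no"):
# a defined class yields "yes", an undefined one "no".
debug_import_class_defined=true   # stand-in for the class being set via Augments

if $debug_import_class_defined; then
  debug_import=yes
else
  debug_import=no
fi

# This is the value {{debug_import}} receives in config.sh.mustache.
echo "CFE_FR_DEBUG_IMPORT=\"$debug_import\""   # prints CFE_FR_DEBUG_IMPORT="yes"
```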
2 changes: 2 additions & 0 deletions templates/federated_reporting/config.sh.mustache
@@ -83,6 +83,8 @@ CFE_FR_DB_USER="{{db_user}}"
CFE_FR_DB_USER="${CFE_FR_DB_USER:-cfpostgres}"
CFE_FR_HANDLE_DUPLICATES="{{handle_duplicates}}" # default is no (don't handle duplicates as it adds to time to import)
CFE_FR_HANDLE_DUPLICATES="${CFE_FR_HANDLE_DUPLICATES:-no}"
CFE_FR_DEBUG_IMPORT="{{debug_import}}" # default is no (don't run imports with debug level logging)
CFE_FR_DEBUG_IMPORT="${CFE_FR_DEBUG_IMPORT:-no}"

# distributed_cleanup dir
CFE_FR_DISTRIBUTED_CLEANUP_DIR="{{distributed_cleanup_dir}}"
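The template uses a common two-line pattern: the first assignment takes whatever mustache renders, and the second uses `${VAR:-default}` expansion to fall back to `no` when the rendered value is empty or unset. A minimal sketch:

```shell
# As if {{debug_import}} rendered to an empty string:
CFE_FR_DEBUG_IMPORT=""

# ${VAR:-no} substitutes "no" when VAR is unset or empty, so the
# config always ends up with a usable value.
CFE_FR_DEBUG_IMPORT="${CFE_FR_DEBUG_IMPORT:-no}"

echo "$CFE_FR_DEBUG_IMPORT"   # prints "no"
```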
31 changes: 24 additions & 7 deletions templates/federated_reporting/import.sh
@@ -10,6 +10,7 @@ source "$(dirname "$0")/log.sh"
source "$(dirname "$0")/parallel.sh"

# check that we have all the variables we need
true "${WORKDIR?undefined}"
true "${CFE_BIN_DIR?undefined}"
true "${CFE_FR_SUPERHUB_DROP_DIR?undefined}"
true "${CFE_FR_SUPERHUB_IMPORT_DIR?undefined}"
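The `true "${VAR?message}"` lines are a guard idiom: the `${VAR?message}` expansion aborts a non-interactive shell with `message` on stderr when `VAR` is unset, while `true` discards the value when it is set. A sketch of both outcomes:

```shell
# With the variable set, the guard is a no-op.
WORKDIR=/var/cfengine
true "${WORKDIR?undefined}" && echo "WORKDIR ok"

# With it unset, the same expansion aborts (demonstrated in a subshell so
# this sketch itself keeps running).
( unset WORKDIR; true "${WORKDIR?undefined}"; echo "not reached" ) 2>/dev/null \
  || echo "import.sh would exit: WORKDIR undefined"
```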
@@ -119,26 +120,42 @@ if [ "$CFE_FR_HANDLE_DUPLICATES" = "yes" ]; then
fi
fi

failed=0
one_failed=0
any_failed=0
log "Attaching schemas"
for file in $dump_files; do
if [ ! -f "${file}.failed" ]; then
hostkey=$(basename "$file" | cut -d. -f1)
"$CFE_BIN_DIR"/psql -U $CFE_FR_DB_USER -d cfdb --set "ON_ERROR_STOP=1" \
-c "SET SCHEMA 'public'; SELECT attach_feeder_schema('$hostkey', ARRAY[$table_whitelist]);" \
> schema_attach.log 2>&1 || failed=1
logfile="$WORKDIR"/outputs/"$hostkey"-schema-attach-$(date +%F-%T)-failure.log
if [ "${CFE_FR_DEBUG_IMPORT}" = "yes" ]; then
"$CFE_BIN_DIR"/psql -U $CFE_FR_DB_USER -d cfdb --set "ON_ERROR_STOP=1" "$debug_import_arg" \
-c "SET client_min_messages TO DEBUG5" \
-c "SET SCHEMA 'public'; SELECT attach_feeder_schema('$hostkey', ARRAY[$table_whitelist]);" \
> "$logfile" 2>&1 || one_failed=1
else
"$CFE_BIN_DIR"/psql -U $CFE_FR_DB_USER -d cfdb --set "ON_ERROR_STOP=1" "$debug_import_arg" \
-c "SET SCHEMA 'public'; SELECT attach_feeder_schema('$hostkey', ARRAY[$table_whitelist]);" \
> "$logfile" 2>&1 || one_failed=1
fi
if [ "$one_failed" = "0" ]; then
rm -f "$logfile"
else
any_failed=1
log "Attaching schemas: FAILED for $hostkey, check $logfile for details"
log "last 10 lines of $logfile"
tail -n 10 "$logfile"
fi
one_failed=0
else
rm -f "${file}.failed"
fi
done
if [ "$failed" = "0" ]; then
if [ "$any_failed" = "0" ]; then
log "Attaching schemas: DONE"
else
# attach_feeder_schema() makes sure the feeder's import schema is removed in
# case of failure
log "Attaching schemas: FAILED"
log "last 10 lines of schema_attach.log"
tail -n 10 schema_attach.log
exit 1
fi

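The core of the import.sh change is the per-feeder failure-logging pattern: each attach attempt writes to its own timestamped logfile, which is deleted on success and kept (with its last lines echoed to the main log) on failure. A self-contained sketch with a failing stand-in command in place of the real `psql … attach_feeder_schema()` call:

```shell
# Per-feeder failure logging, as introduced in import.sh.
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/outputs"
hostkey="SHA=deadbeef"   # illustrative hostkey
logfile="$WORKDIR/outputs/$hostkey-schema-attach-$(date +%F-%T)-failure.log"

one_failed=0
# Stand-in for the psql attach_feeder_schema() call; here it always fails.
sh -c 'echo "ERROR: relation does not exist"; exit 1' > "$logfile" 2>&1 || one_failed=1

if [ "$one_failed" = "0" ]; then
  rm -f "$logfile"                 # success: leave no log behind
else
  echo "Attaching schemas: FAILED for $hostkey, check $logfile for details"
  echo "last 10 lines of $logfile"
  tail -n 10 "$logfile"
fi
```

Keeping one log per hostkey (rather than a single shared `schema_attach.log`, as before this change) means a failure for one feeder no longer overwrites the diagnostics for another.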
