
Commit

Merge remote-tracking branch 'origin/main'
johannes-darms committed Dec 19, 2024
2 parents da47786 + 28dfe22 commit e012001
Showing 8 changed files with 142 additions and 104 deletions.
4 changes: 2 additions & 2 deletions k8s/dataverse/Chart.yaml
@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.0
version: 0.9.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "6.3.0"
appVersion: "6.5.0"
34 changes: 9 additions & 25 deletions k8s/dataverse/README.md
@@ -14,7 +14,10 @@ Postgres is configured to automatically create and store a logical backup in S3.

Before running the script, you must set these env variables:

- `SOURCE_DATAVERSE_NAME`, the deployment name of the source Dataverse
- `SOURCE_DATAVERSE_CONTEXT`, the Kubernetes context name of the source Dataverse
- `DESTINATION_DATAVERSE_NAME`, the deployment name of the destination Dataverse
- `DESTINATION_DATAVERSE_CONTEXT`, the Kubernetes context name of the destination Dataverse
- `LOGICAL_BACKUP_S3_BUCKET`, the S3 bucket where the backup is located
- `SCOPE` and `LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX`, the values that together define the directory inside the S3 bucket where the backup is located
@@ -23,6 +26,11 @@ Before running the script, you must set these env variables:
The values for `LOGICAL_BACKUP_S3_BUCKET`, `SCOPE` and `LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX` can be found using
`kubectl describe pod` on one of the backup job pods.
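
For illustration, the variables could be exported along these lines before running the script (all values below are hypothetical placeholders):

```
# Hypothetical placeholder values -- read the real bucket/scope values from
# `kubectl describe pod` on one of the backup job pods.
export SOURCE_DATAVERSE_NAME=csh-prod
export SOURCE_DATAVERSE_CONTEXT=prod-cluster
export DESTINATION_DATAVERSE_NAME=csh-staging
export DESTINATION_DATAVERSE_CONTEXT=staging-cluster
export LOGICAL_BACKUP_S3_BUCKET=dataverse-backups
export SCOPE=csh-prod-dataverse-postgres
export LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX=/abcd1234
./scripts/load_dataverse_backup.sh
```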

Since reindexing the entire Dataverse database into the Solr index may take a long time depending on the number of
datasets, the script also creates and loads a backup of the Solr index instead of reindexing.
If you only have a few datasets and prefer reindexing, comment out the script section responsible for creating and
loading the Solr backup, and uncomment the reindexing section.
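
If you do opt for reindexing, the variant boils down to two Dataverse admin API calls (a sketch; it assumes the port-forward to the Dataverse pod on local port 8081 that the script sets up):

```
# Sketch: clear the Solr index, then trigger a full re-index via the Dataverse admin API.
# Assumes `kubectl port-forward $DATAVERSE_POD_NAME 8081:8080` is already running.
curl http://localhost:8081/api/admin/index/clear
curl http://localhost:8081/api/admin/index
```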

### Creating a database backup

1. Log in to the postgres pod, then create and compress a logical backup.
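
   A typical invocation might look roughly like this (a sketch; it assumes both the database and the role are named `dataverse`):

```
$ kubectl exec -it $POSTGRES_POD_NAME -- bash
# pg_dump -U dataverse dataverse | gzip > /tmp/jd.dump.gz
```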
@@ -35,28 +43,4 @@ The values for `LOGICAL_BACKUP_S3_BUCKET`, `SCOPE` and `LOGICAL_BACKUP_S3_BUCKET

2. Copy the logical backup to your local computer

`kubectl cp $POSTGRES_POD_NAME:/tmp/jd.dump.gz ./jd.dump.gz`

## Solr

If you want to avoid a re-index (which may take a long time), you can also create and restore Solr backups.

### Creating a Solr backup

```
$ kubectl exec -it $SOURCE_DATAVERSE_DEPLOYMENT_NAME-solr-0 -- bash
# curl localhost:8983/solr/collection1/replication?command=backup
# curl localhost:8983/solr/collection1/replication?command=details
$ kubectl cp $SOURCE_DATAVERSE_DEPLOYMENT_NAME-solr-0:/var/solr/data/collection1/data/$SNAPSHOT_FILE_NAME $SNAPSHOT_FILE_NAME
```

Note: replace `$SNAPSHOT_FILE_NAME` with the name given by `curl localhost:8983/solr/collection1/replication?command=details`.
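
For example, the snapshot name can be pulled out of the details response with `jq` (a sketch mirroring what `scripts/load_dataverse_backup.sh` does):

```
SNAPSHOT_FILE_NAME=$(kubectl exec $SOURCE_DATAVERSE_DEPLOYMENT_NAME-solr-0 -- \
    curl -s "localhost:8983/solr/collection1/replication?command=details" \
  | jq -r '.details.backup.directoryName')
```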

### Restoring a Solr backup

```
$ kubectl cp $SNAPSHOT_FILE_NAME $DESTINATION_DATAVERSE_DEPLOYMENT_NAME-solr-0:/var/solr/data/collection1/data
$ kubectl exec -it $DESTINATION_DATAVERSE_DEPLOYMENT_NAME-solr-0 -- bash
# curl localhost:8983/solr/collection1/replication?command=restore
# curl localhost:8983/solr/collection1/replication?command=restorestatus
```
`kubectl cp $POSTGRES_POD_NAME:/tmp/jd.dump.gz ./jd.dump.gz`
52 changes: 19 additions & 33 deletions k8s/dataverse/persona/nfdi4health/generate-permalink.sql
@@ -1,47 +1,33 @@
-- A script for creating, through a database stored procedure, sequential
-- 8 character identifiers from a base36 representation of current timestamp.
-- Adapted from: https://guides.dataverse.org/en/latest/_downloads/772201110a1c1b429e7c3336b6e9d36d/identifier_from_timestamp.sql
-- Adapted from: https://github.com/IQSS/dataverse/blob/a36db2d7df0d9976c00179b82f11cfb338a6cfc8/doc/sphinx-guides/source/_static/util/createsequence.sql

CREATE OR REPLACE FUNCTION base36_encode(
IN digits bigint, IN min_width int = 0)
RETURNS varchar AS $$
DO $$
DECLARE
chars char[];
ret varchar;
val bigint;
last_val bigint;
BEGIN
chars := ARRAY[
'0','1','2','3','4','5','6','7','8','9',
'A','B','C','D','E','F','G','H','I','J',
'K','L','M','N','O','P','Q','R','S','T',
'U','V','W','X','Y','Z'];
val := digits;
ret := '';
IF val < 0 THEN
val := val * -1;
END IF;
WHILE val != 0 LOOP
ret := chars[(val % 36)+1] || ret;
val := val / 36;
END LOOP;
-- Get the last value of the existing sequence
SELECT last_value INTO last_val FROM dvobject_id_seq;

IF min_width > 0 AND char_length(ret) < min_width THEN
ret := lpad(ret, min_width, '0');
END IF;
-- Create the new sequence with the desired start and min values
EXECUTE format('
CREATE SEQUENCE datasetidentifier_seq
INCREMENT BY 1
MINVALUE %s
MAXVALUE 9223372036854775807
CACHE 1
', last_val + 1);
END $$;

RETURN ret;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
ALTER TABLE datasetidentifier_seq OWNER TO "dataverse";

-- And now create a PostgreSQL FUNCTION, for JPA to
-- access as a NamedStoredProcedure:

CREATE OR REPLACE FUNCTION generateIdentifierFromStoredProcedure()
RETURNS varchar AS $$
DECLARE
curr_time_msec bigint;
identifier varchar;
identifier varchar;
BEGIN
curr_time_msec := extract(epoch from now())*1000;
identifier := base36_encode(curr_time_msec);
identifier := nextval('datasetidentifier_seq')::varchar;
RETURN identifier;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
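
As a quick sanity check after applying this script, the stored procedure can be called directly, reusing the connection variables that `init.sh` uses (a minimal sketch; note that every call consumes one sequence value):

```
PGPASSWORD=$DATAVERSE_DB_PASSWORD psql -h $DATAVERSE_DB_HOST -U $DATAVERSE_DB_USER \
  -c "SELECT generateIdentifierFromStoredProcedure();"
```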
10 changes: 6 additions & 4 deletions k8s/dataverse/persona/nfdi4health/init.sh
@@ -22,10 +22,6 @@ echo "# hide progress meter
# fail script on server error
--fail-with-body" > ~/.curlrc

echo "Configuring PID permalink generator function"
PGPASSWORD=$DATAVERSE_DB_PASSWORD psql -h $DATAVERSE_DB_HOST -U $DATAVERSE_DB_USER < /scripts/bootstrap/nfdi4health/generate-permalink.sql
echo

echo "Setting superuser status"
curl -X PUT "${DATAVERSE_URL}/api/admin/superuser/dataverseAdmin" -d true
echo
@@ -141,6 +137,12 @@ while IFS= read -r DATAVERSE; do
fi
done <<< "${DATAVERSES}"

# This should be done after creating the dataverses so that the database IDs and Permalink IDs of our datasets have
# a chance of staying in sync
echo "Configuring PID permalink generator function"
PGPASSWORD=$DATAVERSE_DB_PASSWORD psql -h $DATAVERSE_DB_HOST -U $DATAVERSE_DB_USER < /scripts/bootstrap/nfdi4health/generate-permalink.sql
echo

# Last step, as the existence of one block indicates a completely bootstrapped installation
echo "Load custom metadata blocks"
#curl -X POST -H "Content-type: text/tab-separated-values" $DATAVERSE_HOST/api/admin/datasetfield/load --upload-file customMDS.tsv
6 changes: 6 additions & 0 deletions k8s/dataverse/persona/nfdi4health/schema.xml
@@ -142,6 +142,7 @@

<field name="dvName" type="text_en" stored="true" indexed="true" multiValued="false"/>
<field name="dvAlias" type="text_en" stored="true" indexed="true" multiValued="false"/>
<field name="dvParentAlias" type="text_en" stored="true" indexed="true" multiValued="false"/>
<field name="dvAffiliation" type="text_en" stored="true" indexed="true" multiValued="false"/>
<field name="dvDescription" type="text_en" stored="true" indexed="true" multiValued="false"/>

@@ -205,6 +206,7 @@
<field name="entityId" type="plong" stored="true" indexed="true" multiValued="false"/>

<field name="datasetVersionId" type="plong" stored="true" indexed="true" multiValued="false"/>
<field name="datasetType" type="string" stored="true" indexed="true" multiValued="false"/>

<!-- Added for Dataverse 4.0 alpha 1 to sort by name -->
<!-- https://redmine.hmdc.harvard.edu/issues/3482 -->
@@ -232,6 +234,7 @@
<field name="datasetValid" type="boolean" stored="true" indexed="true" multiValued="false"/>

<field name="license" type="string" stored="true" indexed="true" multiValued="false"/>
<field name="fileCount" type="plong" stored="true" indexed="true" multiValued="false"/>

<!--
METADATA SCHEMA FIELDS
@@ -481,6 +484,7 @@
<field name="productionPlace" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="publication" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="publicationCitation" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="publicationRelationType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="publicationIDNumber" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="publicationIDType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="publicationURL" type="text_en" multiValued="true" stored="true" indexed="true"/>
@@ -645,6 +649,7 @@
<copyField source="dvAlias" dest="_text_" maxChars="3000"/>
<copyField source="dvAffiliation" dest="_text_" maxChars="3000"/>
<copyField source="dsPersistentId" dest="_text_" maxChars="3000"/>
<copyField source="datasetType" dest="_text_" maxChars="3000"/>
<!-- copyField commands copy one field to another at the time a document
is added to the index. It's used either to index the same field differently,
or to add multiple fields to the same field for easier/faster searching. -->
@@ -940,6 +945,7 @@
<copyField source="productionPlace" dest="_text_" maxChars="3000"/>
<copyField source="publication" dest="_text_" maxChars="3000"/>
<copyField source="publicationCitation" dest="_text_" maxChars="3000"/>
<copyField source="publicationRelationType" dest="_text_" maxChars="3000"/>
<copyField source="publicationIDNumber" dest="_text_" maxChars="3000"/>
<copyField source="publicationIDType" dest="_text_" maxChars="3000"/>
<copyField source="publicationURL" dest="_text_" maxChars="3000"/>
16 changes: 4 additions & 12 deletions k8s/dataverse/templates/dataverse-pod.yaml
@@ -97,26 +97,18 @@ spec:
- name: DATAVERSE_PID_PROVIDERS
value: "fake,csh,ctgov,euctr,per,isrctn,ctri,jprn,actrn,drks"
- name: DATAVERSE_PID_DEFAULT_PROVIDER
value: "fake"
- name: DATAVERSE_PID_FAKE_TYPE
value: "FAKE"
- name: DATAVERSE_PID_FAKE_LABEL
value: "Fake DOI Provider"
- name: DATAVERSE_PID_FAKE_SHOULDER
value: "FK2/"
- name: DATAVERSE_PID_FAKE_AUTHORITY
value: "10.5072"
value: "csh"
- name: DATAVERSE_PID_CSH_TYPE
value: "perma"
- name: DATAVERSE_PID_CSH_LABEL
value: "Permalink Provider"
- name: DATAVERSE_PID_CSH_AUTHORITY
value: {{.Values.dataverse.pid.permalink.authority}}
- name: DATAVERSE_PID_CSH_SHOULDER
value: {{.Values.dataverse.pid.permalink.shoulder}}
- name: DATAVERSE_PID_CSH_BASE_URL
value: ""
- name: DATAVERSE_PID_CSH_PERMALINK_BASE_URL
value: {{.Values.dataverse.pid.permalink.base_url}}
- name: DATAVERSE_PID_CSH_SEPARATOR
- name: DATAVERSE_PID_CSH_PERMALINK_SEPARATOR
value: {{.Values.dataverse.pid.permalink.separator | quote}}
- name: DATAVERSE_PID_CSH_IDENTIFIER_GENERATION_STYLE
value: "storedProcGenerated"
4 changes: 2 additions & 2 deletions k8s/dataverse/values.yaml
@@ -24,5 +24,5 @@ dataverse:
base_url:
separator:
images:
backend: ghcr.io/nfdi4health/csh-ui/dataverse:6.3
configbaker: ghcr.io/nfdi4health/csh-ui/dataverse-baker:6.3
backend: ghcr.io/nfdi4health/csh-ui/dataverse:6.5
configbaker: ghcr.io/nfdi4health/csh-ui/dataverse-baker:6.5
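
The empty permalink values in this file are meant to be supplied per deployment; for illustration, an install could override them like this (release name and all values are hypothetical):

```
# Hypothetical values -- authority, shoulder, base_url and separator must match your permalink setup.
helm upgrade --install csh ./k8s/dataverse \
  --set dataverse.pid.permalink.authority="perma" \
  --set dataverse.pid.permalink.shoulder="CSH/" \
  --set dataverse.pid.permalink.base_url="https://csh.example.org" \
  --set dataverse.pid.permalink.separator="/"
```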
120 changes: 94 additions & 26 deletions scripts/load_dataverse_backup.sh
@@ -1,5 +1,14 @@
#!/bin/bash

LOGICAL_BACKUP_S3_BUCKET=
SCOPE=
LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX=
S3_CONFIG_FILE=
SOURCE_DATAVERSE_NAME=
SOURCE_DATAVERSE_CONTEXT=
DESTINATION_DATAVERSE_NAME=
DESTINATION_DATAVERSE_CONTEXT=

S3_CONFIG_FILE="${S3_CONFIG_FILE:-'~/.s3cfg'}"

# Computed from env variables above
@@ -11,13 +20,13 @@ echo "Downloading backup from S3..."
s3cmd get $LAST_BACKUP_FILE . -c $S3_CONFIG_FILE --skip-existing

echo "Copying backup to postgres pod..."
kubectl cp $(basename $LAST_BACKUP_FILE) $POSTGRES_POD_NAME:/tmp/
kubectl cp $(basename $LAST_BACKUP_FILE) $POSTGRES_POD_NAME:/tmp/ --context $DESTINATION_DATAVERSE_CONTEXT

echo "Unzipping backup..."
kubectl exec $POSTGRES_POD_NAME -- gunzip /tmp/$(basename $LAST_BACKUP_FILE)
kubectl exec $POSTGRES_POD_NAME --context $DESTINATION_DATAVERSE_CONTEXT -- gunzip /tmp/$(basename $LAST_BACKUP_FILE)

echo "Emptying database..."
kubectl exec $POSTGRES_POD_NAME -- psql -P pager=off -U dataverse -c "-- Recreate the schema
kubectl exec $POSTGRES_POD_NAME --context $DESTINATION_DATAVERSE_CONTEXT -- psql -P pager=off -U dataverse -c "-- Recreate the schema
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
@@ -27,35 +36,94 @@ GRANT ALL ON SCHEMA public TO public;"
# source: https://stackoverflow.com/a/61221726

echo "Loading backup into database..."
kubectl exec $POSTGRES_POD_NAME -- psql -P pager=off -U dataverse -f /tmp/$(basename $LAST_BACKUP_FILE .gz) template1
kubectl exec $POSTGRES_POD_NAME --context $DESTINATION_DATAVERSE_CONTEXT -- psql -P pager=off -U dataverse -f /tmp/$(basename $LAST_BACKUP_FILE .gz) template1


echo "Updating database passwords..."
kubectl get secret | grep ${DESTINATION_DATAVERSE_NAME}-dataverse-postgres.credentials.postgresql.acid.zalan.do | awk '{print $1}' | while read SECRET; do kubectl exec $POSTGRES_POD_NAME -- psql -P pager=off -U dataverse -c "ALTER USER $(echo $SECRET | awk -F. '{print $1}') WITH PASSWORD '$(kubectl get secrets/$SECRET -o=jsonpath="{.data.password}" | base64 -d)';"; done
kubectl get secret --context $DESTINATION_DATAVERSE_CONTEXT | grep ${DESTINATION_DATAVERSE_NAME}-dataverse-postgres.credentials.postgresql.acid.zalan.do | awk '{print $1}' | while read SECRET; do kubectl exec $POSTGRES_POD_NAME --context $DESTINATION_DATAVERSE_CONTEXT -- psql -P pager=off -U dataverse -c "ALTER USER $(echo $SECRET | awk -F. '{print $1}') WITH PASSWORD '$(kubectl get secrets/$SECRET -o=jsonpath="{.data.password}" --context $DESTINATION_DATAVERSE_CONTEXT | base64 -d)';"; done

echo "Restarting dataverse pod..."
kubectl delete pod $DATAVERSE_POD_NAME
kubectl wait --for=condition=Ready --timeout=-1s pod/$DATAVERSE_POD_NAME

kubectl delete pod $DATAVERSE_POD_NAME --context $DESTINATION_DATAVERSE_CONTEXT
kubectl wait --for=condition=Ready --timeout=-1s --context $DESTINATION_DATAVERSE_CONTEXT pod/$DATAVERSE_POD_NAME
#
## NOTE: The following block is commented out because it's no longer feasible time-wise to reindex Dataverse after
# loading a backup. Since we have over 25,000 datasets, it takes too long.
# Instead, we also load a backup for the Solr index.
# Using port 8081 because 8080 is often already used if currently developing with Dataverse
DATAVERSE_LOCAL_PORT=8081
DATAVERSE_REMOTE_PORT=8080

echo "Starting re-index..."
kubectl port-forward $DATAVERSE_POD_NAME $DATAVERSE_LOCAL_PORT:$DATAVERSE_REMOTE_PORT >/dev/null &
PORT_FORWARD_PID=$!
# Kill the port-forward when this script exits
trap '{
kill $PORT_FORWARD_PID 2>/dev/null
}' EXIT
# Wait for port to be available
while ! nc -vz localhost $DATAVERSE_LOCAL_PORT >/dev/null 2>&1; do
sleep 0.1
#DATAVERSE_LOCAL_PORT=8086
#DATAVERSE_REMOTE_PORT=8080
#
#echo "Starting re-index..."
#kubectl port-forward $DATAVERSE_POD_NAME $DATAVERSE_LOCAL_PORT:$DATAVERSE_REMOTE_PORT >/dev/null &
#PORT_FORWARD_PID=$!
## Kill the port-forward when this script exits
#trap '{
# kill $PORT_FORWARD_PID 2>/dev/null
#}' EXIT
## Wait for port to be available
#while ! nc -vz localhost $DATAVERSE_LOCAL_PORT >/dev/null 2>&1; do
# sleep 0.1
#done
#curl http://localhost:$DATAVERSE_LOCAL_PORT/api/admin/index/clear
#echo
#curl http://localhost:$DATAVERSE_LOCAL_PORT/api/admin/index
#echo

echo "Checking age of latest backup of source Solr..."
SOLR_BACKUP_RESPONSE=$(kubectl exec -it ${SOURCE_DATAVERSE_NAME}-dataverse-solr-0 --context $SOURCE_DATAVERSE_CONTEXT --container solr -- curl localhost:8983/solr/collection1/replication?command=details)
SOLR_BACKUP_STATUS=$(echo $SOLR_BACKUP_RESPONSE | jq -r '.details.backup.status')
if [[ "$SOLR_BACKUP_STATUS" == "success" ]]; then
SOLR_BACKUP_TIMESTAMP=$(echo $SOLR_BACKUP_RESPONSE | jq -r '.details.backup.snapshotCompletedAt')

SOLR_BACKUP_TIMESTAMP_DATE=$(echo $SOLR_BACKUP_TIMESTAMP | cut -d'T' -f1)
SOLR_BACKUP_TIMESTAMP_HOUR=$(echo $SOLR_BACKUP_TIMESTAMP | cut -d'T' -f2 | cut -d':' -f1)

CURRENT_DATE=$(date -u +"%Y-%m-%d")
CURRENT_HOUR=$(date -u +"%H")

if [[ "$SOLR_BACKUP_TIMESTAMP_DATE" == "$CURRENT_DATE" && "$SOLR_BACKUP_TIMESTAMP_HOUR" == "$CURRENT_HOUR" ]]; then
# The timestamp is within the current hour
echo "Backup is not too old."
else
echo "Backup is too old. Creating backup of source Solr..."
kubectl exec ${SOURCE_DATAVERSE_NAME}-dataverse-solr-0 --context $SOURCE_DATAVERSE_CONTEXT --container solr -- curl -s "localhost:8983/solr/collection1/replication?command=backup&numberToKeep=1"; echo
while true; do
SOLR_BACKUP_RESPONSE=$(kubectl exec ${SOURCE_DATAVERSE_NAME}-dataverse-solr-0 --context $SOURCE_DATAVERSE_CONTEXT --container solr -- curl localhost:8983/solr/collection1/replication?command=details)
SOLR_BACKUP_STATUS=$(echo $SOLR_BACKUP_RESPONSE | jq -r '.details.backup.status')
echo $SOLR_BACKUP_STATUS
if [[ "$SOLR_BACKUP_STATUS" == "success" ]]; then
break
fi
echo "Waiting for Solr backup to be completed..."
sleep 1
done
fi
fi

SOLR_BACKUP_NAME=$(echo $SOLR_BACKUP_RESPONSE | jq -r '.details.backup.directoryName')

if kubectl exec ${DESTINATION_DATAVERSE_NAME}-dataverse-solr-0 --container solr --context $DESTINATION_DATAVERSE_CONTEXT -- sh -c "[ -d /var/solr/data/collection1/data/${SOLR_BACKUP_NAME} ]"; then
echo "Backup was already copied to destination Solr."
else
echo "Copying completed backup ${SOLR_BACKUP_NAME} to destination Solr... (this may take some time)"
kubectl exec ${SOURCE_DATAVERSE_NAME}-dataverse-solr-0 --container solr --context $SOURCE_DATAVERSE_CONTEXT -- tar -zcf /tmp/${SOLR_BACKUP_NAME}.tar.gz -C /var/solr/data/collection1/data/ ${SOLR_BACKUP_NAME} > /dev/null
kubectl cp ${SOURCE_DATAVERSE_NAME}-dataverse-solr-0:/tmp/${SOLR_BACKUP_NAME}.tar.gz ${SOLR_BACKUP_NAME}.tar.gz --container solr --context $SOURCE_DATAVERSE_CONTEXT --retries=-1 > /dev/null
kubectl cp ${SOLR_BACKUP_NAME}.tar.gz ${DESTINATION_DATAVERSE_NAME}-dataverse-solr-0:/tmp/ --container solr --context $DESTINATION_DATAVERSE_CONTEXT --retries=-1 > /dev/null
kubectl exec ${DESTINATION_DATAVERSE_NAME}-dataverse-solr-0 --container solr --context $DESTINATION_DATAVERSE_CONTEXT -- rm -r /var/solr/data/collection1/data/snapshot.*
kubectl exec ${DESTINATION_DATAVERSE_NAME}-dataverse-solr-0 --container solr --context $DESTINATION_DATAVERSE_CONTEXT -- tar -zxf /tmp/${SOLR_BACKUP_NAME}.tar.gz -C /var/solr/data/collection1/data/
fi

kubectl exec ${DESTINATION_DATAVERSE_NAME}-dataverse-solr-0 --context $DESTINATION_DATAVERSE_CONTEXT --container solr -- curl -s "localhost:8983/solr/collection1/replication?command=restore" > /dev/null

while true; do
SOLR_BACKUP_LOAD_RESPONSE=$(kubectl exec -t ${DESTINATION_DATAVERSE_NAME}-dataverse-solr-0 --context $DESTINATION_DATAVERSE_CONTEXT --container solr -- curl -s "localhost:8983/solr/collection1/replication?command=restorestatus")
SOLR_BACKUP_LOAD_STATUS=$(echo $SOLR_BACKUP_LOAD_RESPONSE | jq -r '.restorestatus.status')
if [[ "$SOLR_BACKUP_LOAD_STATUS" == "success" ]]; then
break
fi
echo "Waiting for Solr backup to be loaded..."
sleep 3
done
curl http://localhost:$DATAVERSE_LOCAL_PORT/api/admin/index/clear
echo
curl http://localhost:$DATAVERSE_LOCAL_PORT/api/admin/index
echo

echo
echo "Done! Please wait for the re-indexing to finish, then the backup loading will be complete."
echo "Done! Backup loading complete."
