Failed to generate report, the report status is "GenerateReportFailed" #1401

Open
xkwangcn opened this issue Sep 15, 2020 · 7 comments
xkwangcn commented Sep 15, 2020

The problem is that when I create a report, it fails with the following message:

error occurred while generating report: Failed to execute query for Report table hive.metering.report_metering_namespace_cpu_request: presto: query failed (200 OK): "java.lang.ArrayIndexOutOfBoundsException: Internal error"

The expected result is that the report status becomes "Finished", so that I can retrieve the correct data with the command `curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=metering&format=${reportFormat}"`.
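For reference, here is a minimal sketch of the fetch step. The `metering` namespace is taken from the command above; the route name and the `oc` lookup in the usage comment are assumptions that may differ on your cluster:

```shell
# Minimal sketch: build the metering report URL. The `metering` namespace
# comes from the report above; adjust names for your cluster.
build_report_url() {
  # $1 = route host, $2 = report name, $3 = format (e.g. csv, json, tabular)
  echo "https://$1/api/v1/reports/get?name=$2&namespace=metering&format=$3"
}

# Example usage (requires an authenticated `oc` session; the route name
# `metering` is an assumption):
#   token=$(oc whoami -t)
#   meteringRoute=$(oc -n metering get route metering -o jsonpath='{.spec.host}')
#   curl --insecure -H "Authorization: Bearer ${token}" \
#     "$(build_report_url "$meteringRoute" namespace-cpu-request csv)"
```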

xkwangcn (Author) commented:

Here is some additional information about this issue:

  1. The report YAML content:
[root@kvm4ocp1 reports]# cat namespace-cpu-request.yaml 
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request
spec:
  reportingStart: '2020-09-10T00:00:00Z'
  reportingEnd: '2020-09-10T23:00:00Z'
  query: "namespace-cpu-request"
  runImmediately: true
  2. The report status:
[root@kvm4ocp1 reports]# oc get report
NAME                     QUERY                    SCHEDULE   RUNNING                FAILED   LAST REPORT TIME       AGE
namespace-cpu-request    namespace-cpu-request               GenerateReportFailed                                   59m
namespace-memory-usage   namespace-memory-usage              Finished                        2020-09-10T23:00:00Z   25h
  3. Describing the report:
[root@kvm4ocp1 reports]# oc describe report namespace-cpu-request
Name:         namespace-cpu-request
Namespace:    metering
Labels:       <none>
Annotations:  <none>
API Version:  metering.openshift.io/v1
Kind:         Report
Metadata:
  Creation Timestamp:  2020-09-15T02:21:36Z
  Generation:          1057
  Managed Fields:
    API Version:  metering.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:query:
        f:reportingEnd:
        f:reportingStart:
        f:runImmediately:
    Manager:      oc
    Operation:    Update
    Time:         2020-09-15T02:21:36Z
    API Version:  metering.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:tableRef:
          .:
          f:name:
    Manager:         reporting-operator
    Operation:       Update
    Time:            2020-09-15T03:17:26Z
  Resource Version:  9901050
  Self Link:         /apis/metering.openshift.io/v1/namespaces/metering/reports/namespace-cpu-request
  UID:               401794a3-1c05-4a6e-886f-dcfbf360ce73
Spec:
  Query:            namespace-cpu-request
  Reporting End:    2020-09-10T23:00:00Z
  Reporting Start:  2020-09-10T00:00:00Z
  Run Immediately:  true
Status:
  Conditions:
    Last Transition Time:  2020-09-15T03:17:26Z
    Last Update Time:      2020-09-15T03:17:26Z
    Message:               Report namespace-cpu-request scheduled: runImmediately=true and reporting period [2020-09-10 00:00:00 +0000 UTC to 2020-09-10 23:00:00 +0000 UTC].
    Reason:                RunImmediately
    Status:                True
    Type:                  Running
  Table Ref:
    Name:  report-metering-namespace-cpu-request
Events:    <none>
  4. Logs from the reporting-operator:
[root@kvm4ocp1 reports]# oc logs reporting-operator-7c8b77c77b-6h9jh -c reporting-operator
...
time="2020-09-15T03:26:34Z" level=info msg="successfully synced ReportDataSource \"metering/persistentvolumeclaim-request-raw\"" ReportDataSource=metering/persistentvolumeclaim-request-raw app=metering component=reportDataSourceWorker logID=CBGxdOMcrz
time="2020-09-15T03:26:35Z" level=info msg="stored a total of 30 metrics for data between 2020-09-15 02:01:00 +0000 UTC and 2020-09-15 02:06:00 +0000 UTC into hive.metering.datasource_metering_node_allocatable_cpu_cores" app=metering chunkSize=5m0s component=PrometheusImporter endTime="2020-09-15 02:06:00 +0000 UTC" logID=bottyZUp9t namespace=metering reportDataSource=node-allocatable-cpu-cores startTime="2020-09-15 02:01:00 +0000 UTC" stepSize=1m0s tableName=hive.metering.datasource_metering_node_allocatable_cpu_cores
time="2020-09-15T03:26:35Z" level=warning msg="Prometheus metrics import backlog detected: imported data for Prometheus ReportDataSource node-allocatable-cpu-cores newest imported metric timestamp 2020-09-15 02:06:00 +0000 UTC is 1h20m35.437128637s away, queuing to reprocess in 13.671167577s" app=metering component=reportDataSourceWorker logID=NwiS3N6GeR namespace=metering reportDataSource=node-allocatable-cpu-cores
time="2020-09-15T03:26:35Z" level=info msg="syncing ReportDataSource metering/node-cpu-allocatable-raw" app=metering component=reportDataSourceWorker logID=IqGgbokA46
time="2020-09-15T03:26:35Z" level=info msg="queuing Prometheus ReportDataSource node-allocatable-cpu-cores to import data again in 13.671167577s at 2020-09-15 03:26:49.121846095 +0000 UTC" app=metering component=reportDataSourceWorker logID=NwiS3N6GeR namespace=metering reportDataSource=node-allocatable-cpu-cores
time="2020-09-15T03:26:35Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_node_cpu_allocatable_raw" app=metering component=reportDataSourceWorker logID=IqGgbokA46 namespace=metering reportDataSource=node-cpu-allocatable-raw
time="2020-09-15T03:26:35Z" level=info msg="successfully synced ReportDataSource \"metering/node-allocatable-cpu-cores\"" ReportDataSource=metering/node-allocatable-cpu-cores app=metering component=reportDataSourceWorker logID=NwiS3N6GeR
time="2020-09-15T03:26:35Z" level=info msg="successfully synced ReportDataSource \"metering/node-cpu-allocatable-raw\"" ReportDataSource=metering/node-cpu-allocatable-raw app=metering component=reportDataSourceWorker logID=IqGgbokA46
time="2020-09-15T03:26:36Z" level=info msg="syncing ReportDataSource metering/pod-limit-memory-bytes" app=metering component=reportDataSourceWorker logID=yF4KtZLQNt
time="2020-09-15T03:26:36Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_pod_limit_memory_bytes" app=metering component=reportDataSourceWorker logID=yF4KtZLQNt namespace=metering reportDataSource=pod-limit-memory-bytes
time="2020-09-15T03:26:36Z" level=warning msg="time range 2020-09-14 19:13:00 +0000 UTC to 2020-09-15 03:26:36.606623091 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-14 19:23:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=sGdCoZxQhN namespace=metering reportDataSource=pod-limit-memory-bytes stepSize=1m0s tableName=hive.metering.datasource_metering_pod_limit_memory_bytes
time="2020-09-15T03:26:36Z" level=info msg="syncing ReportDataSource metering/persistentvolumeclaim-capacity-bytes" app=metering component=reportDataSourceWorker logID=MPGL4yDPHL
time="2020-09-15T03:26:36Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_persistentvolumeclaim_capacity_bytes" app=metering component=reportDataSourceWorker logID=MPGL4yDPHL namespace=metering reportDataSource=persistentvolumeclaim-capacity-bytes
time="2020-09-15T03:26:36Z" level=warning msg="time range 2020-09-15 02:43:00 +0000 UTC to 2020-09-15 03:26:36.843111295 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-15 02:53:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=QM3YClzwwe namespace=metering reportDataSource=persistentvolumeclaim-capacity-bytes stepSize=1m0s tableName=hive.metering.datasource_metering_persistentvolumeclaim_capacity_bytes
time="2020-09-15T03:26:36Z" level=info msg="syncing ReportDataSource metering/node-allocatable-memory-bytes" app=metering component=reportDataSourceWorker logID=lE5TlVmk6C
time="2020-09-15T03:26:36Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_node_allocatable_memory_bytes" app=metering component=reportDataSourceWorker logID=lE5TlVmk6C namespace=metering reportDataSource=node-allocatable-memory-bytes
time="2020-09-15T03:26:36Z" level=warning msg="time range 2020-09-15 02:07:00 +0000 UTC to 2020-09-15 03:26:36.970280483 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-15 02:17:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=QqjcYZKNBV namespace=metering reportDataSource=node-allocatable-memory-bytes stepSize=1m0s tableName=hive.metering.datasource_metering_node_allocatable_memory_bytes
time="2020-09-15T03:26:37Z" level=info msg="syncing ReportDataSource metering/persistentvolumeclaim-phase" app=metering component=reportDataSourceWorker logID=vxZ1OBnbai
time="2020-09-15T03:26:37Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_persistentvolumeclaim_phase" app=metering component=reportDataSourceWorker logID=vxZ1OBnbai namespace=metering reportDataSource=persistentvolumeclaim-phase
time="2020-09-15T03:26:37Z" level=warning msg="time range 2020-09-15 02:25:00 +0000 UTC to 2020-09-15 03:26:37.890023782 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-15 02:35:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=uLBSLYuN7d namespace=metering reportDataSource=persistentvolumeclaim-phase stepSize=1m0s tableName=hive.metering.datasource_metering_persistentvolumeclaim_phase
time="2020-09-15T03:26:38Z" level=info msg="stored a total of 18 metrics for data between 2020-09-15 02:43:00 +0000 UTC and 2020-09-15 02:48:00 +0000 UTC into hive.metering.datasource_metering_persistentvolumeclaim_capacity_bytes" app=metering chunkSize=5m0s component=PrometheusImporter endTime="2020-09-15 02:48:00 +0000 UTC" logID=QM3YClzwwe namespace=metering reportDataSource=persistentvolumeclaim-capacity-bytes startTime="2020-09-15 02:43:00 +0000 UTC" stepSize=1m0s tableName=hive.metering.datasource_metering_persistentvolumeclaim_capacity_bytes
time="2020-09-15T03:26:38Z" level=warning msg="Prometheus metrics import backlog detected: imported data for Prometheus ReportDataSource persistentvolumeclaim-capacity-bytes newest imported metric timestamp 2020-09-15 02:48:00 +0000 UTC is 38m38.05111884s away, queuing to reprocess in 14.457974849s" app=metering component=reportDataSourceWorker logID=MPGL4yDPHL namespace=metering reportDataSource=persistentvolumeclaim-capacity-bytes
time="2020-09-15T03:26:38Z" level=info msg="queuing Prometheus ReportDataSource persistentvolumeclaim-capacity-bytes to import data again in 14.457974849s at 2020-09-15 03:26:52.542594611 +0000 UTC" app=metering component=reportDataSourceWorker logID=MPGL4yDPHL namespace=metering reportDataSource=persistentvolumeclaim-capacity-bytes
time="2020-09-15T03:26:38Z" level=info msg="successfully synced ReportDataSource \"metering/persistentvolumeclaim-capacity-bytes\"" ReportDataSource=metering/persistentvolumeclaim-capacity-bytes app=metering component=reportDataSourceWorker logID=MPGL4yDPHL
time="2020-09-15T03:26:38Z" level=info msg="syncing ReportDataSource metering/persistentvolumeclaim-capacity-raw" app=metering component=reportDataSourceWorker logID=5VqUGg7V6o
time="2020-09-15T03:26:38Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_persistentvolumeclaim_capacity_raw" app=metering component=reportDataSourceWorker logID=5VqUGg7V6o namespace=metering reportDataSource=persistentvolumeclaim-capacity-raw
time="2020-09-15T03:26:38Z" level=info msg="successfully synced ReportDataSource \"metering/persistentvolumeclaim-capacity-raw\"" ReportDataSource=metering/persistentvolumeclaim-capacity-raw app=metering component=reportDataSourceWorker logID=5VqUGg7V6o
time="2020-09-15T03:26:38Z" level=info msg="stored a total of 30 metrics for data between 2020-09-15 02:07:00 +0000 UTC and 2020-09-15 02:12:00 +0000 UTC into hive.metering.datasource_metering_node_allocatable_memory_bytes" app=metering chunkSize=5m0s component=PrometheusImporter endTime="2020-09-15 02:12:00 +0000 UTC" logID=QqjcYZKNBV namespace=metering reportDataSource=node-allocatable-memory-bytes startTime="2020-09-15 02:07:00 +0000 UTC" stepSize=1m0s tableName=hive.metering.datasource_metering_node_allocatable_memory_bytes
time="2020-09-15T03:26:38Z" level=warning msg="Prometheus metrics import backlog detected: imported data for Prometheus ReportDataSource node-allocatable-memory-bytes newest imported metric timestamp 2020-09-15 02:12:00 +0000 UTC is 1h14m38.376215328s away, queuing to reprocess in 14.282562107s" app=metering component=reportDataSourceWorker logID=lE5TlVmk6C namespace=metering reportDataSource=node-allocatable-memory-bytes
time="2020-09-15T03:26:38Z" level=info msg="queuing Prometheus ReportDataSource node-allocatable-memory-bytes to import data again in 14.282562107s at 2020-09-15 03:26:52.728609604 +0000 UTC" app=metering component=reportDataSourceWorker logID=lE5TlVmk6C namespace=metering reportDataSource=node-allocatable-memory-bytes
time="2020-09-15T03:26:38Z" level=info msg="successfully synced ReportDataSource \"metering/node-allocatable-memory-bytes\"" ReportDataSource=metering/node-allocatable-memory-bytes app=metering component=reportDataSourceWorker logID=lE5TlVmk6C
time="2020-09-15T03:26:38Z" level=info msg="syncing ReportDataSource metering/node-memory-allocatable-raw" app=metering component=reportDataSourceWorker logID=6wukjkXa9a
time="2020-09-15T03:26:38Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_node_memory_allocatable_raw" app=metering component=reportDataSourceWorker logID=6wukjkXa9a namespace=metering reportDataSource=node-memory-allocatable-raw
time="2020-09-15T03:26:38Z" level=info msg="successfully synced ReportDataSource \"metering/node-memory-allocatable-raw\"" ReportDataSource=metering/node-memory-allocatable-raw app=metering component=reportDataSourceWorker logID=6wukjkXa9a
time="2020-09-15T03:26:38Z" level=info msg="syncing ReportDataSource metering/pod-usage-cpu-cores" app=metering component=reportDataSourceWorker logID=GODqWWmTQJ
time="2020-09-15T03:26:38Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_pod_usage_cpu_cores" app=metering component=reportDataSourceWorker logID=GODqWWmTQJ namespace=metering reportDataSource=pod-usage-cpu-cores
time="2020-09-15T03:26:38Z" level=warning msg="time range 2020-09-14 18:01:00 +0000 UTC to 2020-09-15 03:26:38.503252054 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-14 18:11:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=SQ3Ehah8se namespace=metering reportDataSource=pod-usage-cpu-cores stepSize=1m0s tableName=hive.metering.datasource_metering_pod_usage_cpu_cores
time="2020-09-15T03:26:38Z" level=info msg="stored a total of 633 metrics for data between 2020-09-14 18:55:00 +0000 UTC and 2020-09-14 19:00:00 +0000 UTC into hive.metering.datasource_metering_pod_limit_cpu_cores" app=metering chunkSize=5m0s component=PrometheusImporter endTime="2020-09-14 19:00:00 +0000 UTC" logID=o9EkI7M4P1 namespace=metering reportDataSource=pod-limit-cpu-cores startTime="2020-09-14 18:55:00 +0000 UTC" stepSize=1m0s tableName=hive.metering.datasource_metering_pod_limit_cpu_cores
time="2020-09-15T03:26:38Z" level=warning msg="Prometheus metrics import backlog detected: imported data for Prometheus ReportDataSource pod-limit-cpu-cores newest imported metric timestamp 2020-09-14 19:00:00 +0000 UTC is 8h26m38.667075014s away, queuing to reprocess in 11.763149426s" app=metering component=reportDataSourceWorker logID=qshMZRnmXN namespace=metering reportDataSource=pod-limit-cpu-cores
time="2020-09-15T03:26:38Z" level=info msg="queuing Prometheus ReportDataSource pod-limit-cpu-cores to import data again in 11.763149426s at 2020-09-15 03:26:50.517905889 +0000 UTC" app=metering component=reportDataSourceWorker logID=qshMZRnmXN namespace=metering reportDataSource=pod-limit-cpu-cores
time="2020-09-15T03:26:38Z" level=info msg="successfully synced ReportDataSource \"metering/pod-limit-cpu-cores\"" ReportDataSource=metering/pod-limit-cpu-cores app=metering component=reportDataSourceWorker logID=qshMZRnmXN
time="2020-09-15T03:26:38Z" level=info msg="syncing ReportDataSource metering/pod-request-memory-bytes" app=metering component=reportDataSourceWorker logID=72QyawRixq
time="2020-09-15T03:26:38Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_pod_request_memory_bytes" app=metering component=reportDataSourceWorker logID=72QyawRixq namespace=metering reportDataSource=pod-request-memory-bytes
time="2020-09-15T03:26:38Z" level=warning msg="time range 2020-09-14 11:37:00 +0000 UTC to 2020-09-15 03:26:38.870987907 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-14 11:47:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=9exTNKiUda namespace=metering reportDataSource=pod-request-memory-bytes stepSize=1m0s tableName=hive.metering.datasource_metering_pod_request_memory_bytes
time="2020-09-15T03:26:39Z" level=info msg="stored a total of 36 metrics for data between 2020-09-15 02:25:00 +0000 UTC and 2020-09-15 02:30:00 +0000 UTC into hive.metering.datasource_metering_persistentvolumeclaim_phase" app=metering chunkSize=5m0s component=PrometheusImporter endTime="2020-09-15 02:30:00 +0000 UTC" logID=uLBSLYuN7d namespace=metering reportDataSource=persistentvolumeclaim-phase startTime="2020-09-15 02:25:00 +0000 UTC" stepSize=1m0s tableName=hive.metering.datasource_metering_persistentvolumeclaim_phase
time="2020-09-15T03:26:39Z" level=warning msg="Prometheus metrics import backlog detected: imported data for Prometheus ReportDataSource persistentvolumeclaim-phase newest imported metric timestamp 2020-09-15 02:30:00 +0000 UTC is 56m39.441280655s away, queuing to reprocess in 10.389262789s" app=metering component=reportDataSourceWorker logID=vxZ1OBnbai namespace=metering reportDataSource=persistentvolumeclaim-phase
time="2020-09-15T03:26:39Z" level=info msg="queuing Prometheus ReportDataSource persistentvolumeclaim-phase to import data again in 10.389262789s at 2020-09-15 03:26:49.845313985 +0000 UTC" app=metering component=reportDataSourceWorker logID=vxZ1OBnbai namespace=metering reportDataSource=persistentvolumeclaim-phase
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/persistentvolumeclaim-phase\"" ReportDataSource=metering/persistentvolumeclaim-phase app=metering component=reportDataSourceWorker logID=vxZ1OBnbai
time="2020-09-15T03:26:39Z" level=info msg="syncing ReportDataSource metering/persistentvolumeclaim-phase-raw" app=metering component=reportDataSourceWorker logID=w3cSbx9gVh
time="2020-09-15T03:26:39Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_persistentvolumeclaim_phase_raw" app=metering component=reportDataSourceWorker logID=w3cSbx9gVh namespace=metering reportDataSource=persistentvolumeclaim-phase-raw
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/persistentvolumeclaim-phase-raw\"" ReportDataSource=metering/persistentvolumeclaim-phase-raw app=metering component=reportDataSourceWorker logID=w3cSbx9gVh
time="2020-09-15T03:26:39Z" level=info msg="syncing ReportDataSource metering/persistentvolumeclaim-usage-with-phase-raw" app=metering component=reportDataSourceWorker logID=j00xUs8YRT
time="2020-09-15T03:26:39Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_persistentvolumeclaim_usage_with_phase_raw" app=metering component=reportDataSourceWorker logID=j00xUs8YRT namespace=metering reportDataSource=persistentvolumeclaim-usage-with-phase-raw
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/persistentvolumeclaim-usage-with-phase-raw\"" ReportDataSource=metering/persistentvolumeclaim-usage-with-phase-raw app=metering component=reportDataSourceWorker logID=j00xUs8YRT
time="2020-09-15T03:26:39Z" level=info msg="stored a total of 1030 metrics for data between 2020-09-14 17:55:00 +0000 UTC and 2020-09-14 18:00:00 +0000 UTC into hive.metering.datasource_metering_pod_usage_memory_bytes" app=metering chunkSize=5m0s component=PrometheusImporter endTime="2020-09-14 18:00:00 +0000 UTC" logID=Fuh1T6bHVY namespace=metering reportDataSource=pod-usage-memory-bytes startTime="2020-09-14 17:55:00 +0000 UTC" stepSize=1m0s tableName=hive.metering.datasource_metering_pod_usage_memory_bytes
time="2020-09-15T03:26:39Z" level=warning msg="Prometheus metrics import backlog detected: imported data for Prometheus ReportDataSource pod-usage-memory-bytes newest imported metric timestamp 2020-09-14 18:00:00 +0000 UTC is 9h26m39.563570713s away, queuing to reprocess in 13.722320628s" app=metering component=reportDataSourceWorker logID=1bevQcI1GW namespace=metering reportDataSource=pod-usage-memory-bytes
time="2020-09-15T03:26:39Z" level=info msg="syncing Report metering/namespace-memory-usage" app=metering component=reportWorker logID=0LrxeUDjwy
time="2020-09-15T03:26:39Z" level=info msg="Report namespace-memory-usage is a previously finished run-once report, not re-processing" app=metering component=reportWorker logID=0LrxeUDjwy namespace=metering report=namespace-memory-usage
time="2020-09-15T03:26:39Z" level=info msg="successfully synced Report \"metering/namespace-memory-usage\"" Report=metering/namespace-memory-usage app=metering component=reportWorker logID=0LrxeUDjwy
time="2020-09-15T03:26:39Z" level=info msg="queuing Prometheus ReportDataSource pod-usage-memory-bytes to import data again in 13.722320628s at 2020-09-15 03:26:53.351758778 +0000 UTC" app=metering component=reportDataSourceWorker logID=1bevQcI1GW namespace=metering reportDataSource=pod-usage-memory-bytes
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/pod-usage-memory-bytes\"" ReportDataSource=metering/pod-usage-memory-bytes app=metering component=reportDataSourceWorker logID=1bevQcI1GW
time="2020-09-15T03:26:39Z" level=info msg="syncing ReportDataSource metering/cluster-memory-usage-raw" app=metering component=reportDataSourceWorker logID=s4lxfwbzKa
time="2020-09-15T03:26:39Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_cluster_memory_usage_raw" app=metering component=reportDataSourceWorker logID=s4lxfwbzKa namespace=metering reportDataSource=cluster-memory-usage-raw
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/cluster-memory-usage-raw\"" ReportDataSource=metering/cluster-memory-usage-raw app=metering component=reportDataSourceWorker logID=s4lxfwbzKa
time="2020-09-15T03:26:39Z" level=info msg="syncing ReportDataSource metering/pod-memory-usage-raw" app=metering component=reportDataSourceWorker logID=TOVlIuNadm
time="2020-09-15T03:26:39Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_pod_memory_usage_raw" app=metering component=reportDataSourceWorker logID=TOVlIuNadm namespace=metering reportDataSource=pod-memory-usage-raw
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/pod-memory-usage-raw\"" ReportDataSource=metering/pod-memory-usage-raw app=metering component=reportDataSourceWorker logID=TOVlIuNadm
time="2020-09-15T03:26:39Z" level=info msg="syncing ReportDataSource metering/cluster-memory-usage-raw" app=metering component=reportDataSourceWorker logID=Ot6po8I33R
time="2020-09-15T03:26:39Z" level=info msg="existing ReportQuery ReportDataSource discovered, viewName: hive.metering.datasource_metering_cluster_memory_usage_raw" app=metering component=reportDataSourceWorker logID=Ot6po8I33R namespace=metering reportDataSource=cluster-memory-usage-raw
time="2020-09-15T03:26:39Z" level=info msg="successfully synced ReportDataSource \"metering/cluster-memory-usage-raw\"" ReportDataSource=metering/cluster-memory-usage-raw app=metering component=reportDataSourceWorker logID=Ot6po8I33R
time="2020-09-15T03:26:39Z" level=info msg="syncing Report metering/namespace-memory-usage" app=metering component=reportWorker logID=UU1DDuQV1x
time="2020-09-15T03:26:39Z" level=info msg="Report namespace-memory-usage is a previously finished run-once report, not re-processing" app=metering component=reportWorker logID=UU1DDuQV1x namespace=metering report=namespace-memory-usage
time="2020-09-15T03:26:39Z" level=info msg="successfully synced Report \"metering/namespace-memory-usage\"" Report=metering/namespace-memory-usage app=metering component=reportWorker logID=UU1DDuQV1x
time="2020-09-15T03:26:39Z" level=info msg="syncing ReportDataSource metering/pod-request-cpu-cores" app=metering component=reportDataSourceWorker logID=8fDZlEdUHV
time="2020-09-15T03:26:39Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_pod_request_cpu_cores" app=metering component=reportDataSourceWorker logID=8fDZlEdUHV namespace=metering reportDataSource=pod-request-cpu-cores
time="2020-09-15T03:26:39Z" level=warning msg="time range 2020-09-11 21:37:00 +0000 UTC to 2020-09-15 03:26:39.767182985 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-11 21:47:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=0rIIvf37iT namespace=metering reportDataSource=pod-request-cpu-cores stepSize=1m0s tableName=hive.metering.datasource_metering_pod_request_cpu_cores
time="2020-09-15T03:26:40Z" level=info msg="syncing ReportDataSource metering/node-capacity-cpu-cores" app=metering component=reportDataSourceWorker logID=hhqcNX5M9g
time="2020-09-15T03:26:40Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_node_capacity_cpu_cores" app=metering component=reportDataSourceWorker logID=hhqcNX5M9g namespace=metering reportDataSource=node-capacity-cpu-cores
time="2020-09-15T03:26:40Z" level=warning msg="time range 2020-09-15 02:25:00 +0000 UTC to 2020-09-15 03:26:40.981847857 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-15 02:35:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=ZGwC8F7x3U namespace=metering reportDataSource=node-capacity-cpu-cores stepSize=1m0s tableName=hive.metering.datasource_metering_node_capacity_cpu_cores
time="2020-09-15T03:26:41Z" level=info msg="syncing ReportDataSource metering/node-capacity-memory-bytes" app=metering component=reportDataSourceWorker logID=X4ZDjaZZWY
time="2020-09-15T03:26:41Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_node_capacity_memory_bytes" app=metering component=reportDataSourceWorker logID=X4ZDjaZZWY namespace=metering reportDataSource=node-capacity-memory-bytes
time="2020-09-15T03:26:41Z" level=warning msg="time range 2020-09-15 02:07:00 +0000 UTC to 2020-09-15 03:26:41.440319727 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-15 02:17:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=yV8AIqzCOe namespace=metering reportDataSource=node-capacity-memory-bytes stepSize=1m0s tableName=hive.metering.datasource_metering_node_capacity_memory_bytes
time="2020-09-15T03:26:41Z" level=info msg="syncing ReportDataSource metering/persistentvolumeclaim-request-bytes" app=metering component=reportDataSourceWorker logID=4m9cmEONM7
time="2020-09-15T03:26:41Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_persistentvolumeclaim_request_bytes" app=metering component=reportDataSourceWorker logID=4m9cmEONM7 namespace=metering reportDataSource=persistentvolumeclaim-request-bytes
time="2020-09-15T03:26:41Z" level=warning msg="time range 2020-09-10 22:01:00 +0000 UTC to 2020-09-15 03:26:41.729268222 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-10 22:11:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=80Gs9Yhnql namespace=metering reportDataSource=persistentvolumeclaim-request-bytes stepSize=1m0s tableName=hive.metering.datasource_metering_persistentvolumeclaim_request_bytes
time="2020-09-15T03:26:42Z" level=info msg="syncing ReportDataSource metering/pod-persistentvolumeclaim-request-info" app=metering component=reportDataSourceWorker logID=CVS05Mt8zA
time="2020-09-15T03:26:42Z" level=info msg="existing Prometheus ReportDataSource discovered, tableName: hive.metering.datasource_metering_pod_persistentvolumeclaim_request_info" app=metering component=reportDataSourceWorker logID=CVS05Mt8zA namespace=metering reportDataSource=pod-persistentvolumeclaim-request-info
time="2020-09-15T03:26:42Z" level=warning msg="time range 2020-09-15 01:19:00 +0000 UTC to 2020-09-15 03:26:42.914216047 +0000 UTC exceeds PrometheusImporter MaxQueryRangeDuration 10m0s, newEndTime: 2020-09-15 01:29:00 +0000 UTC" app=metering chunkSize=5m0s component=PrometheusImporter logID=oPFyYpxO2V namespace=metering reportDataSource=pod-persistentvolumeclaim-request-info stepSize=1m0s tableName=hive.metering.datasource_metering_pod_persistentvolumeclaim_request_info
  5. Logs from the presto pod:
[root@kvm4ocp1 reports]# oc logs presto-coordinator-0  -c presto
...
2020-09-15T03:25:26.472Z	ERROR	remote-task-callback-11666	io.prestosql.execution.StageStateMachine	Stage 20200915_032526_04432_xi6vv.3 failed
java.lang.ArrayIndexOutOfBoundsException
	at io.prestosql.spi.block.DictionaryBlock.calculateCompactSize(DictionaryBlock.java:223)
	at io.prestosql.spi.block.DictionaryBlock.getSizeInBytes(DictionaryBlock.java:206)
	at io.prestosql.orc.OrcRecordReader.blockLoaded(OrcRecordReader.java:418)
	at io.prestosql.orc.OrcRecordReader.lambda$nextPage$9(OrcRecordReader.java:404)
	at io.prestosql.orc.OrcRecordReader$$Lambda$4598/00000000C41F88F0.accept(Unknown Source)
	at io.prestosql.spi.block.LazyBlock$LazyData.lambda$load$0(LazyBlock.java:403)
	at io.prestosql.spi.block.LazyBlock$LazyData$$Lambda$4599/00000000C414B9E0.accept(Unknown Source)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at io.prestosql.spi.block.LazyBlock$LazyData.load(LazyBlock.java:403)
	at io.prestosql.spi.block.LazyBlock$LazyData.getBlock(LazyBlock.java:354)
	at io.prestosql.spi.block.LazyBlock.getBlock(LazyBlock.java:264)
	at io.prestosql.spi.block.LazyBlock.isNull(LazyBlock.java:253)
	at io.prestosql.spi.block.AbstractSingleMapBlock.isNull(AbstractSingleMapBlock.java:58)
	at io.prestosql.spi.type.TypeUtils.readNativeValue(TypeUtils.java:39)
	at io.prestosql.operator.scalar.MapElementAtFunction.elementAt(MapElementAtFunction.java:149)
	at java.lang.invoke.DirectHandle.invokeExact_thunkArchetype_L(DirectHandle.java:298)
	at java.lang.invoke.AsTypeHandle.invokeExact_thunkArchetype_X(AsTypeHandle.java:49)
	at java.lang.invoke.BruteArgumentMoverHandle.invokeExact_thunkArchetype_X(BruteArgumentMoverHandle.java:404)
	at java.lang.invoke.FilterReturnHandle.invokeExact_thunkArchetype_X(FilterReturnHandle.java:59)
	at java.lang.invoke.AsTypeHandle.invokeExact_thunkArchetype_X(AsTypeHandle.java:49)
	at io.prestosql.$gen.PageFilter_20200910_073211_245.filter(Unknown Source)
	at io.prestosql.$gen.PageFilter_20200910_073211_245.filter(Unknown Source)
	at io.prestosql.operator.project.PageProcessor.createWorkProcessor(PageProcessor.java:121)
	at io.prestosql.operator.ScanFilterAndProjectOperator$SplitToPages.lambda$processPageSource$1(ScanFilterAndProjectOperator.java:266)
	at io.prestosql.operator.ScanFilterAndProjectOperator$SplitToPages$$Lambda$3582/00000000BC118D50.apply(Unknown Source)
	at io.prestosql.operator.WorkProcessorUtils.lambda$flatMap$4(WorkProcessorUtils.java:245)
	at io.prestosql.operator.WorkProcessorUtils$$Lambda$3378/00000000B400E700.process(Unknown Source)
	at io.prestosql.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:319)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
	at io.prestosql.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
	at io.prestosql.operator.WorkProcessorUtils$$Lambda$3475/00000000C8046A00.process(Unknown Source)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils.lambda$flatten$6(WorkProcessorUtils.java:277)
	at io.prestosql.operator.WorkProcessorUtils$$Lambda$3385/00000000B400F6C0.process(Unknown Source)
	at io.prestosql.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:319)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
	at io.prestosql.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
	at io.prestosql.operator.WorkProcessorUtils$$Lambda$3475/00000000C8046A00.process(Unknown Source)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
	at io.prestosql.operator.WorkProcessorUtils.lambda$finishWhen$3(WorkProcessorUtils.java:215)
	at io.prestosql.operator.WorkProcessorUtils$$Lambda$3546/00000000BC114E80.process(Unknown Source)
	at io.prestosql.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
	at io.prestosql.operator.WorkProcessorSourceOperatorAdapter.getOutput(WorkProcessorSourceOperatorAdapter.java:148)
	at io.prestosql.operator.Driver.processInternal(Driver.java:379)
	at io.prestosql.operator.Driver.lambda$processFor$8(Driver.java:283)
	at io.prestosql.operator.Driver$$Lambda$2797/00000000BC0BD090.get(Unknown Source)
	at io.prestosql.operator.Driver.tryWithLock(Driver.java:675)
	at io.prestosql.operator.Driver.processFor(Driver.java:276)
	at io.prestosql.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1075)
	at io.prestosql.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
	at io.prestosql.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)
	at io.prestosql.$gen.Presto_3305fb6_dirty____20200910_034954_2.run(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:823)


2020-09-15T03:25:26.496Z	INFO	dispatcher-query-30497	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032526_04432_xi6vv :: Transaction:[2dd7ff2a-88e8-4bba-911d-9657f78c5770] :: elapsed 443ms :: planning 118ms :: waiting 218ms :: scheduling 316ms :: running 6ms :: finishing 3ms :: begin 2020-09-15T03:25:26.029Z :: end 2020-09-15T03:25:26.472Z
2020-09-15T03:25:27.316Z	INFO	dispatcher-query-29694	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032526_04434_xi6vv :: Transaction:[39d327aa-2531-492f-a85d-1d3f36e4cd94] :: elapsed 790ms :: planning 85ms :: waiting 144ms :: scheduling 181ms :: running 4ms :: finishing 520ms :: begin 2020-09-15T03:25:26.508Z :: end 2020-09-15T03:25:27.298Z
2020-09-15T03:25:30.804Z	INFO	dispatcher-query-30387	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032528_04436_xi6vv :: Transaction:[7c0812ea-8903-4536-9c38-4238cbdb25e8] :: elapsed 2429ms :: planning 756ms :: waiting 180ms :: scheduling 229ms :: running 21ms :: finishing 1423ms :: begin 2020-09-15T03:25:28.369Z :: end 2020-09-15T03:25:30.798Z
2020-09-15T03:25:31.168Z	INFO	dispatcher-query-30475	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032529_04438_xi6vv :: Transaction:[6ba8819f-0627-490f-b8ad-87c7f06c2372] :: elapsed 1392ms :: planning 412ms :: waiting 87ms :: scheduling 105ms :: running 69ms :: finishing 806ms :: begin 2020-09-15T03:25:29.769Z :: end 2020-09-15T03:25:31.161Z
2020-09-15T03:25:33.617Z	INFO	dispatcher-query-30434	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032530_04441_xi6vv :: Transaction:[93bc4459-17c1-4c27-bb8f-a2380852ecd8] :: elapsed 2078ms :: planning 431ms :: waiting 48ms :: scheduling 62ms :: running 29ms :: finishing 1556ms :: begin 2020-09-15T03:25:30.875Z :: end 2020-09-15T03:25:32.953Z
2020-09-15T03:25:35.259Z	INFO	dispatcher-query-30325	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032533_04443_xi6vv :: Transaction:[8a0b48d7-d420-4ba0-8135-daaaee3afe0d] :: elapsed 1609ms :: planning 576ms :: waiting 82ms :: scheduling 157ms :: running 122ms :: finishing 754ms :: begin 2020-09-15T03:25:33.621Z :: end 2020-09-15T03:25:35.230Z
2020-09-15T03:25:35.260Z	INFO	dispatcher-query-30375	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032531_04442_xi6vv :: Transaction:[e0c09911-f875-4d6d-9780-2e9aa4a1d800] :: elapsed 3281ms :: planning 1477ms :: waiting 607ms :: scheduling 749ms :: running 56ms :: finishing 999ms :: begin 2020-09-15T03:25:31.971Z :: end 2020-09-15T03:25:35.252Z
2020-09-15T03:25:36.723Z	INFO	dispatcher-query-30375	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032535_04444_xi6vv :: Transaction:[bba0637a-ae0c-4017-8af5-6b45f75d5fb2] :: elapsed 1352ms :: planning 268ms :: waiting 56ms :: scheduling 129ms :: running 24ms :: finishing 931ms :: begin 2020-09-15T03:25:35.355Z :: end 2020-09-15T03:25:36.707Z
2020-09-15T03:25:37.093Z	INFO	dispatcher-query-30266	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032526_04433_xi6vv :: Transaction:[ba2d4580-23c5-46aa-9ff8-ca9ed79620a0] :: elapsed 10868ms :: planning 9014ms :: waiting 224ms :: scheduling 1025ms :: running 28ms :: finishing 801ms :: begin 2020-09-15T03:25:26.167Z :: end 2020-09-15T03:25:37.035Z
2020-09-15T03:25:39.074Z	INFO	dispatcher-query-30087	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032536_04445_xi6vv :: Transaction:[51f23b03-0587-41d9-9d38-c6bd322d7d79] :: elapsed 1481ms :: planning 213ms :: waiting 75ms :: scheduling 166ms :: running 16ms :: finishing 1086ms :: begin 2020-09-15T03:25:36.876Z :: end 2020-09-15T03:25:38.357Z
2020-09-15T03:25:40.994Z	INFO	dispatcher-query-30338	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032528_04435_xi6vv :: Transaction:[4a892da6-0a86-4af0-bc94-99ce62c38748] :: elapsed 11632ms :: planning 7752ms :: waiting 434ms :: scheduling 944ms :: running 796ms :: finishing 2140ms :: begin 2020-09-15T03:25:29.280Z :: end 2020-09-15T03:25:40.912Z
2020-09-15T03:25:41.421Z	INFO	dispatcher-query-30469	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032537_04446_xi6vv :: Transaction:[d1fbefd3-2391-47d9-9279-3fefb694831b] :: elapsed 3771ms :: planning 236ms :: waiting 68ms :: scheduling 1198ms :: running 411ms :: finishing 1926ms :: begin 2020-09-15T03:25:37.642Z :: end 2020-09-15T03:25:41.413Z
2020-09-15T03:25:41.697Z	INFO	dispatcher-query-30365	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032539_04447_xi6vv :: Transaction:[ba3df812-0cee-4fee-97dd-9b8657d4489a] :: elapsed 1503ms :: planning 167ms :: waiting 353ms :: scheduling 426ms :: running 5ms :: finishing 905ms :: begin 2020-09-15T03:25:40.186Z :: end 2020-09-15T03:25:41.689Z
2020-09-15T03:25:41.765Z	INFO	dispatcher-query-30338	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032525_04431_xi6vv :: Transaction:[ec5e8c5b-b797-4818-b5f7-a5303be4173a] :: elapsed 15628ms :: planning 11103ms :: waiting 220ms :: scheduling 3522ms :: running 15ms :: finishing 988ms :: begin 2020-09-15T03:25:26.038Z :: end 2020-09-15T03:25:41.666Z
2020-09-15T03:25:41.949Z	INFO	dispatcher-query-30297	io.prestosql.event.QueryMonitor	TIMELINE: Query 20200915_032540_04448_xi6vv :: Transaction:[ab0ddc93-bb55-4367-a775-a2986647dd10] :: elapsed 1770ms :: planning 62ms :: waiting 365ms :: scheduling 419ms :: running 14ms :: finishing 1275ms :: begin 2020-09-15T03:25:40.174Z :: end 2020-09-15T03:25:41.944Z

xkwangcn (Author) commented Sep 15, 2020

Hi team, the above is my problem.
The OCP version is 4.5, and the metering version is 4.7. This error does not occur for all kinds of reports; some reports can be created successfully, and I can get their data with the curl command.
Could you help me with this issue? Thank you very much.

bentito (Member) commented Sep 15, 2020


Hi @xkwangcn , It looks like a pretty specific Presto bug you're encountering. Can you reproduce it each time? Can we get the CR for this deployment?
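In case it helps, one way to grab the CR would be something like the following (a sketch, assuming the operator lives in the `metering` namespace; the CR name is a placeholder, use whatever the first command actually lists):

```shell
# List the MeteringConfig resources in the metering namespace, then dump
# the one in use as YAML. "operator-metering" below is a guessed name --
# substitute the name shown by the first command.
oc -n metering get meteringconfig
oc -n metering get meteringconfig operator-metering -o yaml > metering-cr.yaml
```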

pruan-rht (Contributor) commented

@xkwangcn you are running metering 4.7? 4.6 is not GA'd yet... just want to make sure that's not a typo.

xkwangcn (Author) commented

Hi @pruan-rht, thanks for your reply.
4.7 is not a typo.
Because of several factors, I am currently using metering 4.7.
By the way, if 4.7 is not GA, does it still work correctly? For example, in your environment, does the namespace-cpu-request report run normally?

xkwangcn (Author) commented

Hi @bentito, thanks for your reply.
Yes, I agree with you, this is a Presto bug. I was not sure before whether it was a Presto error or a reporting-operator error, which is why I opened this issue to ask for advice. Next, I think I should debug it in Presto and ask the Presto team about the specific error.
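One way to narrow it down could be to replay the query directly against Presto, bypassing the reporting-operator entirely (a sketch only; the pod and container names come from the logs above, but the server port and the SELECT are placeholders, not the actual generated report query):

```shell
# Run the Presto CLI inside the coordinator pod and read the table named in
# the error message. If a plain SELECT fails with the same
# ArrayIndexOutOfBoundsException, that points at a Presto/ORC reader bug
# rather than anything in the reporting-operator.
oc -n metering exec -it presto-coordinator-0 -c presto -- \
  presto-cli --server localhost:8080 --catalog hive --schema metering \
  --execute 'SELECT * FROM report_metering_namespace_cpu_request LIMIT 10'
```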

bentito (Member) commented Sep 16, 2020


@xkwangcn That sounds fine to me. That's mostly what I was going to do on your behalf, ask on the prestosql Slack workspace. I can verify namespace-cpu-request worked properly for me in roughly the same setup -- a 4.5 OCP cluster, deployed from a release-4.7 tag checkout using bin/deploy-metering install. I ran roughly that same report, though only over a 9 minute period, and it completed with no error. It would help fidelity if we could check out the metering CR you used.
