Search before asking
I searched in the issues and found nothing similar.
Paimon version
release-1.0
Compute Engine
spark-3.2.0
Minimal reproduce step
step 1: Create a date-partitioned table with 'metastore.partitioned-table' = 'false' (a runnable sketch follows step 3)
step 2: Write test data for the last N days
step 3: CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd');
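For clarity, here is a runnable sketch of the three steps, assuming a Paimon catalog is configured in spark-sql. Only 'db.tb' and the 'metastore.partitioned-table' option come from the report; the schema, column names, and sample values are illustrative (the partition columns dt/hh are guessed from the values [2025-01-01, 15, ] in the exception below):

```sql
-- Step 1: hypothetical date-partitioned table; schema is illustrative.
CREATE TABLE db.tb (
    id INT,
    v  STRING,
    dt STRING,
    hh STRING
) PARTITIONED BY (dt, hh)
TBLPROPERTIES (
    'metastore.partitioned-table' = 'false'
);

-- Step 2: write test data for the last N days (two sample rows here).
INSERT INTO db.tb VALUES
    (1, 'a', '2025-01-01', '15'),
    (2, 'b', '2025-01-08', '10');

-- Step 3: expire partitions older than one day (throws the exception below).
CALL sys.expire_partitions(
    table => 'db.tb',
    expiration_time => '1 d',
    timestamp_formatter => 'yyyy-MM-dd'
);
```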
What doesn't meet your expectations?
Executing sys.expire_partitions throws an exception and generates no snapshot, so the partitions that should expire are never physically deleted when snapshots later expire.
Anything else?
Exception information:
spark-sql> CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd');
25/01/08 17:50:18 ERROR SparkSQLDriver: Failed in [CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd')]
java.lang.RuntimeException: MetaException(message:Invalid partition key & values; keys [], values [2025-01-01, 15, ])
at org.apache.paimon.operation.PartitionExpire.deleteMetastorePartitions(PartitionExpire.java:175)
at org.apache.paimon.operation.PartitionExpire.doExpire(PartitionExpire.java:162)
at org.apache.paimon.operation.PartitionExpire.expire(PartitionExpire.java:139)
at org.apache.paimon.operation.PartitionExpire.expire(PartitionExpire.java:109)
at org.apache.paimon.spark.procedure.ExpirePartitionsProcedure.lambda$call$2(ExpirePartitionsProcedure.java:115)
at org.apache.paimon.spark.procedure.BaseProcedure.execute(BaseProcedure.java:88)
at org.apache.paimon.spark.procedure.BaseProcedure.modifyPaimonTable(BaseProcedure.java:78)
at org.apache.paimon.spark.procedure.ExpirePartitionsProcedure.call(ExpirePartitionsProcedure.java:87)
at org.apache.paimon.spark.execution.PaimonCallExec.run(PaimonCallExec.scala:32)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
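The empty keys [] in the MetaException suggest that with 'metastore.partitioned-table' = 'false' the Hive table carries no partition keys, yet PartitionExpire.deleteMetastorePartitions still tries to drop metastore partitions. A minimal sketch of the kind of guard that could avoid this, purely illustrative (class, field, and method names here are assumptions, not Paimon's actual internals):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a guard in the expire path; all names are
// assumptions, not Paimon's actual code.
class PartitionExpireSketch {

    interface MetastoreClient {
        void deletePartition(Map<String, String> partitionSpec);
    }

    private final MetastoreClient metastoreClient;
    // Mirrors the 'metastore.partitioned-table' table option.
    private final boolean metastorePartitionedTable;

    PartitionExpireSketch(MetastoreClient client, boolean partitionedTable) {
        this.metastoreClient = client;
        this.metastorePartitionedTable = partitionedTable;
    }

    void deleteMetastorePartitions(List<Map<String, String>> partitions) {
        // With 'metastore.partitioned-table' = 'false', the Hive table has no
        // partition keys, so dropping metastore partitions fails with
        // "Invalid partition key & values; keys [], values [...]".
        if (!metastorePartitionedTable || partitions.isEmpty()) {
            return; // skip metastore cleanup; data files can still expire
        }
        for (Map<String, String> spec : partitions) {
            metastoreClient.deletePartition(spec);
        }
    }
}
```

With such a guard, data-file expiration and snapshot generation could proceed while metastore cleanup is skipped for tables that are not synced as partitioned in the metastore.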
Are you willing to submit a PR?
I'm willing to submit a PR!