[Bug] CALL sys.expire_partitions failed when using hive metastore and setting 'metastore.partitioned-table' = 'false'. #4873

Open
JingFengWang opened this issue Jan 9, 2025 · 0 comments
Labels
bug Something isn't working

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

release-1.0

Compute Engine

spark-3.2.0

Minimal reproduce step

Step 1: Create a date-partitioned table with 'metastore.partitioned-table' = 'false'.
Step 2: Write test data for the last N days.
Step 3: CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd'); (see the sketch below)
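For reference, a minimal Spark SQL sketch of these steps, assuming a Paimon catalog backed by the Hive metastore; the table name db.tb is taken from the report, while the schema and the partition columns dt/hh are illustrative assumptions (the report only shows partition values such as '2025-01-01, 15'):

-- Step 1: date-partitioned table with metastore partition sync disabled
CREATE TABLE db.tb (
    id BIGINT,
    v  STRING,
    dt STRING,
    hh STRING
) PARTITIONED BY (dt, hh)
TBLPROPERTIES ('metastore.partitioned-table' = 'false');

-- Step 2: write test data for the last N days (one illustrative row here)
INSERT INTO db.tb VALUES (1, 'a', '2025-01-01', '15');

-- Step 3: expire partitions older than one day
CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd');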

What doesn't meet your expectations?

When sys.expire_partitions is executed, an exception is thrown and no snapshot is generated, so the partitions that should expire are never physically deleted after snapshot expiration.

Anything else?

Exception information:
spark-sql> CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd');
25/01/08 17:50:18 ERROR SparkSQLDriver: Failed in [CALL sys.expire_partitions(table => 'db.tb', expiration_time => '1 d', timestamp_formatter => 'yyyy-MM-dd')]
java.lang.RuntimeException: MetaException(message:Invalid partition key & values; keys [], values [2025-01-01, 15, ])
    at org.apache.paimon.operation.PartitionExpire.deleteMetastorePartitions(PartitionExpire.java:175)
    at org.apache.paimon.operation.PartitionExpire.doExpire(PartitionExpire.java:162)
    at org.apache.paimon.operation.PartitionExpire.expire(PartitionExpire.java:139)
    at org.apache.paimon.operation.PartitionExpire.expire(PartitionExpire.java:109)
    at org.apache.paimon.spark.procedure.ExpirePartitionsProcedure.lambda$call$2(ExpirePartitionsProcedure.java:115)
    at org.apache.paimon.spark.procedure.BaseProcedure.execute(BaseProcedure.java:88)
    at org.apache.paimon.spark.procedure.BaseProcedure.modifyPaimonTable(BaseProcedure.java:78)
    at org.apache.paimon.spark.procedure.ExpirePartitionsProcedure.call(ExpirePartitionsProcedure.java:87)
    at org.apache.paimon.spark.execution.PaimonCallExec.run(PaimonCallExec.scala:32)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)

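For context, the MetaException with keys [] suggests the Hive Metastore has no partition keys registered for this table, which is the expected state when 'metastore.partitioned-table' = 'false', so the metastore-side partition drop issued during expiration cannot succeed. A hedged way to check how HMS sees the table (table name as in the report):

-- Run in Hive/Beeline; with 'metastore.partitioned-table' = 'false' the output
-- is expected to contain no "# Partition Information" section.
DESCRIBE FORMATTED db.tb;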
Are you willing to submit a PR?

  • I'm willing to submit a PR!