Applications integrating with Zecale, such as Zeth, may rely on extra pieces of data that are encrypted and/or added to their transactions, along with a proof of computational integrity (which may or may not be zero-knowledge).
In the case of Zeth, we pass ciphertexts of the ZethNotes as part of the transaction object. To limit the overhead of sending these ciphertexts to the chain, aggregators running Zecale could (a rough sketch follows the list):
Consume BATCH_SIZE Zeth txs from their aggregation pool
Recursively verify all the associated ZKPs to obtain a single proof of computational integrity for the whole batch (instead of relaying the BATCH_SIZE initial proofs to the chain)
Then, store all the ZethNote ciphertexts as, say, a JSON object on IPFS
Finally, add this IPFS link to the tx object so that the contracts emit the link via an event (to be listened to by wallets).
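To make the flow concrete, here is a minimal aggregator-side sketch in Python. Everything below is hypothetical: `aggregation_pool`, `zecale_aggregate_proofs`, and the tx attributes are placeholders rather than actual Zecale/Zeth APIs, and the IPFS interaction assumes a local node reachable via `ipfshttpclient`.

```python
# Hypothetical aggregator-side flow; all names are placeholders, not the
# real Zecale/Zeth interfaces.
import ipfshttpclient  # assumed IPFS client library (local node on port 5001)

BATCH_SIZE = 4  # illustrative batch size


def build_aggregation_tx(aggregation_pool, ipfs_api="/ip4/127.0.0.1/tcp/5001/http"):
    # 1. Consume BATCH_SIZE Zeth txs from the aggregation pool.
    batch = [aggregation_pool.pop() for _ in range(BATCH_SIZE)]

    # 2. Recursively verify/aggregate the BATCH_SIZE proofs into a single
    #    Zecale proof (placeholder call).
    zecale_proof = zecale_aggregate_proofs([tx.proof for tx in batch])

    # 3. Store all the ZethNote ciphertexts as a single JSON object on IPFS.
    ciphertexts = [ct for tx in batch for ct in tx.note_ciphertexts]
    with ipfshttpclient.connect(ipfs_api) as client:
        cid = client.add_json(ciphertexts)

    # 4. Build the aggregation tx: usual Zeth data, the single proof, IPFS link.
    return {
        "commitments": [c for tx in batch for c in tx.commitments],
        "nullifiers": [n for tx in batch for n in tx.nullifiers],
        "zecale_proof": zecale_proof,
        "ciphertexts_cid": cid,
    }
```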
The aggregation tx would then contain all the usual Zeth data: the commitments and nullifiers for the batch, the single Zecale proof, and an IPFS link pointing to all the ZethNote ciphertexts. This would reduce the size of the Zecale tx, at the expense of adding an extra IPFS step.
The Zeth Receive algorithm in this case would be: listen to the Mixer event, get the IPFS link, retrieve the stored set of ciphertexts, and try to decrypt them all (sketched below).
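A corresponding wallet-side sketch, again purely illustrative: `get_emitted_cid` (reading the IPFS link from the Mixer event) and `try_decrypt_note` are hypothetical helpers, not existing Zeth functions.

```python
# Hypothetical wallet-side "Receive" flow; helper names are placeholders.
import ipfshttpclient  # assumed IPFS client library


def receive(mixer_contract, receiver_sk, ipfs_api="/ip4/127.0.0.1/tcp/5001/http"):
    # 1. Listen to the Mixer event and read the IPFS link it carries
    #    (placeholder helper).
    cid = get_emitted_cid(mixer_contract)

    # 2. Retrieve the stored set of ciphertexts from IPFS.
    with ipfshttpclient.connect(ipfs_api) as client:
        ciphertexts = client.get_json(cid)

    # 3. Trial-decrypt every ciphertext and keep the notes addressed to us.
    received_notes = []
    for ct in ciphertexts:
        note = try_decrypt_note(receiver_sk, ct)
        if note is not None:
            received_notes.append(note)
    return received_notes
```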
The immediate remark that comes to mind is: we need to make sure the IPFS link is computed properly and that the aggregator does indeed store the data there; otherwise, the ZethNote data is lost and, as a consequence, so are the funds. Furthermore, this requires changes to the Zeth code/interface and requires user wallets to connect to IPFS. This adds complexity on the wallet side, and we need to study the potential leakages associated with this new interaction and protocol change.
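One cheap (and again hypothetical) mitigation for the availability concern would be for the aggregator, or anyone watching it, to check that the ciphertext blob is actually retrievable under the emitted CID around the time the aggregation tx is relayed; the gateway URL and comparison below are just one way to sketch that check.

```python
# Hypothetical availability check: confirm the ciphertext blob is retrievable
# under the CID that will be (or was) emitted on-chain. Gateway and timeout
# are illustrative.
import requests


def ciphertexts_available(cid, expected_ciphertexts,
                          gateway="https://ipfs.io/ipfs/", timeout=30):
    resp = requests.get(gateway + cid, timeout=timeout)
    resp.raise_for_status()
    # A mismatch (or a timeout above) means receivers could not recover
    # their ZethNotes from the emitted link.
    return resp.json() == expected_ciphertexts
```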
I haven't thought deeply about the consequences of this. For now, I'm just logging the idea here for future discussion, as I'm keen to see whether it is fundamentally broken or whether it could be a promising avenue for reducing Zecale x Zeth transaction size.