Cannot find the contraction order for CPU version. #506
Is it possible for the CPU version to print debugging information?
For Fig. 12 of the SciPost paper:

```python
import cytnx
import numpy as np

net = cytnx.Network()
net.FromString(["c0: t0-c0, t3-c0",
                "c1: t1-c1, t0-c1",
                "c2: t2-c2, t1-c2",
                "c3: t3-c3, t2-c3",
                "t0: t0-c1, w-t0, t0-c0",
                "t1: t1-c2, w-t1, t1-c1",
                "t2: t2-c3, w-t2, t2-c2",
                "t3: t3-c0, w-t3, t3-c3",
                "w: w-t0, w-t1, w-t2, w-t3",
                "TOUT:",
                "ORDER: ((((((((c0,t0),c1),t3),w),t1),c3),t2),c2)"])

chi = 2
chi_int = chi
chi_bd = chi

# corner tensors
c0 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_bd], 0., 1.))
c1 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_bd], 0., 1.))
c2 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_bd], 0., 1.))
c3 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_bd], 0., 1.))

# edge tensors
t0 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_int, chi_bd], 0., 1.))
t1 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_int, chi_bd], 0., 1.))
t2 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_int, chi_bd], 0., 1.))
t3 = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_int, chi_bd], 0., 1.))

# bulk tensor
w = cytnx.UniTensor(cytnx.random.normal([chi_int, chi_int, chi_int, chi_int], 0., 1.))

net.PutUniTensors(["c0", "c1", "c2", "c3"], [c0, c1, c2, c3])
net.PutUniTensors(["t0", "t1", "t2", "t3"], [t0, t1, t2, t3])
net.PutUniTensors(["w"], [w])

net.setOrder(optimal=True)
print(net.getOrder())
res = net.Launch()
res.print_diagram()
```
I find something strange. If I modify the bond dimension of c0's t3-c0 bond, but DO NOT modify the matching t3-c0 bond on t3, I can still get a result from net.Launch().
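A minimal sketch of what I mean, reusing the network and dimensions defined above (the enlarged dimension is only for illustration):

```python
# Give c0 a t3-c0 leg whose dimension differs from the matching leg on t3.
c0_bad = cytnx.UniTensor(cytnx.random.normal([chi_bd, chi_bd + 1], 0., 1.))
net.PutUniTensors(["c0"], [c0_bad])

# No bond-dimension check is performed, so this still returns a result
# instead of raising an error.
res = net.Launch()
res.print_diagram()
```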
I asked ChatGPT to compare the two codes. Its summary: both share a similar purpose, finding an efficient contraction sequence for the tensors in a network, but they implement the algorithm differently. The first code offers a more complex, modular, and constraint-aware approach suitable for large, diverse tensor networks; the second is a more streamlined implementation, better suited for simpler or smaller tensor networks without extensive constraints. Reading the paper, it seems we are implementing a simplified version of the algorithm, but further investigation is necessary.
There seems to be no check on bond dimensions in the optimal-order calculation, only on the common labels. This should be implemented in the code.
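One possible shape for such a check, written as a standalone sketch over plain Python lists rather than against the Network internals (check_bond_dims and its arguments are hypothetical names, assuming UniTensor.shape() returns the list of dimensions):

```python
# Hypothetical pre-check: every label shared by two tensors must carry
# the same dimension before the optimal-order search starts.
def check_bond_dims(names, tensors, labels_per_tensor):
    seen = {}  # bond label -> (tensor name, dimension)
    for name, ut, labels in zip(names, tensors, labels_per_tensor):
        for axis, label in enumerate(labels):
            dim = ut.shape()[axis]
            if label in seen:
                other, other_dim = seen[label]
                if dim != other_dim:
                    raise ValueError(f"bond '{label}': dim {other_dim} on "
                                     f"{other} != dim {dim} on {name}")
            else:
                seen[label] = (name, dim)

# e.g. this would catch the c0/t3 mismatch above:
# check_bond_dims(["c0", "t3"], [c0_bad, t3],
#                 [["t0-c0", "t3-c0"], ["t3-c0", "w-t3", "t3-c3"]])
```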
I believe Cytnx implemented a breadth-first construction as described in Sec. II.A.2 of Jutho's paper, PRE 90, 033315 (2014), not the most efficient method. We will need to adapt the code to the existing Cytnx implementation.
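For reference, a minimal standalone sketch of that breadth-first search: dynamic programming over subsets of tensors, as in Sec. II.A of the paper but without its cost-capping refinements (optimal_order and its dict-based cost model are my own, not the Cytnx internals):

```python
from itertools import combinations

def optimal_order(tensors):
    """tensors: list of dicts, each mapping bond label -> dimension."""
    n = len(tensors)
    # best[S] = (cost, sequence, open bonds) for a subset S of tensor indices
    best = {frozenset([i]): (0, i, dict(tensors[i])) for i in range(n)}
    for size in range(2, n + 1):
        for sub in map(frozenset, combinations(range(n), size)):
            # split sub into every pair of disjoint, already-solved halves
            for k in range(1, size // 2 + 1):
                for left in map(frozenset, combinations(sorted(sub), k)):
                    right = sub - left
                    cl, seq_l, bl = best[left]
                    cr, seq_r, br = best[right]
                    # cost of this pairwise contraction: the product of the
                    # dimensions of all bonds carried by either factor
                    step = 1
                    for d in {**bl, **br}.values():
                        step *= d
                    cost = cl + cr + step
                    if sub not in best or cost < best[sub][0]:
                        shared = set(bl) & set(br)
                        bonds = {b: d for b, d in {**bl, **br}.items()
                                 if b not in shared}
                        best[sub] = (cost, (seq_l, seq_r), bonds)
    cost, seq, _ = best[frozenset(range(n))]
    return cost, seq

# e.g. a three-tensor chain A(a,b) B(b,c) C(c,d):
# optimal_order([{"a": 2, "b": 8}, {"b": 8, "c": 8}, {"c": 8, "d": 2}])
# -> (160, (0, (1, 2)))
```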
The optimal order for the tensor network shown in the attached figure cannot be found with the CPU version, but can be found with the GPU version (based on cuQuantum, with all of the UniTensor(s) defined on the GPU device). The following code tries to get the optimal order, but it cannot finish the run within one day.
The network file 'iPEPS_observe.net':