Minimal working example
import numpy as np
import pyttb as ttb
from pyttb.gcp.handles import Objectives
from pyttb.gcp.optimizers import LBFGSB

# Build a small 2 x 2 test tensor
X = ttb.tenones((2, 2))
X[0, 1] = 0.0
X[1, 0] = 0.0
rank = 2
# Select Gaussian objective
objective = Objectives.GAUSSIAN
# Select LBFGSB solver limited to 1 iteration
optimizer = LBFGSB(maxiter=1)
# Compute rank-2 GCP approximation to X with GCP-OPT
# Return result, initial guess, and runtime information
np.random.seed(0)  # Creates consistent initial guess
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X, rank=rank, objective=objective, optimizer=optimizer, printitn=1
)
# <-- No output, despite printitn=1
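To confirm programmatically that nothing is printed here, one can capture stdout around the call; this is a minimal sketch reusing the X, rank, objective, and optimizer objects defined above.

import contextlib
import io

# Capture anything written to stdout during the solve
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    ttb.gcp_opt(
        data=X, rank=rank, objective=objective, optimizer=optimizer, printitn=1
    )
print(repr(buffer.getvalue()))  # expected: '' -- no per-iteration output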
However, constructing LBFGSB with iprint=1 does produce the solver's iteration output:
# Select LBFGSB solver limited to 1 iteration, with the solver's own printing enabled
optimizer = LBFGSB(maxiter=1, iprint=1)
# Compute rank-2 GCP approximation to X with GCP-OPT
# Return result, initial guess, and runtime information
np.random.seed(0)  # Creates consistent initial guess
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X, rank=rank, objective=objective, optimizer=optimizer
)
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 540 M = 10
At X0 0 variables are exactly at the bounds
At iterate 0 f= 9.14877D+06 |proj g|= 1.23309D+06
At iterate 1 f= 8.91367D+06 |proj g|= 9.45422D+05
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
540 1 2 1 0 0 9.454D+05 8.914D+06
F = 8913669.7805772256
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT
Final fit: 0.09313583776378109 (for comparison to f(x) in CP-ALS)
CPU times: user 164 ms, sys: 111 ms, total: 275 ms
Wall time: 36.8 ms
This problem is unconstrained.
If this is intended behavior, it should be documented.
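For context, the verbose block above is emitted by scipy's underlying L-BFGS-B routine, whose verbosity is controlled by its iprint option rather than by anything pyttb prints via printitn. A scipy-only sketch that triggers the same kind of trace, assuming a scipy version that still accepts iprint for L-BFGS-B (as in the run above) and using an arbitrary quadratic objective purely for illustration:

import numpy as np
from scipy.optimize import minimize

# Arbitrary smooth objective, used only to trigger the solver's trace
def quadratic(x):
    return float(np.sum((x - 1.0) ** 2))

# iprint >= 1 turns on the "RUNNING THE L-BFGS-B CODE" output;
# the default, iprint = -1, keeps the solver silent.
minimize(quadratic, x0=np.zeros(5), method="L-BFGS-B",
         options={"maxiter": 1, "iprint": 1})

So printitn and iprint operate at different layers, which is presumably why only the iprint=1 run shows any iteration output.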
Are you saying the LBFGS documentation should be more verbose, or that we should be more explicit about gcp_opt controlling the higher-level print details while the solvers separately control the low-level prints? Additionally, we could align the print-iteration defaults between LBFGS and our StochasticSolvers if they currently differ.
All solvers, if possible, should provide the same iterative output.
If that is not possible, the first k columns should be identical across solvers, with solver-specific details in the remaining columns.
All pyttb solvers should control what is sent to the user. E.g., in the case of LBFGS (or some other solver), capture the output or provide a hook (if possible) to get the necessary information every printitn iterations, format it in pyttb, and send it to the output; a rough sketch of this idea follows below. From a user's point of view, having to look through lots of different solver outputs is very confusing.
Do not allow users to see underlying solver output except when requested, and then only through a variable passed back in the output (info). Cluttering the output with multiple variants from different solvers puts pyttb on the hook to make sure our users can make sense of and use them.
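As a rough illustration of the hook idea (a sketch, not pyttb's actual implementation), scipy's minimize already accepts a callback that is invoked once per L-BFGS-B iteration; a wrapper could use it to emit one uniform, pyttb-formatted line every printitn iterations while keeping the solver itself silent. The function name and print format below are hypothetical.

import numpy as np
from scipy.optimize import minimize

def lbfgsb_with_uniform_printing(f, x0, printitn=1, maxiter=100):
    # Hypothetical wrapper: print a uniform per-iteration line every
    # `printitn` iterations while the underlying solver stays quiet.
    count = {"iter": 0}

    def callback(xk):
        # scipy calls this once per L-BFGS-B iteration with the current iterate
        count["iter"] += 1
        if printitn > 0 and count["iter"] % printitn == 0:
            print(f"Iter {count['iter']:4d}: f = {f(xk):.6e}")

    return minimize(
        f, x0, method="L-BFGS-B", callback=callback,
        options={"maxiter": maxiter, "iprint": -1},  # solver itself stays silent
    )

# Usage with an arbitrary quadratic objective
result = lbfgsb_with_uniform_printing(
    lambda x: float(np.sum((x - 1.0) ** 2)), x0=np.zeros(5), printitn=1
)

Re-evaluating f inside the callback adds one extra function evaluation per printed line; threading the solver's own per-iteration values through info instead would be the cleaner long-term option.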