Replies: 44 comments
-
Hi there, thanks for reaching out. The principal limitation of InfiniteOpt (in its current form) is its lack of support for non-quadratic nonlinear functions of decision variables (e.g., `log(c(t))`). This stems from our dependence on JuMP: JuMP itself allows general nonlinear functions/expressions when modeling directly, but its nonlinear interface is not extensible for packages like ours. This limitation will be lifted once jump-dev/MathOptInterface.jl#846 is addressed. There are plans to make a temporary patch in the meantime (#16). I hope to get to it sometime this summer, but we'll see what my schedule allows. It might be possible to use a change of variables to adapt your formulation. Otherwise, you can transcribe the problem manually into a traditional NLP form that JuMP can handle.
-
something like:

```julia
using JuMP, Ipopt

Δt = 0.01
T = 10
time = 0.0:Δt:T

model = Model(Ipopt.Optimizer)
@variable(model, c[time] >= 0.0001)  # lower bound to avoid log(0)
@variable(model, B[time])
fix(B[0.0], 100.0)
fix(B[T], 0.0)
@NLobjective(model, Max, sum(exp(-0.01 * t) * log(c[t]) for t in time))
@constraint(
    model,
    [i = 2:length(T)],
    B[t] == B[t - 1] + Δt * (0.05 * B[t - 1] - c[t]),
)
optimize!(model)
```
-
@pulsipher

```julia
using InfiniteOpt, JuMP, Ipopt, Plots

m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 10_001)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)  # alternatively: @infinite_variable(m, c(t) >= 0)
@objective(m, Max, integral((-1) * (c - 100)^2, t))
@BDconstraint(m, (t == 0), B == 100)  # B(0) = 100
@BDconstraint(m, (t == 10), B == 0)   # B(10) = 0
@constraint(m, c1, @deriv(B, t) == 0.01 * B - c)
m
optimize!(m)
termination_status(m)
opt_obj = objective_value(m)
#c, B, value.(c), value.(B)
c_opt, B_opt = value.(c), value.(B)
u_ts = supports.(c)
u_ts = [u_ts[j][1] for j in eachindex(u_ts)]
plot(u_ts, B_opt, lab = "B: wealth balance")
plot!(u_ts, c_opt, lab = "c: consumption")
```

Here are the control/state vars from your package:
-
@odow thanks!

```julia
using JuMP, Ipopt, Plots

Δt = 0.01; T = 10.0; time = 0.0:Δt:T;
m = Model(Ipopt.Optimizer)
@variable(m, 0.0001 <= c[time] <= 99.0)
@variable(m, B[time])
fix(B[0.0], 100.0)
fix(B[T], 0.0)
@NLobjective(m, Max, sum((-1.0) * (c[t] - 100.0)^2.0 for t in time))
@constraint(m,
    [i = 2:length(T)],
    B[t] == B[t - 1] + Δt * (0.01 * B[t - 1] - c[t])
)
optimize!(m)
objective_value(m)
c_opt, B_opt = value.(c), value.(B)
plot(time, B_opt.data, lab = "B: wealth balance")
plot!(time, c_opt.data, lab = "c: consumption")
```

Do you have any idea what it might be?
-
The perils of writing code without checking that it runs:

```julia
using JuMP, Ipopt

Δt = 0.01; T = 10.0; time = 0.0:Δt:T;
m = Model(Ipopt.Optimizer)
@variable(m, 0.0001 <= c[time] <= 99.0)
@variable(m, B[time])
fix(B[0.0], 100.0)
fix(B[T], 0.0)
@NLobjective(m, Max, sum((-1.0) * (c[t] - 100.0)^2.0 for t in time))
for i in 2:length(time)
    t_1, t = time[i - 1], time[i]
    @constraint(m, B[t] == B[t_1] + Δt * (0.01 * B[t_1] - c[t]))
end
optimize!(m)
```
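The loop above is an explicit forward-Euler discretization of the wealth ODE (with interest rate $r = 0.01$ here):

$$\dot B(t) = r\,B(t) - c(t) \quad\Longrightarrow\quad B_{t_i} = B_{t_{i-1}} + \Delta t\,\bigl(r\,B_{t_{i-1}} - c_{t_i}\bigr),$$

which is exactly the constraint added for each `i in 2:length(time)`.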
-
@odow works beautifully. Thank you!
-
@azev77 That's a cool example, nice job setting it up. Yes, you can add in the discount factor since it is only a function of the infinite parameter.

Option 1: use a parameter function and embed it in the expression:

```julia
using InfiniteOpt, JuMP, Ipopt, Plots

discount(t) = exp(-0.01 * t)  # this will work as a function for our parameter function
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)
pfunc = parameter_function(discount, t)
@objective(m, Min, integral(pfunc * (c - 100)^2, t))
@BDconstraint(m, (t == 0), B == 100)
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, deriv(B, t) == 0.01 * B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m)
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
```

EDIT: See correction in the comments below.

Option 2: use the `weight_func` keyword of `integral`:

```julia
using InfiniteOpt, JuMP, Ipopt, Plots

discount(t) = exp(-0.01 * t)  # this will work as a weight function in the integral
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)
@objective(m, Min, integral((c - 100)^2, t, weight_func = discount))
@BDconstraint(m, (t == 0), B == 100)
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, deriv(B, t) == 0.01 * B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m)
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
```
-
Oh, my bad, that temporarily introduces an expression with 3 variable references multiplied together. This can be alleviated by introducing a placeholder variable:

```julia
using InfiniteOpt, JuMP, Ipopt, Plots

discount(t) = exp(-0.01 * t)  # this will work as a function for our parameter function
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@infinite_variable(m, B(t))
@infinite_variable(m, 0 <= c(t) <= 99)
@infinite_variable(m, placeholder(t))
pfunc = parameter_function(discount, t)
@constraint(m, placeholder == (c - 100)^2)
@objective(m, Min, integral(pfunc * placeholder, t))
@BDconstraint(m, (t == 0), B == 100)
@BDconstraint(m, (t == 10), B == 0)
@constraint(m, c1, deriv(B, t) == 0.01 * B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m)
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
```

Alternatively, option 2 provides a more compact avenue.
-
@odow is there a way to do this in JuMP.jl with:
-
No. @pulsipher has done an excellent job adding these extensions to InfiniteOpt instead...
-
@pulsipher can I help you add this econ example to the docs?
-
@azev77 Ya, that would be great. I think this would make a nice addition to the Examples page in the docs. Please refer to the contribution guide (https://github.com/pulsipher/InfiniteOpt.jl/blob/master/CONTRIBUTING.md) for info on how to implement this and make a pull request. You can follow up here with any questions you have.
-
Great.
Are these possible yet?
-
Unbounded infinite domains can be defined directly. In this case, the only integral evaluation method we support is Gauss–Laguerre quadrature (see the developmental docs: https://pulsipher.github.io/InfiniteOpt.jl/dev/guide/measure/#Evaluation-Methods). However, this does not work properly on the current release; it will be fixed in the next release, coming in about 2 weeks. I am not quite sure how to enforce the limit constraint; perhaps we could try it via a bounded constraint with a sufficiently large t. My experience with infinite time horizon problems is limited, but I can look further into how to properly model it when I have some free time later this week.
-
@azev77 Sorry that it's been a little while, but I finally finished releasing the new version and am now taking another look at this. First, the new version has some breaking syntax changes for better long-term stability, so here is the updated finite horizon case (be sure to update to the latest version first):

```julia
using InfiniteOpt, Ipopt, Plots

discount(t) = exp(-0.01 * t)  # this will work as an additional weight function in the integral
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@variable(m, B, Infinite(t))
@variable(m, 0 <= c <= 99, Infinite(t))
@objective(m, Min, integral((c - 100)^2, t, weight_func = discount))
@constraint(m, B == 100, DomainRestrictions(t => 0))
@constraint(m, B == 0, DomainRestrictions(t => 10))
@constraint(m, c1, deriv(B, t) == 0.01 * B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m)
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
```

Now we can try to implement the infinite horizon case:

```julia
using InfiniteOpt, Ipopt, Plots

int_discount(t) = exp(0.99 * t)  # additional weight function for the integral (will be multiplied by e^-t by Gauss-Laguerre)
discount(t) = exp(-0.01 * t)     # weight function for parameter function in the limit constraint
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, Inf])
@variable(m, B, Infinite(t))
@variable(m, 0 <= c <= 99, Infinite(t))
@parameter_function(m, dis == discount(t))
@objective(m, Min, integral((c - 100)^2, t, eval_method = GaussLaguerre(), num_nodes = 100, weight_func = int_discount))
@constraint(m, B == 100, DomainRestrictions(t => 0))
@constraint(m, dis * B == 0, DomainRestrictions(t => supports(t)[end]))  # not sure what u'(c_t) is...
@constraint(m, c1, deriv(B, t) == 0.01 * B - c)
optimize!(m)
termination_status(m)
opt_obj = objective_value(m)
c_opt, B_opt = value(c), value(B)
ts = supports(t)
plot(ts, B_opt, lab = "B: wealth balance")
plot!(ts, c_opt, lab = "c: consumption")
```

Let me know if you have any further questions. More information about the various integral evaluation methods is provided here. As for adding an example to the docs, the documentation now details how to do this here.
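A note on where `int_discount(t) = exp(0.99 * t)` comes from: Gauss–Laguerre quadrature approximates integrals of the form $\int_0^\infty e^{-t} g(t)\,dt$, so the discounted objective is handled by splitting the discount factor:

$$\int_0^\infty e^{-0.01 t}\, f(t)\,dt \;=\; \int_0^\infty e^{-t}\,\bigl[e^{0.99 t} f(t)\bigr]\,dt \;\approx\; \sum_{j=1}^{N} w_j\, e^{0.99 t_j} f(t_j),$$

where $(t_j, w_j)$ are the Gauss–Laguerre nodes and weights; the extra $e^{0.99 t}$ factor is exactly the `weight_func` supplied above.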
-
Reminder to self:

```julia
y_sampled = [(t) -> y_analytical(t, randn()) for i = 1:n]
```

In objective replace Min w/ Max.
-
Also, for simplicity in the example I just let
-
In the example in the docs, we use `Max` & `u = -(c - k)^2`: https://pulsipher.github.io/InfiniteOpt.jl/stable/examples/Optimal%20Control/consumption_savings/
-
Yep, I was mistaken. Thanks for the clarification. Also, an easier way to handle the geometric Brownian motion and generate the samples would be to use DifferentialEquations.jl with their noise processes: https://diffeq.sciml.ai/stable/features/noise_process/#noise_process

For this example, we would have the following:

```julia
using InfiniteOpt, Ipopt, DifferentialEquations

# Define parameters
ρ = 0.020             # discount rate
k = 90.0              # utility bliss point
T = 10.0              # life horizon
r = 0.05              # interest rate
dt = 0.1              # time step size TODO replace as wanted
ts = collect(0:dt:T)  # time support points
B0 = 100.0            # endowment
u(c) = -(c - k)^2          # utility function
discount(t) = exp(-ρ * t)  # discount function

# Define the stochastic income info
n = 100     # define number of samples wanted TODO replace with actual number wanted
y0 = 10.0   # TODO replace with actual value
μ = 0; σ = 0.1  # TODO replace with actual values
y_model = GeometricBrownianMotionProcess(μ, σ, 0.0, y0, 1.0)  # TODO replace Brownian params as wanted
y_sim = NoiseProblem(y_model, (0.0, T))
y_samples = Vector{Function}(undef, n)
for i in eachindex(y_samples)
    y_scenario = DifferentialEquations.solve(y_sim; dt = dt)
    y_samples[i] = t -> y_scenario(t)[1]
end

# Define Model
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, T], supports = ts)
@parameter_function(model, y[i = 1:n] == y_samples[i](t))
@variable(model, B[1:n], Infinite(t))  # have a variable for each sample of y
@variable(model, 0 <= c[1:n] <= 99, Infinite(t))
@objective(model, Max, 1/n * sum(∫(u(c[i]), t, weight_func = discount) for i in 1:n))  # use sum for expectation over y
@constraint(model, [i = 1:n], B[i](0) == 100)
@constraint(model, [i = 1:n], B[i](T) == 0)
@constraint(model, [i = 1:n], ∂(B[i], t) == r * B[i] + y[i] - c[i])

# Solve the model and get the results
optimize!(model)
opt_obj = objective_value(model)
```
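As an aside, if one wants to avoid the DifferentialEquations.jl dependency, a geometric Brownian motion path can also be sampled directly with a hand-rolled Euler–Maruyama step (a sketch with illustrative parameter values mirroring the ones above; `sample_gbm` is a hypothetical helper, not an InfiniteOpt or DifferentialEquations API):

```julia
using Random

# Euler–Maruyama sampler for geometric Brownian motion dy = μ*y*dt + σ*y*dW.
# A sketch of an alternative to GeometricBrownianMotionProcess, not a drop-in
# replacement (no exact bridge/interpolation like the NoiseProcess machinery).
function sample_gbm(y0, μ, σ, T, dt; rng = Random.default_rng())
    ts = 0:dt:T
    y = Vector{Float64}(undef, length(ts))
    y[1] = y0
    for i in 2:length(ts)
        dW = sqrt(dt) * randn(rng)  # Brownian increment over one step
        y[i] = y[i - 1] + μ * y[i - 1] * dt + σ * y[i - 1] * dW
    end
    return collect(ts), y
end

ts, y = sample_gbm(10.0, 0.0, 0.1, 10.0, 0.1)

# Piecewise-constant lookup so the path can be used like y_samples[i] above
y_sample = t -> y[clamp(round(Int, t / 0.1) + 1, 1, length(y))]
```

Calling `sample_gbm` n times would populate the same kind of `y_samples` vector of functions used in the model above.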
-
WOW! Amazing! Check out the pics: (note how smooth consumption is relative to income/wealth, as it should be!) I'll compare to closed forms later... This is great because economists usually have to solve non-linear PDEs, like Ben Moll & @matthieugomez's EconPDEs.jl. A few comments.
-
I am glad it worked out nicely!
It would make sense that increasing the noise would make the ODEs harder to solve. You can try adjusting the sample/support amounts to see if that helps. For increased speed, I also noticed that this problem is fully decoupled, which means you could instead solve a separate model for each y sample and then aggregate them together (these subproblems could even be solved in parallel). This means that the model portion of the above code can be rewritten:

```julia
# Set storage arrays
Bs = zeros(n, length(ts))
cs = zeros(n, length(ts))
objs = zeros(n)

# Solve each subproblem
for i in eachindex(y_samples)
    model = InfiniteModel(Ipopt.Optimizer)
    @infinite_parameter(model, t in [0, T], supports = ts)
    @parameter_function(model, y == y_samples[i](t))
    @variable(model, B, Infinite(t))
    @variable(model, 0 <= c <= 99, Infinite(t))
    @objective(model, Max, ∫(u(c), t, weight_func = discount))
    @constraint(model, B(0) == 100)
    @constraint(model, B(T) == 0)
    @constraint(model, ∂(B, t) == r * B + y - c)
    optimize!(model)
    Bs[i, :] = value(B)
    cs[i, :] = value(c)
    objs[i] = objective_value(model)
end

# Compute the overall objective
expect_obj = 1 / n * sum(objs)
```

The framework above is quite general where
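Since each subproblem is independent, the loop above could in principle be run in parallel with `Threads.@threads`. A minimal sketch of the pattern (with a hypothetical `solve_subproblem` standing in for the per-sample InfiniteModel code above):

```julia
using Base.Threads

# Hypothetical stand-in: replace with building and solving the i-th
# InfiniteModel as in the loop above and returning its objective value.
solve_subproblem(i) = float(i)^2

n = 8
objs = zeros(n)
@threads for i in 1:n
    # Each iteration writes to a distinct slot of a preallocated array,
    # so no locking is needed.
    objs[i] = solve_subproblem(i)
end
expect_obj = 1 / n * sum(objs)
```

Run Julia with `julia --threads=auto` (or set `JULIA_NUM_THREADS`) for the loop to actually use multiple threads.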
-
Can you explain why it should be? If you don't know the future realization of the process, how do you know the intercept and slope of the consumption path needed to end the time period with 0 wealth? This is only possible if you have perfect foresight of the process. Here's what I get with SDDP.jl: Your consumption is dependent on your wealth and current income, so very much not smooth.
-
@odow Optimal consumption should itself be a stochastic process, a function of stochastic income/wealth.
-
@odow I think the difference is in how these problems are modeled. In the InfiniteOpt formulation, we treat the uncertainty y as a fixed geometric Brownian process and then solve the problem with respect to sampled trajectories. In other words, we essentially have a generalization of a 2-stage program where we have a stochastic process (giving random time-trajectory realizations) instead of a traditional distribution. This means that we treat the uncertainty propagation jointly, which implicitly assumes that we know the stochastic process uncertainty a priori and that the decisions we make will not change it. Hence, in this case y is the known stochastic process uncertainty, and B & c are the decision functions we are shaping (and denote stochastic processes in and of themselves).

In the SDDP formulation, you are using a multi-stage stochastic program which naturally models the uncertainty conditionally (i.e., following a discrete scenario tree). This of course operates under different assumptions, leading to the different behavior of c. For reference, here is the comparable output of InfiniteOpt:

These differences highlight how domain-correlated uncertainty is modeled quite differently across various fields. From my research, it seems that a variety of fields (e.g., space-time statisticians, economics, etc.) that use stochastic ODEs and/or random field theory tend to treat the uncertainty jointly, in like manner to the above formulation, whereas the stochastic programming community tends toward multi-stage formulations that are based on scenario trees. The question here becomes: how do we "best" model the uncertainty in these systems? Handling it jointly doesn't account for conditional decision making, which may be desired; on the other hand, to my knowledge, multi-stage models don't generalize to continuous time and it is not obvious how they could be applied to other domains (e.g., spatial uncertainty).

These are the types of questions that are coming up in developing the unifying abstraction for infinite-dimensional optimization problems that is behind InfiniteOpt.
-
Before I forget, it would be nice to try an example with a bang-bang solution at some point: http://www.sfu.ca/~wainwrig/Econ331/Chapter20-bangbang-example.pdf
-
Reminder to self: https://academic.oup.com/ectj/article-abstract/23/3/S1/5802896
-
@azev77 If it is alright with you, I would like to close this question here and recommend that any further discussion happen in our new discussion forum instead: https://github.com/pulsipher/InfiniteOpt.jl/discussions.
-
Continued in #194.
-
Just a comment in passing: While I haven't looked at the details of the modelling strategy, it seems to me that consumption is too smooth here. In the typical setup, consumption satisfies a form of Milton Friedman's "permanent income theory". Now the devil is in the details: how the stochastic process is set up matters a lot, obviously, and how expectations are formed matters too; still, one would expect consumption to fluctuate according to the "permanent" shocks, while absorbing the "transitory" shocks. In the case of a finite horizon, the marginal propensity to consume would typically be expected to rise non-linearly near the end of the horizon (the finite-life consumer gulps all the remaining wealth before blissfully expiring). So perhaps the assumptions made here are a little different, or the parameter values are not typical... Here's a clear overview of the model assumptions with some Julia code, for comparison: https://julia.quantecon.org/dynamic_programming/perm_income.html
-
@ptoche The smoothness of the above answer can likely be attributed to the anticipative nature of the simple discretization approach used to solve it. Admittedly, this simplifying assumption is not satisfactory for a number of problems such as this one. A nonanticipative approach should probably be used instead. This would entail correctly applying stochastic calculus (e.g., Itô calculus) to integrate the SDEs (the simplifying assumption above negated this consideration). Currently, InfiniteOpt.jl does not implement this, but we have plans to add modeling capabilities for general random fields (e.g., stochastic processes) along with solution capabilities that can employ nonanticipative stochastic calculus approaches.
-
Hi & thank you for this package!
I'd like to use it to solve a very simple (routine) problem in economics.
This is a consumption-saving problem.
The household lives during time: t ∈ [0, 10]
The household's wealth @ time t is: B(t)
The household's choice is consumption: c(t)
The household is endowed w/ $100: B(0) = 100
The household cannot die in debt: B(10) = 0
I have the closed form solution for comparison.
Is it possible to solve this problem w/ your package?
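In continuous time, this can be written as the optimal control problem (a sketch; $u$ is the utility function, $\rho$ the discount rate, and $r$ the interest rate, with the quadratic utility $u(c) = -(c - k)^2$ used elsewhere in the thread as one concrete choice):

$$\max_{c(\cdot)} \int_0^{10} e^{-\rho t}\, u(c(t))\, dt \quad \text{s.t.} \quad \dot B(t) = r\,B(t) - c(t), \quad B(0) = 100, \quad B(10) = 0.$$

Pontryagin's maximum principle with the current-value Hamiltonian $H = u(c) + \lambda\,(rB - c)$ gives $u'(c(t)) = \lambda(t)$ and $\dot\lambda = (\rho - r)\lambda$, hence $u'(c(t)) = \lambda(0)\,e^{(\rho - r)t}$. For the quadratic utility this yields $k - c(t) = (k - c(0))\,e^{(\rho - r)t}$, which, combined with the budget constraint and boundary conditions, pins down the closed form (ignoring the bounds on $c$).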