
Does MILP agent execute any action? #2

Open
filipesaraiva opened this issue Aug 28, 2024 · 2 comments
@filipesaraiva

Hello, firstly thank you for this project.

We are running the MILP agent against different grid2op instances (e.g. rte_case5, l2rpn_wcci_2022, l2rpn_neurips_2020) for several timesteps (100), but the agent never executes any action on the environment.

Maybe we are making a mistake somewhere. Could you point us to an instance and timestep at which the MILP agent will perform an action?

Thank you again and best regards.

@BDonnot (Collaborator) commented Aug 29, 2024

Hello,

100 steps might not be enough: the agent might not encounter a "hard enough" situation in that window. Try running it for an entire day (288 steps for these environments, which use 5-minute timesteps).

Let us know if it still doesn't work.

Best

@DEUCE1957 commented Nov 5, 2024

Hi, I'm having the same issue: the agent takes the DoNothing action at every timestep. I'm using the CBC solver on 'l2rpn_case14_sandbox' with the 'global_topology' agent type. It manages to reach 1091 timesteps (all DoNothing) before failing on episode '0000'.

It is hard to say whether this means the solver is not working or that no topological action can rescue the system at that point; I may try an exhaustive search on that timestep later.

import logging
import numpy as np
import matplotlib.pyplot as plt
import milp_agent
import grid2op
from tqdm.notebook import trange
from grid2op.PlotGrid import PlotMatplot
from milp_agent.agent import MILPAgent

logger = logging.getLogger("milp_agent")

env = grid2op.make("l2rpn_case14_sandbox")
plotter = PlotMatplot(env.observation_space, gen_name=False, load_name=False, dpi=150)

# >> Setup MILP Agent <<
env.set_id("0000")
init_obs = env.reset()
print(f"Solving '{env.name}' (Episode: {env.chronics_handler.get_name()}) with 'global_topology'")

RHO_LIMIT = 0.95
margins = RHO_LIMIT * np.ones(init_obs.n_line)
solver_type = milp_agent.MIP_CBC

agent = MILPAgent(env,
                  agent_solver="global_topology",
                  solver_name=solver_type,
                  max_overflow_percentage=margins,
                  zone_instance=None,
                  clustering_path=None,
                  zone_level=0,
                  logger=logger)

# >> Use Agent <<
done = False
do_nothing = env.action_space({})
initial_timestep = 1090  # Jump to just before the failure at timestep 1091
env.fast_forward_chronics(initial_timestep)
obs = env.get_obs()  # current observation after fast-forwarding
prev_obs = obs
for t in trange(initial_timestep, env.max_episode_duration()):
    action = agent.act(obs, reward=0.0, done=False)  # act on the current observation
    obs, reward, done, info = env.step(action)

    if action.as_dict() != do_nothing.as_dict():
        # This code is never reached
        print(action)
        plotter.plot_obs(obs, figure=plt.figure(figsize=(12, 8)))
        plt.show()
    if done:
        break
    prev_obs = obs
plotter.plot_obs(prev_obs)  # timestep before GameOver
plotter.plot_obs(obs)       # timestep of GameOver
plt.show()

Result (timestep before GameOver): [image]
Result (timestep of GameOver): [image]
