Power networks transport electricity across states, countries, and even continents. They are the backbone of power distribution, playing a central economic and societal role by supplying reliable power to industry, services, and consumers. Their importance is even more critical today as we transition towards a more sustainable, carbon-free economy and concentrate energy distribution in the form of electricity. Problems within the power grid range from transient brownouts to complete electrical blackouts, which can cause significant economic and social disruption, de facto freezing society. Grid operators remain responsible for ensuring that a reliable supply of electricity is provided everywhere, at all times. With the advent of renewable energy, electric mobility, and limitations placed on new grid infrastructure projects, the task of controlling existing grids is becoming increasingly difficult, forcing grid operators to do “more with less”. This challenge tests the potential of AI to address this important real-world problem for our future.
The goal of this challenge is to test the potential of Reinforcement Learning (RL) to control electrical power transmission in the most cost-effective manner, while keeping people and equipment safe from harm. Solving this challenge may have very positive impacts on society, as governments move to decarbonize the electricity sector and to electrify other sectors in order to reach IPCC climate goals. Existing software, computational methods, and optimal power flow solvers are not adequate for real-time network operation over short temporal horizons within reasonable computation time. With recent changes in electricity generation and consumption patterns, system operation is becoming more of a stochastic than a deterministic control problem. Overcoming these complexities requires new computational methods, and the intention of this challenge is to explore RL as one such method for electricity network control. There may be under-utilized, cost-effective flexibility in the power network that RL techniques can identify and capitalize on, which human operators and traditional solution techniques are unaware of or unaccustomed to.
On the way towards a sustainable future, this competition aims at unleashing the power of reinforcement learning for a real-world industrial application: controlling electric power transmission and moving closer to truly “smart” grids by using underutilized flexibilities. In track 1, develop your agent to be robust to unexpected events and to keep delivering reliable electricity everywhere, even in difficult circumstances. In track 2, develop your agent to adapt to new energy production in a grid with an increasing share of less controllable renewable energies over the years.
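To make the control setting concrete, the sketch below shows the generic agent–environment loop that such an RL formulation assumes. Everything here is a hypothetical stand-in: `ToyGridEnv`, its observation of line loadings, and the `DoNothingAgent` baseline are illustrative inventions, not the competition's actual environment or API.

```python
import random

class ToyGridEnv:
    """Hypothetical toy stand-in for a power-grid environment with a Gym-like API."""

    def __init__(self, horizon=5, seed=0):
        self.horizon = horizon          # episode length in time steps
        self.rng = random.Random(seed)  # fixed seed for reproducibility
        self.t = 0

    def reset(self):
        self.t = 0
        # Observation: relative loading of three transmission lines.
        return {"line_loads": [self.rng.random() for _ in range(3)]}

    def step(self, action):
        self.t += 1
        obs = {"line_loads": [self.rng.random() for _ in range(3)]}
        # Reward penalizes the most heavily loaded line (values near 1.0
        # would indicate a line close to its thermal limit).
        reward = -max(obs["line_loads"])
        done = self.t >= self.horizon
        return obs, reward, done

class DoNothingAgent:
    """Baseline agent: take no grid action at any step."""

    def act(self, obs):
        return None  # the "do nothing" action

env = ToyGridEnv()
agent = DoNothingAgent()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step(agent.act(obs))
    total_reward += reward
print(f"episode finished after {env.t} steps, return = {total_reward:.3f}")
```

A "do nothing" baseline like this is a common reference point in grid control: a competitive agent must beat it by intervening only when interventions actually reduce overloads.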
The Learning to Run a Power Network challenge was first presented at the CiML workshop at NeurIPS 2018, introducing the upcoming challenges of power grid operation.