Article

Simplified A-Diakoptics for Accelerating QSTS Simulations

Distribution Planning and Operations, EPRI, Knoxville, TN 37932, USA
* Author to whom correspondence should be addressed.
Submission received: 13 January 2022 / Revised: 1 March 2022 / Accepted: 7 March 2022 / Published: 11 March 2022
(This article belongs to the Special Issue Power System Modeling, Analysis and Simulation)

Abstract

The spread of distributed energy resources (DERs) across the distribution power system demands complex planning studies based on quasi-static time-series (QSTS) simulations, which require a significant amount of computing time to complete, leading planners to look for alternatives to QSTS. Diakoptics based on actors (A-Diakoptics) is a technique for accelerating simulations that combines two computing techniques from different engineering fields: diakoptics and the actor model. Diakoptics is a mathematical method for tearing networks, reducing their complexity by using smaller subcircuits that can be solved independently. The actor model coordinates the interaction between these subcircuits and their control actions, given the pervasive inconsistency that can be found when dealing with large-scale models. A-Diakoptics simplifies the power flow problem to improve the simulation time, leading to faster QSTS simulations. This paper presents a simplified version of the A-Diakoptics algorithm for modernizing sequential power simulation tools to use parallel processing. This simplification eliminates critical points found in previous versions of A-Diakoptics, improving the performance of the algorithm and facilitating its implementation for QSTS simulations. The performance of the new version of A-Diakoptics is evaluated by integrating it into EPRI's open-source simulator OpenDSS, which runs on standard computing architectures and is publicly available.

1. Introduction

Quasi-static-time-series (QSTS) simulation is a valuable tool for assessing the behavior of distribution power systems over time [1]. By performing daily, yearly, and other time-based simulations, power engineers can characterize the impact of time-varying power devices such as solar photovoltaic generation (PV), energy storage, loads, generators, voltage regulators, and shunt capacitors, among other control devices within the distribution system.
The proliferation of distributed energy resources (DERs) has generated the need for studies such as hosting capacity, interconnection studies, and microgrids. These studies are QSTS based, and depending on the time-step resolution and simulation duration, the computational burden can be considerable, requiring significant computing time to complete; this has led some planners and analysts to disregard QSTS simulations as an option for planning and other studies.
On the other hand, current computing architectures are characterized by multicore processors, a feature that allows computer applications to be accelerated by distributing tasks across multiple cores working concurrently. This is conceptually straightforward, but it requires applications to be compatible with the parallel programming paradigm.
In the power system simulations domain, there are several techniques for making simulations compatible with parallel processing. Some of these techniques seek to reduce the computational burden by dividing the total simulation time across multiple cores, an approach known as temporal parallelization [2].
Other techniques link the computational burden to the circuit complexity, which is addressed by tearing the interconnected circuit into multiple subcircuits. These subcircuits are solved separately on multicore computers to obtain the solution for the interconnected circuit. This approach is known as spatial parallelization and covers multiple techniques for tearing the interconnected model. Depending on the power flow problem formulation, these techniques fall into two large groups: bordered block diagonal matrix (BBDM) methods [3] and piecewise methods; the latter group includes diakoptics, which is the basis for the model presented in this paper, called diakoptics based on actors (A-Diakoptics) [4,5]. A more detailed description and bibliography on the differences between these model-partitioning technique groups can be found in [4,6].
A-Diakoptics combines two computing techniques from different engineering fields: diakoptics and the actor model. Diakoptics is a mathematical method for tearing networks [7,8]. Initially proposed by Gabriel Kron and later used and modified by other authors, diakoptics is a technique for tearing large physical circuits into several subcircuits to reduce the modeling complexity and accelerate the solution of the power flow problem using a computer network. In previous publications, the authors of this paper offered an extensive literature review on diakoptics and provided a comprehensive list of references for the reader [4,5,6].
The actor model [9,10] is used to coordinate the interaction between subcircuits. The actor model is an information model for dealing with inconsistency robustness in parallel, concurrent, and asynchronous systems. It permits multiple processes to execute in parallel, sharing information through messages. As a result, information consistency can be guaranteed, avoiding common issues in parallel processing such as race conditions and memory underutilization.
A-Diakoptics seeks to simplify the power flow problem to achieve a faster solution at each simulation step; these per-step savings accumulate into a substantial reduction in the total QSTS simulation time.
This paper presents a simplified algorithm for implementing A-Diakoptics in standard multicore computers. This simplification eliminates critical points found in previous versions of A-Diakoptics, improving the performance of the algorithm and facilitating its implementation to perform QSTS simulations. To evaluate the performance of the new version of A-Diakoptics, this method was integrated into EPRI’s open-source simulator OpenDSS, which can be executed in standard computing architectures and is publicly available.

2. Background

2.1. The Power Flow Problem in OpenDSS

While the power flow problem is probably the most common problem solved with the program, OpenDSS is not best characterized as a power flow program. Its heritage is in general-purpose power system harmonics analysis tools; thus, it works differently than most existing power flow tools. This heritage also gives it some unique and powerful capabilities for modeling complex electrical circuits. The program was originally designed to perform nearly all aspects of distribution planning for distributed generation (DG), which includes harmonics analysis. It is relatively easy to make a harmonics analysis program solve a power flow, while it can be quite difficult to make a power flow program perform harmonics analysis.
The OpenDSS program is designed to perform a basic distribution-style power flow in which the bulk power system is the dominant source of energy. However, it differs from the traditional radial circuit solvers in that it solves networked (meshed) distribution systems as easily as radial systems. It is intended to be used for distribution companies that may also have transmission or subtransmission systems. Therefore, it can also be used to solve small- to medium-sized networks with a transmission-style power flow.
Nearly all variables in the formulation result in a matrix or an array (vector) to represent a multiphase system. Many of the variables are complex numbers representing the common phasor notation used in frequency-domain AC power system analysis.
OpenDSS uses a standard nodal admittance formulation that can be found documented in many basic power system analysis texts. The textbook by Arrillaga and Watson [11] is useful for understanding this because it also develops the admittance models for harmonics analysis similarly to how OpenDSS is formulated. A primitive admittance matrix, Yprim, is computed for each circuit element in the model. These small matrices are used to construct the main system admittance matrix, Ysystem, that knits the circuit model together. The solution is mainly focused on solving the nonlinear system admittance equation of the following form:
$$ I_{PC}(E) = Y_{system} E \qquad (1) $$
where IPC(E) values are the compensation currents from power conversion (PC) elements in the circuit. The currents injected into the circuit from the PC elements, IPC(E), are functions of voltage, as indicated, and represent the nonlinear portion of the currents from elements such as load, generator, PV system, and storage. There are several ways this set of nonlinear equations could be solved. The most popular way in OpenDSS is a simple fixed-point method that can be written concisely [12].
$$ E^{n+1} = [Y_{system}]^{-1} I_{PC}(E^{n}), \quad n = 0, 1, 2, \ldots \text{ until converged} \qquad (2) $$
From (2), it can be seen that at every solution step, an expression of the following form must be evaluated:
$$ E = [Y_{system}]^{-1} I \qquad (3) $$
Since the introduction of the parallel processing suite [2], the solver in OpenDSS has been called an actor, and it is the basis for the A-Diakoptics analysis.
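As an illustration, the fixed-point iteration in (2) can be sketched in a few lines of numpy. The single-node circuit, the injection function, and all names below are hypothetical and are not part of the OpenDSS API; this is only a minimal sketch of the solution technique described above.

```python
import numpy as np

def solve_fixed_point(Ysystem, i_pc, E0, tol=1e-8, max_iter=100):
    """Fixed-point iteration E[n+1] = [Ysystem]^-1 I_PC(E[n]) (Eq. (2)).

    i_pc is a callable returning the voltage-dependent injection
    currents of the PC elements (loads, generators, PV, storage).
    """
    Yinv = np.linalg.inv(Ysystem)   # factorized once, reused every pass
    E = E0.copy()
    for _ in range(max_iter):
        E_next = Yinv @ i_pc(E)
        if np.max(np.abs(E_next - E)) < tol:
            return E_next
        E = E_next
    raise RuntimeError("fixed point did not converge")

# Hypothetical single-node example: a stiff current source feeding a
# constant-power load S, so I_PC(E) = I_src - conj(S / E).
Y = np.array([[1.0 + 0j]])
S, I_src = np.array([1.0 + 0.5j]), np.array([10.0 + 0j])
E = solve_fixed_point(Y, lambda v: I_src - np.conj(S / v), np.array([10.0 + 0j]))
```

The iteration converges quickly here because the load current is small relative to the source current, which is the typical situation for well-conditioned distribution feeders.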

2.2. The Simplified A-Diakoptics Method

The initial form of A-Diakoptics was developed in 2018 as the result of EPRI’s participation in the SunShot National Laboratory Multiyear Partnership Program (SuNLaMP). SuNLaMP is a government initiative that brought together national laboratories (such as Sandia National Laboratories and National Renewable Energy Laboratory), the private research sector (EPRI), academia (Georgia Institute of Technology), and private industry (CYME International T&D) to investigate novel technologies for accelerating QSTS simulations in power systems using modern computing architectures [13]. From there, the general expression for describing the interactions between the actors (subcircuits) and the coordinator is as follows:
$$ E_T = Z_{TT} I_0(n-1) - Z_{TC} Z_{CC}^{-1} Z_{CT} I_0(n) \qquad (4) $$
where ET is the total solution of the system (the voltages at all the nodes of the system), I0 is the vector containing the currents injected by the PC elements, and n − 1 and n are the discrete time instants at which the vector of currents is calculated [6]. ZTT is the trees matrix and contains the inverted admittance matrices of all the subcircuits contained in the interconnected system after the partitioning. The form of ZTT is as follows:
$$ Z_{TT} = \begin{bmatrix} [Y_1]^{-1} & & 0 \\ & \ddots & \\ 0 & & [Y_n]^{-1} \end{bmatrix}, \quad n = 1, 2, \ldots, \text{number of subcircuits} \qquad (5) $$
The subcircuits contained in ZTT are not interconnected in any way; the connections are made through external interfaces that define the relationships between them. ZTC, ZCT, and ZCC are interfacing matrices for connecting the separate subsystems using a graph defined by the contours matrix (C), as described in [6]. The term ZTT I0(n−1) corresponds to the partial solutions delivered when solving the subsystems, and ZTC ZCC−1 ZCT I0(n) is the interconnection term that completes the total solution to the power flow problem.
The form of ZTT implies that multiple OpenDSS solvers can find independent partial solutions that, when complemented with another set of matrices, yield the voltages across the interconnected power system. However, in this approach, the interconnection matrix is a dense matrix that operates on the injection currents vector provided by the latest solution of the subsystems.
Nowadays, sparse matrix solvers are very efficient, so operating with a dense matrix is undesirable. The aim of this part of the project is to simplify the expression that defines the interconnection matrix, finding a sparse equivalent that can be solved using the KLUSolve [14] module already employed in OpenDSS.
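To illustrate why the block-diagonal form of ZTT lends itself to independent sparse solves, consider the following sketch. The two subcircuit matrices are hypothetical, and scipy's sparse LU factorization stands in for the KLUSolve module used by OpenDSS:

```python
import numpy as np
from scipy.sparse import csc_matrix, block_diag
from scipy.sparse.linalg import splu

# Two hypothetical subcircuit admittance matrices (already torn apart).
Y1 = csc_matrix(np.array([[ 4.0, -2.0],
                          [-2.0,  3.0]]))
Y2 = csc_matrix(np.array([[ 5.0, -1.0],
                          [-1.0,  2.0]]))

# Each subcircuit keeps its own sparse factorization, so the solves
# below are independent and could run on separate actors/cores.
lu1, lu2 = splu(Y1), splu(Y2)
I0 = np.array([1.0, 0.0, 0.5, 0.25])          # stacked injection currents

E_parts = np.concatenate([lu1.solve(I0[:2]), lu2.solve(I0[2:])])

# Solving the stacked block-diagonal system gives the same result:
E_whole = splu(csc_matrix(block_diag([Y1, Y2]))).solve(I0)
assert np.allclose(E_parts, E_whole)
```

Because ZTT never couples the blocks, the per-block factorizations can be computed and reused independently, which is what the actor-based solvers exploit.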
Assuming that the subsystems' solutions and the interconnection matrix operation occur at the same time instant and fit into the same time window (ideally), A-Diakoptics can be reformulated as
$$ E_T = Z_{TT} I_0(n) - Z_{TC} Z_{CC}^{-1} Z_{CT} I_0(n) \qquad (6) $$
Additionally, in OpenDSS, the interconnected feeder is solved using
$$ E_T = [Y_{II}]^{-1} I_0 \qquad (7) $$
where YII is the YBus matrix that describes the interconnected feeder. Equating Equations (6) and (7), the expression is transformed into
$$ [Y_{II}]^{-1} I_0(n) = Z_{TT} I_0(n) - Z_{TC} Z_{CC}^{-1} Z_{CT} I_0(n) \qquad (8) $$
This new expression can be taken to the admittance domain since ZTT is built using the YBus matrices that describe each one of the subsystems created after tearing the interconnected feeder. The new equation is reformulated as
$$ [Y_{II}]^{-1} I_0(n) = [Y_{TT}]^{-1} I_0(n) - Z_{TC} Z_{CC}^{-1} Z_{CT} I_0(n) \qquad (9) $$
$$ [Y_{II}]^{-1} = [Y_{TT}]^{-1} - Z_{TC} Z_{CC}^{-1} Z_{CT} \qquad (10) $$
Then, the interconnection matrix can be reformulated as
$$ Z_{TC} Z_{CC}^{-1} Z_{CT} = [Y_{XX}]^{-1} = [Y_{TT}]^{-1} - [Y_{II}]^{-1} \qquad (11) $$
This new formulation proposes that the interconnection matrix is equal to an augmented representation of the link branches between the subsystems. To support this conclusion, take the simplification proposed by Happ in [15], where:
$$ e_c = Z_{CT} I_0(n) \qquad (12) $$
$$ Z_{CT} = C^{T} Z_{TT} \;\Rightarrow\; e_c = C^{T} Z_{TT} I_0(n) \qquad (13) $$
This expression is similar to the partial solution formulation of the Diakoptics equation but includes the contours matrix; then, if the partial solution is called ET(0), it is possible to define
$$ e_c = C^{T} E_T(0) \qquad (14) $$
On the other hand, ZTC, which is the non-conjugate transpose of ZCT, is calculated as follows:
$$ Z_{TC} = Z_{TT} C \qquad (15) $$
Replacing the new equivalences for ZCT and ZTC, the equation proposed in (4) is reformulated as follows:
$$ E_T = Z_{TT} I_0(n) + Z_{TT} I_c \qquad (16) $$
Or, in terms of sparse matrices,
$$ E_T = [Y_{TT}]^{-1} I_0(n) + [Y_{TT}]^{-1} I_c = [Y_{TT}]^{-1} \left( I_0(n) + I_c \right) \qquad (17) $$
where:
$$ I_c = -C Z_{CC}^{-1} C^{T} E_T(0) \qquad (18) $$
In this equation, the partial and complementary solutions use information calculated from the interconnection matrix ZCC. ZCC is a small matrix that contains information about the link branches only, so the calculation of IC does not represent a significant computational burden in medium- and large-scale circuits. With this approach, all the matrix calculations are made using sparse matrix solvers, reducing the computational burden.
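A minimal numeric sketch of this simplified update follows, for a hypothetical tear of a four-node system into two subsystems joined by a single link branch. All matrices and values are invented for illustration; the interconnected matrix is rebuilt as YII = YTT + y_link·C·Cᵀ, so the result can be checked directly against the interconnected solution:

```python
import numpy as np

# Hypothetical torn system: two 2-node subcircuits plus one link branch
# connecting node 1 (subcircuit 1) to node 2 (subcircuit 2).
Y1 = np.array([[ 4.0, -2.0], [-2.0,  3.0]])
Y2 = np.array([[ 5.0, -1.0], [-1.0,  2.0]])
Ytt = np.block([[Y1, np.zeros((2, 2))], [np.zeros((2, 2)), Y2]])

y_link = 2.5                                   # link-branch admittance
C = np.array([[0.0], [1.0], [-1.0], [0.0]])    # contours (incidence) matrix
Yii = Ytt + y_link * (C @ C.T)                 # interconnected feeder matrix

I0 = np.array([1.0, 0.2, 0.5, 0.25])           # PC-element injections

# Partial solutions from the independent subcircuit solvers:
Et0 = np.linalg.solve(Ytt, I0)                 # E_T(0) = [Y_TT]^-1 I_0
# Small dense link system: Z_CC = z_link + C^T Z_TT C
Zcc = (1.0 / y_link) + C.T @ np.linalg.solve(Ytt, C)
Ic = -C @ np.linalg.solve(Zcc, C.T @ Et0)      # complementary currents
Et = np.linalg.solve(Ytt, I0 + Ic)             # E_T = [Y_TT]^-1 (I_0 + I_c)

# Matches the interconnected solution (a Woodbury-type identity):
assert np.allclose(Et, np.linalg.solve(Yii, I0))
```

The ZCC system here is 1 × 1: one row and column per link branch, which is why the complementary solve stays cheap even for large feeders.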
The architecture for implementing this simplified approach considers the optimization of computational resources to reduce the overhead of the operations around the algorithm. This optimization covers memory handling, actor coordination, and the message structure between actors, and it is described in detail in [16]. Figure 1 describes the implementation of the simplified A-Diakoptics method in OpenDSS 9.3 and later.
Figure 1 presents how the main loops of the OpenDSS algorithm are decomposed into a set of distributed solvers implemented as actors. Each actor (including the coordinator) is executed on its own CPU core and has its own memory. All of the actors are executed in parallel and concurrently in an asynchronous parallel computing environment.
Actors remain on standby when not executing a job. A message sent to an actor triggers an event that sets the busy flag for the actor in the global context and leads the actor to perform the job given in the message. Once the actor has finished the job, it updates its state by marking a ready flag in the global context, making it visible to all other actors. Because the YBus matrices within the solver actors are smaller than that of the interconnected system, the solution time required for solving the subnetworks is expected to be substantially lower than that required for solving the interconnected model, as discussed in [5,13]. The calculation of IC is very fast given the size of the mathematical problem, and moving values between actors is avoided by using pointers, adding a negligible overhead to the parallel solution process.
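The standby/busy/ready coordination described above can be sketched with threads and events. The class, message format, and solver function below are hypothetical and only illustrate the pattern, not the actual OpenDSS implementation:

```python
import threading
import queue

class SolverActor(threading.Thread):
    """Hypothetical subcircuit solver: waits on its inbox for a job,
    solves it, and raises a 'ready' event visible to the coordinator."""

    def __init__(self, name, solve_fn):
        super().__init__(name=name, daemon=True)
        self.inbox = queue.Queue()      # messages instead of shared state
        self.ready = threading.Event()  # the 'ready flag'
        self.result = None
        self._solve = solve_fn
        self.start()

    def run(self):
        while True:
            msg = self.inbox.get()      # standby until a job arrives
            if msg is None:
                return                  # shutdown message
            self.ready.clear()          # busy: flag cleared
            self.result = self._solve(msg)
            self.ready.set()            # ready: result now visible

# Coordinator: dispatch injections to all actors, then wait for all.
actors = [SolverActor(f"zone{i}", lambda x: 2 * x) for i in range(3)]
for i, a in enumerate(actors):
    a.inbox.put(i + 1)
for a in actors:
    a.ready.wait()
results = [a.result for a in actors]
for a in actors:
    a.inbox.put(None)
```

Message passing through per-actor queues is what avoids race conditions here: no two threads ever write the same state, matching the inconsistency-robustness argument made for the actor model above.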
With the architecture proposed in this implementation of A-Diakoptics, the algorithm becomes available for all QSTS simulation modes. The previous version of the algorithm was only available for yearly simulation. The new implementation is available for the snap, direct, yearly, daily, time, and duty cycle simulation modes in OpenDSS [17].
Centralizing the results on a single memory space (Actor 1) makes it possible to insert monitors and energy meters and request simulation information directly from the model using a single actor, a feature that was not available in the previous version of the A-Diakoptics suite.
The previous considerations are true only if the matrix ZCC is a fraction of the size of the largest subnetwork in the system, which is the one representing the largest computational burden in the parallel solver, as explained in [16].

3. Results

This section uses a test case, publicly available since October 2021 at https://sourceforge.net/p/electricdss/code/HEAD/tree/trunk/Version8/Distrib/Examples/ADiakoptics/EPRI_Ckt5-G/ (accessed on 1 January 2022), to illustrate the A-Diakoptics features implemented in OpenDSS v9.2 and later. The features discussed in this section are as follows:
  • Simulation fidelity;
  • Simulation performance.

3.1. The Test Case: EPRI Circuit 5

Circuit 5 and two other circuits (7 and 24) are models that EPRI provided to the IEEE test feeder database of the IEEE PES Distribution System Analysis Subcommittee [18]. These feeders are real feeder models from actual utilities that have been modified for public use and represent medium- and large-scale power systems. Figure 2 shows Circuit 5, and Table 1 presents the technical features of this model.
Figure 2 indicates the link branches for the two-zone and four-zone partitioning with blue and orange arrows, respectively. Table 2 and Table 3 present the partition statistics for two- and four-zone partitioning, respectively. The partition method for obtaining these statistics was the automated option through MeTIS [19]. For more extensive partitioning (five, six, and seven zones), the manual procedure specifying the link branches was used [17].
The partition statistics reveal a moderate imbalance in the case of two zones that is more pronounced in the case of four zones. The imbalances are expected given the representation of single-phase low-voltage networks within the model, making it more difficult for the tearing algorithm to balance the node distribution between zones. The calculation methods for the partition statistics and their interpretation details are presented in [17].

3.2. The Simulation Fidelity and Performance

Given the number of nodes present in this model, to study the simulation fidelity, monitors were located close to the feeder's head and at the feeder's end to check the voltage levels at those two points. The voltages were measured in all phases, and the error was estimated per phase in a yearly simulation (8760 h). Figure 3 and Figure 4 present the results obtained for two and four zones, respectively.
As expected, the simulation fidelity comparison shows that the results obtained with the new implementation differ from the original by a negligible margin. In this simulation, each load has a load profile assigned and, based on previous experiences with controllers that modify the local YBus matrices at the distributed solvers, the capacitor bank controllers were disabled.
The good fidelity when comparing the interconnected model and its partitioned counterparts is a good indicator for this implementation of A-Diakoptics, despite the issues with some controller types. Additionally, it confirms the method’s scalability for allowing a parallel simulation using multicore computing architectures.
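For reference, a per-phase error metric of the kind used in this comparison can be computed as follows. The monitor data below are synthetic, and the function name is hypothetical; this is only a sketch of how the yearly voltage series from the two models could be compared:

```python
import numpy as np

def per_phase_error(v_ref, v_adk):
    """Percent error per phase between the interconnected (reference)
    and A-Diakoptics voltage magnitudes; rows are time steps (8760 h),
    columns are phases p1, p2, p3."""
    err = 100.0 * np.abs(np.abs(v_adk) - np.abs(v_ref)) / np.abs(v_ref)
    return err.max(axis=0)              # worst-case error per phase

# Hypothetical monitor data: identical except a tiny perturbation.
rng = np.random.default_rng(0)
v_ref = 7200.0 * np.exp(1j * rng.uniform(0, 2 * np.pi, (8760, 3)))
v_adk = v_ref * (1 + 1e-6 * rng.standard_normal((8760, 3)))
assert per_phase_error(v_ref, v_adk).max() < 1e-3
```

Reporting the worst case per phase, rather than an average, is the more conservative choice when arguing that the partitioned solution matches the reference.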
The next step was to evaluate the simulation performance. For this evaluation, the controllers based on YBus matrix modification were disabled, following the experiences reported in [16].
The yearly simulations for Circuit 5 were conducted on a laptop with an Intel® Core™ i7-10750H CPU @ 2.60 GHz, 6 cores, 12 logical processors, and 16 GB RAM: a standard computer running a standard Windows 10 operating system. During the simulation, programs such as antivirus software, firewalls, and other business/system-related applications were running.
Figure 5 shows the solution times in microseconds for a yearly simulation with the interconnected model and its partitions, using two, four, five, six, and seven zones. The time was measured using the internal process timer included with OpenDSS.
This timer registers the time required for each operation within OpenDSS in real time. Figure 6 displays the simulation time reduction as a percentage of the simulation time for the reference model. The simulation computing time reduces significantly as the number of zones increases; however, the time reduction is the same for the six- and seven-zone partitions, suggesting that the maximum simulation time reduction has been reached at that point or that better tearing points need to be located across the feeder to achieve a further reduction.
Figure 7 presents the partition statistics for each case and indicates that the better the circuit reduction, the better the simulation time reduction. Nevertheless, when the circuit reduction is contrasted with the maximum and average imbalances, the parallel solvers are not as balanced as expected.
The imbalance level spikes when tearing the model into four zones and goes down slightly when using a larger number of zones. This experimentation signals that the parallel solvers do not necessarily have to be balanced to achieve a considerable simulation time reduction, even though a balanced state is a desirable condition.
Instead, this experiment reveals that simplifying the circuit model complexity requires understanding where the complexity resides in the model and what challenges exist when locating the tearing points. This criterion can be constructed based on the features of the areas that require more computing power to be solved, given the load models, generators, and topology hosted in each area. This topic is discussed in depth in [16].

4. Conclusions

Spatial parallelization has proven to be an effective technique for accelerating power system simulations by reducing the circuit model's complexity. The circuit model's complexity is reduced by tearing the model into several subnetworks that can be solved faster than the original interconnected model. The partial solutions are put together through a tensor-based interconnection approximation, which delivers an accurate solution that matches the original with a low error margin.
One method for performing spatial parallelization is Diakoptics, which, in combination with a framework for providing inconsistency robustness within a parallel computing architecture, is called A-Diakoptics. The previous implementation of this technique in EPRI’s open-source distribution simulator software OpenDSS provided limited access to this technique, given that the implementation aimed to cover the needs of a particular project.
Through the project presented in this document, the previous implementation of A-Diakoptics in OpenDSS was reformulated using a simplified approach, optimizing critical areas of the algorithm and the mathematical formulation. Many other improvements at the implementation level were performed in this new version of A-Diakoptics to facilitate access to electrical variables through monitors and meters.
This implementation of A-Diakoptics was validated using yearly QSTS simulations in terms of fidelity and performance, using the interconnected model as the reference. The simulation results revealed the benefits of modernizing sequential power simulation tools for parallel processing, which is the standard for computing architectures nowadays. This paper introduced A-Diakoptics as a method for achieving such modernization, and OpenDSS, given that it is distributed as an open-source project on the internet, presents a practical example of how this modernization can be implemented.
A simulation test case was used for illustrating the computational time gains, comparing the sequential and parallel performance on a yearly QSTS. The computing time gains were evaluated for an incremental number of partitions, displaying and discussing the benefits and limits of the algorithm. This test case is publicly available on the internet for the reader to evaluate if desired.
The results presented confirm the performance improvement in QSTS simulations for medium- and large-scale power system models. They also provide guidance on the objectives of circuit tearing for reducing the circuit model complexity to accelerate QSTS simulations.
The test cases presented in this document and the A-Diakoptics capabilities are available in OpenDSS version 9.3, which can be downloaded from the internet. In addition to the test case presented in this paper, five more test cases offering different levels of complexity are available at OpenDSS/Code/[r3354]/trunk/Version8/Distrib/Examples/A-Diakoptics (sourceforge.net) (accessed on 1 January 2022). These cases and their purpose are discussed in [16] and serve as sources of reference for applying A-Diakoptics with OpenDSS in other applications.
In addition to the methods presented in this paper, others using different parallelization techniques and approaches have also been investigated as a result of the SunShot National Laboratory Multiyear Partnership Program (SuNLaMP). The findings, conclusions, and a more extended explanation of these methods can be found in [13]. For future implementations and demonstrations of methods for accelerating simulations, we expect to continue using OpenDSS as the open-source project serving as a proof of concept for this type of research, demonstrating viable techniques and methods for modernizing power simulation tools.

Author Contributions

D.M. developed the A-Diakoptics technique and performed the implementation within OpenDSS; he also performed the technical analysis for obtaining the results presented here. R.D. is the original author of the simulation tool OpenDSS; his technical advice was vital for the project's development. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded originally by the SunShot National Laboratory Multiyear Partnership Program (SuNLaMP) [13]. The recent improvements to the technique and its implementation were funded by EPRI under the Technology Innovation program for 2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The results of this study and data sources are available at [16], which can be accessed through the EPRI member center at https://www.epri.com/research/programs/0TIZ12/results/3002021419 (accessed on 1 January 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

A-Diakoptics: Actor-based diakoptics
BBDM: Bordered block diagonal matrix
DG: Distributed generation
E: Array of complex numbers representing the voltages at each node in the model
IPC(E): Array of complex numbers representing the compensation currents from power conversion devices (shunt connected)
PC: Power conversion device (shunt connected)
PV: Photovoltaic cells array
QSTS: Quasi-static time series
YSystem/YBus: Admittance matrix of the power system model
YII: Same as YSystem
ZCC: Connections matrix, built by combining the contours matrix (also called tensors) with ZTT and the links between the subsystems
ZCT/ZTC: Complementary matrices, obtained with partial components of ZCC
ZTT: Trees matrix containing the inverted Y matrices of the isolated subsystems when tearing the interconnected system

References

  1. Deboever, J.; Zhang, X.; Reno, M.J.; Broderick, R.J.; Grijalva, S.; Therrien, F. Challenges in Reducing the Computational Time of QSTS Simulations for Distribution System Analysis; Sandia National Laboratories: Albuquerque, NM, USA, 2017.
  2. Montenegro, D.; Dugan, R.C.; Reno, M.J. Open Source Tools for High Performance Quasi-Static-Time-Series Simulation Using Parallel Processing. In Proceedings of the 2017 IEEE 44th Photovoltaic Specialist Conference (PVSC), Washington, DC, USA, 25–30 June 2017; pp. 3055–3060. [Google Scholar] [CrossRef]
  3. Shahidehpour, M.; Wang, Y. Communication and Control in Electric Power Systems: Applications of Parallel and Distributed Processing; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
  4. Montenegro, D. Actor’s Based Diakoptics for the Simulation, Monitoring and Control of Smart Grids. Université Grenoble Alpes, 2015GREAT106, 2015. Available online: https://tel.archives-ouvertes.fr/tel-01260398 (accessed on 1 November 2021).
  5. Montenegro, D.; Ramos, G.A.; Bacha, S. Multilevel A-Diakoptics for the Dynamic Power-Flow Simulation of Hybrid Power Distribution Systems. IEEE Trans. Ind. Inform. 2016, 12, 267–276. [Google Scholar] [CrossRef]
  6. Montenegro, D.; Ramos, G.A.; Bacha, S. A-Diakoptics for the Multicore Sequential-Time Simulation of Microgrids Within Large Distribution Systems. IEEE Trans. Smart Grid. 2017, 8, 1211–1219. [Google Scholar] [CrossRef]
  7. Kron, G. Detailed Example of Interconnecting Piece-Wise Solutions. J. Frankl. Inst. 1955, 1, 26. [Google Scholar] [CrossRef]
  8. Kron, G. Diakoptics: The Piecewise Solution of Large-Scale Systems; (No. v. 2); Macdonald: London, UK, 1963. [Google Scholar]
  9. Hewitt, C. Actor Model of Computation: Scalable Robust Information Systems. In Proceedings of Inconsistency Robustness 2011, Stanford University, Stanford, CA, USA, 16–18 August 2011; Volume 1, p. 32. [Google Scholar]
  10. Hewitt, C.; Meijer, E.; Szyperski, C. The Actor Model (Everything You Wanted to Know, but Were Afraid to Ask). Microsoft. Available online: http://channel9.msdn.com/Shows/Going+Deep/Hewitt-Meijer-and-Szyperski-The-Actor-Model-everything-you-wanted-to-know-but-were-afraid-to-ask (accessed on 15 May 2015).
  11. Watson, N.; Arrillaga, J. Power Systems Electromagnetic Transients Simulation; Institution of Engineering and Technology: London, UK, 2003. [Google Scholar]
  12. Dugan, R. OpenDSS Circuit Solution Technique; p. 1. Available online: https://sourceforge.net/p/electricdss/code/HEAD/tree/trunk/Version8/Distrib/Doc/OpenDSS%20Solution%20Technique.pdf.1773234 (accessed on 1 January 2022).
  13. Rapid QSTS Simulations for High-Resolution Comprehensive Assessment of Distributed PV. SAND2021-2660. 2021. Available online: https://www.osti.gov/servlets/purl/1773234 (accessed on 1 November 2021).
  14. Davis, T.A.; Natarajan, E.P. Algorithm 907: KLU, a direct sparse solver for circuit simulation problems. ACM Trans. Math. Softw. 2010, 37, 36. [Google Scholar] [CrossRef]
  15. Happ, H.H. Piecewise Methods and Applications to Power Systems; Wiley: Hoboken, NJ, USA, 1980. [Google Scholar]
  16. Advancing Spatial Parallel Processing for QSTS: Improving the A-Diakoptics Suite in OpenDSS; EPRI: Palo Alto, CA, USA, 2021.
  17. Montenegro, D.; Dugan, R.C. Diakoptics Based on Actors (A-Diakoptics) Suite for OpenDSS; EPRI, Online; 2021; Available online: https://sourceforge.net/p/electricdss/code/HEAD/tree/trunk/Version8/Distrib/Doc/A-Diakoptics_Suite.pdf (accessed on 1 November 2021).
  18. Fuller, J.; Kersting, W.; Dugan, R.; Carneiro, S., Jr. Distribution Test Feeders. IEEE Power and Energy Society. Available online: http://0-ewh-ieee-org.brum.beds.ac.uk/soc/pes/dsacom/testfeeders/ (accessed on 23 October 2013).
  19. Karypis, G.; Kumar, V. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM J. Sci. Comput. 1998, 20, 359–392. [Google Scholar] [CrossRef]
Figure 1. A-Diakoptics algorithm in OpenDSS.
Figure 2. EPRI’s Circuit 5.
Figure 3. Errors calculated at Circuit 5 feeder’s head, yearly simulation, two zones; p1, p2, and p3 are the phases.
Figure 4. Errors calculated at Circuit 5 feeder’s head, yearly simulation, four zones; p1, p2, and p3 are the phases.
Figure 5. Simulation time required to solve a yearly simulation of Circuit 5.
Figure 6. Simulation time reduction as a percentage of the reference.
Figure 7. Partition statistics considering five different number of zones for Circuit 5.
Table 1. Circuit 5 technical features.
Feature Name | Value
Number of buses | 2998
Number of nodes | 3437
Total active power | 7.281 MW
Total reactive power | 3.584 Mvar
Table 2. Partition statistics for Circuit 5 feeder, two zones.
Name | Value
Circuit reduction (%) | 32.24
Maximum imbalance (%) | 52.43
Average imbalance (%) | 26.21
Table 3. Partition statistics for Circuit 5 feeder, four zones.
Name | Value
Circuit reduction (%) | 45.94
Maximum imbalance (%) | 88.11
Average imbalance (%) | 53.75
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Montenegro, D.; Dugan, R. Simplified A-Diakoptics for Accelerating QSTS Simulations. Energies 2022, 15, 2051. https://0-doi-org.brum.beds.ac.uk/10.3390/en15062051

