This post is Part 3 of a multi-part series on basic debugging tools that I have found helpful while developing a pure Python simulation model in the PyCharm Integrated Development Environment (IDE).
Before I begin, here are links to previous blog posts I have written on this topic:
- Part 1 of this debugging series.
- Part 2 of this debugging series.
- Other discussion of PyCharm functionality I have found useful
In this post I will focus on PyCharm’s “Coverage” features, which are very useful for debugging because they show which parts of your program (e.g., modules, classes, methods) are and are not being accessed during a given run of the model. If you are instead interested in how much time is spent running particular sections of code, or in inspecting the values of variables during execution, see the previous posts linked above on profiling and breakpoints.
To see which parts of my code are being accessed, I have found it helpful to create and run what are called “unit tests”. You can find more on unit testing here, or by searching for it online. (Please note that I am not a computer scientist, so I do not intend to provide a comprehensive summary of every possible approach; I am simply describing something that has worked well for me.) In short, unit testing means evaluating sections (units) of source code to determine whether those units are performing as they should. I have been using unit testing to execute a run of my model (called “PySedSim”) and see which sections of my code are and are not being accessed.
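As a minimal sketch of what a unit test looks like in general (using a hypothetical `trap_sediment` function as the unit under test, not anything from PySedSim), a test case subclasses `unittest.TestCase`, and each `test_*` method checks one unit of behavior:

```python
import unittest

# Hypothetical function standing in for one "unit" of a simulation model.
def trap_sediment(inflow_load, trap_efficiency):
    """Return the mass of sediment trapped by a reservoir."""
    return inflow_load * trap_efficiency

class TestTrapSediment(unittest.TestCase):
    def test_trap_sediment(self):
        # The unit "passes" if the computed value matches the expectation.
        self.assertAlmostEqual(trap_sediment(100.0, 0.4), 40.0)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestTrapSediment)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

PyCharm discovers and runs test classes like this one for you, which is what the steps below set up.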
I integrated information from the following sources to prepare this post:
- This website has a useful post on how to create a unit test, so some of the first few steps I list below are borrowed from this site, starting in the “Creating test” section.
- PyCharm’s blog offers useful details regarding PyCharm’s code coverage features.
- PyCharm’s home page also offers some useful discussion of code coverage features.
Follow these steps:
Step 1. Open the script containing the class or method you want to assess, and click on the function or method of interest.
In my case, I am assessing the top-level Python file “PySedSim.py”, which is the file in my program that calls all of the classes to run a simulation (e.g., Reservoirs and River Channels). Within this file, I have clicked on the PySedSim function. Note that these files are already part of a PyCharm project I created, and a Python interpreter has already been configured; you will need to do that first.
Step 2. With your cursor still on the function/method of interest, press Ctrl+Shift+T.
A window like the one below should appear. Click “Create New Test”.
Step 3. Create a new test. Specify the location of the script you are testing, and either keep the suggested test file and class names or modify them. Then check the box next to the test method, and click “OK”.
Step 4. Modify the new script that has been created. (In my case, this file is called “test_pySedSim.py”, and appears initially as it does below).
I then modified this file so that it reflects testing I want to conduct on the PySedSim method in the PySedSim.py file.
In my case, it appears like this.
```python
from unittest import TestCase
from PySedSim import PySedSim

class TestPySedSim(TestCase):
    def test_PySedSim(self):
        PySedSim()
```
Note that a great deal of testing functionality is now possible in this test file. I suggest reviewing the website linked above carefully for ideas. You can raise errors and use the self.fail() method to indicate whether your program is producing acceptable results. For example, if the program produces a negative result when it should produce a positive one, you can signal to PyCharm that this represents a failure, so the test is not passed. This offers you a lot of flexibility in testing the various methods in your program.
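Here is a small, self-contained sketch of that idea; the `simulated_storage` function is a hypothetical stand-in for a model output, not part of PySedSim. The test calls `self.fail()` when the output has the wrong sign:

```python
import unittest

def simulated_storage():
    """Hypothetical stand-in for a model output that should be non-negative."""
    return 12.5

class TestModelOutputs(unittest.TestCase):
    def test_storage_is_non_negative(self):
        value = simulated_storage()
        if value < 0:
            # Tell the test runner this result is unacceptable.
            self.fail("Storage should be non-negative, got %s" % value)

# Run the test (this is what PyCharm does when it runs the configuration).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestModelOutputs)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True, since 12.5 is non-negative
```

If `simulated_storage()` returned a negative number, `self.fail()` would mark the test as failed, and PyCharm would report that in its test results window.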
In my case, all I want to do is run the model and see which sections were accessed, not evaluate the specific results it produces, so PyCharm should execute the model and indicate that it has “passed” the unit test (once I create and run the unit test).
Step 5. In the run configuration menu at the top of the screen (shown clicked on in the image below), click “Edit Configurations”.
From here, click on the “+” button, and go to Python tests –> Unittests.
Step 6. In the “Run/Debug Configurations” window, give your test a name in the “Name” box, and in the “Script” box indicate the path to the test script you created in Step 4. Specify any parameters that the method requires in order to run. I did not specify any environment preferences, as the interpreter field was already filled in. Click “OK” when you are done.
Step 7. Your test should now appear in the same configuration menu you clicked on in Step 5. Select it, then click the button at the top to “Run with Coverage”. (In my case, run water_programming_blog_post with coverage.)
Note that the test will likely take more time to run than a normal execution of your code would.
Step 8. Review the results.
A coverage window should appear on the right side of your screen, indicating what portion (%) of the various functions and methods in the program was actually entered.
To generate a more detailed report, you can click the button with the green arrow inside the coverage window, which offers options for how to generate a report. I selected the option to generate an HTML report. If you then open the “index” HTML file that appears in your working directory, you can click through to see the coverage for each method.
For example, here is an image of a particular class (in reservoir.py), showing in green the sections of code that were entered, and in red the sections that were not. I used this to discover that particular portions of some methods were not being accessed when they should have been. The script files themselves now also show green and red markers next to lines of code that were or were not executed, respectively. See the image above for an example of this.
PyCharm also indicates whether or not the unit test has passed. (Although I did not actually test for specific outputs from the program, I could have tested model outputs as described earlier, and any test failure would be indicated here.)
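Finally, if you ever want a quick line-coverage check outside of PyCharm, Python’s standard-library `trace` module can record per-line execution counts. This is just a minimal sketch with toy functions to illustrate the idea (it is not the mechanism PyCharm itself uses for its Coverage features):

```python
import trace

def reached():
    return "hit"

def never_reached():
    # This function is never called, so none of its lines will be counted.
    return "miss"

# count=1 records how many times each line executes; trace=0 suppresses
# echoing each line to the console as it runs.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(reached)

# results().counts maps (filename, line_number) -> execution count,
# so any line absent from it (e.g., the body of never_reached) was not run.
counts = tracer.results().counts
hit_lines = sorted(line for (_filename, line) in counts)
print(len(hit_lines) >= 1)  # → True: the body of reached() was executed
```

This gives you the same kind of entered/not-entered information as PyCharm’s red and green markers, just in a raw, programmatic form.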