Quasar
November 21, 2025
Context managers are a very useful feature in Python. Most of the time, we see context managers around resource management. For example, when we open files, we want to make sure they are closed after processing (so we do not leak file descriptors). Similarly, if we open a connection to a service (or even a socket), we also want to be sure to close it accordingly. In all of these cases, you would normally have to remember to free all of the resources that were allocated, and that is only considering the best case - there could also be exceptions and error handling to deal with. Handling all possible combinations and execution paths of a program makes the code harder to maintain. There is an elegant, Pythonic way of handling this.
The with statement (PEP 343) enters the context manager. For example, the open function implements the context manager protocol, which means that a file opened with it will be automatically closed when the block is finished, even if an exception occurred.
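A minimal sketch of that canonical case:

```python
# open() returns an object implementing the context manager protocol,
# so the file is closed when the block exits, even if an exception
# is raised inside it.
with open("example.txt", "w") as fd:
    fd.write("hello")

# Outside the block, the file has been released:
print(fd.closed)  # True
```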
Context managers consist of two magic methods: __enter__ and __exit__. On the first line of the context manager, the with statement calls the first method, __enter__, and whatever this method returns is assigned to the variable after as. This is optional - we don't really need to return anything specific from __enter__, and even if we do, there is still no strict reason to assign it to a variable if it is not needed.
After this line is executed, the code enters a new context, where any other Python code can run. After the last statement in the block is finished, the context is exited, meaning that Python will call the __exit__ method of the original context manager object we first invoked.
If there is an exception or error inside the context manager block, the __exit__ method will still be called, which makes it convenient for safely cleaning up resources. In fact, this method receives the exception that was raised in the block, in case we want to handle it in a custom fashion.
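A minimal sketch of this mechanism - a context manager that records the exception it received (the class name here is just an illustration; returning a truthy value from __exit__ is what suppresses the exception):

```python
class Capture:
    """Records the exception raised inside its block and suppresses it."""

    def __enter__(self):
        self.exc_type = None
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        # __exit__ receives the exception details raised inside the block
        self.exc_type = exc_type
        return True  # a truthy return value suppresses the exception

cap = Capture()
with cap:
    raise ValueError("boom")

print(cap.exc_type)  # <class 'ValueError'>
```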
Context managers are a good way of separating concerns and isolating parts of the code that should be kept independent, because if we mix them, the logic becomes harder to maintain.
As an example, consider a situation where we want to run a backup of our database with a script. The caveat is that the backup is offline, which means that we can only do it while the database is not running, and for this we have to stop it. After running the backup, we want to be sure that we start the process again, regardless of how the backup itself went.
Instead of creating a monolithic function to do this, we can tackle this issue with context managers:
# run() is assumed to be a helper that executes a shell command
# (e.g. a thin wrapper around subprocess.run)

def stop_database():
    run("systemctl stop postgresql.service")

def start_database():
    run("systemctl start postgresql.service")

class DBHandler:
    def __enter__(self):
        stop_database()
        return self

    def __exit__(self, exc_type, ex_value, ex_traceback):
        start_database()

def db_backup():
    run("pg_dump database")

def main():
    with DBHandler():
        db_backup()

In general, we can implement context managers like the one in the previous example. All we need is a class that implements the __enter__ and __exit__ magic methods; that object will then support the context manager protocol. While this is the most common way for context managers to be implemented, it is not the only one.
The contextlib module in the Python standard library contains a lot of helper functions and objects to implement context managers or use ones already provided that can help us write more compact code.
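As an example of one of those ready-made helpers, contextlib.suppress is a context manager that silently ignores the listed exception types:

```python
import contextlib
import os

# Deleting a file that may not exist: instead of a try/except block,
# contextlib.suppress swallows the FileNotFoundError for us.
with contextlib.suppress(FileNotFoundError):
    os.remove("a_file_that_probably_does_not_exist.txt")

print("still running")
```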
Let's start by looking at the contextmanager decorator.
When the contextlib.contextmanager decorator is applied to a function, it converts the code in that function into a context manager. The function in question has to be a particular kind of function called a generator function, whose statements will be separated into what goes into the __enter__ and __exit__ magic methods, respectively.
The equivalent code in the previous example can be written as:
import contextlib

@contextlib.contextmanager
def db_handler():
    try:
        stop_database()
        yield
    finally:
        start_database()

with db_handler():
    db_backup()

Here, we define the generator function and apply the @contextlib.contextmanager decorator to it. The function contains a yield statement, which makes it a generator function. Details on generators are not important at this point. All we need to know is that when the decorator is applied, everything before the yield statement will be run as if it were part of the __enter__ method. Then the yielded value is going to be the result of the context manager evaluation (what __enter__ would return, and what would be assigned to a variable if we chose to write as x:) - in this case, nothing is yielded.
At the yield statement, the generator function is suspended and the body of the with block runs - here, again, the backup code for our database. After this completes, execution resumes, so we can consider every line that comes after the yield statement to be part of the __exit__ logic.
Writing context managers like this has the advantage that it is easier to refactor existing functions and reuse code, and it is in general a good idea when we need a context manager that doesn't belong to any particular object.
Using context managers is considered idiomatic.
The use of comprehensions is recommended to create data structures in a single instruction, instead of multiple operations. For example, if we wanted to create a list with calculations over some numbers, instead of:
we could do:
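The two snippets referenced above appear to be missing from the text; a minimal reconstruction, assuming a simple squaring calculation (the actual transformation in the original may differ):

```python
numbers = [1, 2, 3, 4]

# Multiple operations: build the list imperatively
squares = []
for n in numbers:
    squares.append(n ** 2)

# Single instruction: the equivalent list comprehension
squares = [n ** 2 for n in numbers]
print(squares)  # [1, 4, 9, 16]
```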
The introduction of assignment expressions in PEP-572 is also very useful.
# Compute partial sums in a list comprehension
values = [1, 2, 3]  # sample data so the snippet runs standalone
total = 0
partial_sums = [total := total + v for v in values]
print("Total:", total)

The := operator is informally known as the walrus operator.
Keep in mind, however, that more compact code does not always mean better code. If, to write a one-liner, we have to create a convoluted expression, then it's not worth it, and we would be better off using the naive approach. This is related to the keep it simple, stupid (KISS) principle.
Another good reason for using assignment expressions is performance. If we have to use a function as part of our transformation logic, we don't want to call it more often than necessary. Assigning the result of the function to a temporary identifier is a good optimization technique.
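A sketch of this pattern, using a hypothetical transform function as a stand-in for an expensive computation:

```python
def transform(x):
    # hypothetical stand-in for an expensive computation
    return x * x

data = [1, 2, 3, 4]

# Without the walrus operator, transform(x) would have to be called
# twice per item: once in the filter and once in the output expression.
results = [y for x in data if (y := transform(x)) > 4]
print(results)  # [9, 16]
```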
All of the properties and functions of an object are public in Python, which is different from languages where properties can be declared public, private, or protected. That is, there is no mechanism for preventing the caller from invoking any attribute an object has.
There is no strict enforcement, but there are some conventions. An attribute that starts with an underscore is meant to be private to that object, and we expect that no external agent calls it (but, again, nothing prevents this).
There are some conventions and implementation details that make use of underscores in Python, which is an interesting topic in itself that’s worthy of analysis.
Like I mentioned, by default all attributes of an object in Python are public. Consider the following example:
class Point:
    def __init__(self, x_value, y_value):
        self._x_value = x_value
        self._y_value = y_value

p1 = Point(1.0, 2.0)
p1.__dict__

Output:
{'_x_value': 1.0, '_y_value': 2.0}
Attributes that start with an underscore must be respected as private and not called externally. Using a single underscore prefix is the Pythonic way of clearly delimiting the interface of the object.
Note that using too many internal methods and attributes could be a sign that a class has too many tasks and doesn't comply with the single responsibility principle.
There is, however, a common misconception that some attributes and methods can actually be made private. Let us imagine that the x_value and y_value attributes are defined with a leading double underscore instead.
class Point:
    def __init__(self, x_value: float, y_value: float):
        self.__x_value = x_value
        self.__y_value = y_value

    def scale(self, scale_factor: float):
        self.__x_value *= scale_factor
        self.__y_value *= scale_factor

p1 = Point(1.0, 2.0)
p1.scale(2.0)
p1.__x_value

Output:
--------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[7], line 13
     11 p1 = Point(1.0, 2.0)
     12 p1.scale(2.0)
---> 13 p1.__x_value

AttributeError: 'Point' object has no attribute '__x_value'

Some developers use this method to hide attributes, thinking that x_value is now private and that no other object can modify it. Now, take a look at the exception raised when trying to access __x_value. It's an AttributeError saying that the attribute doesn't exist. It doesn't say anything like "this is private" or "this can't be accessed".
What's actually happening is that with the double underscores, Python creates a different name for the attribute (this is called name mangling). It creates the attribute with the name _<class_name>__<attribute_name> instead. In this case, an attribute named _Point__x_value is created.
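We can verify this directly - the attribute is not hidden, only renamed by the interpreter:

```python
class Point:
    def __init__(self, x_value: float):
        self.__x_value = x_value  # mangled to _Point__x_value

p = Point(1.0)

# The mangled name shows up in the instance dictionary
print(vars(p))            # {'_Point__x_value': 1.0}

# ...and can still be accessed from outside the class
print(p._Point__x_value)  # 1.0
```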