In the middle of the desert you can say anything you want
pytest-datafiles · PyPI is nice but returns a `py.path` instead of a `pathlib.Path`. Tried to write something to make it convert automatically:
```python
import pytest
from pathlib import Path

ASSETS_DIR = Path(__file__).parent / "assets"


@pytest.fixture
def pfiles(datafiles):
    # Fixture that converts pytest-datafiles' py.path into a pathlib.Path
    return Path(str(datafiles))


@pytest.mark.datafiles(ASSETS_DIR)
def test_read_meta_json(pfiles):
    assert do_sth_with_file(pfiles)
```
First nontrivial fixture I've written; maybe it's a really bad idea to do it like that. This feels like a general use case, and someone must have had this problem before.
What’s init for me? Designing for Python package imports | Towards Data Science looks really interesting! It’s not about the syntax, but about the basic design philosophies + examples of packages that use it.
Other stuff I learned about `__init__.py`: you can drop a `pdb` breakpoint physically into an `__init__.py`, and, for example, look at the stack of what called it with `w`.
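A non-interactive sketch of the same idea (`who_imported_me` is a hypothetical name I made up for illustration): `traceback.format_stack()` gives the same information pdb's `w` would show, i.e. who called the code you're currently in. In a real package you'd put `import pdb; pdb.set_trace()` into the `__init__.py` instead.

```python
import traceback


def who_imported_me():
    # Returns the same call stack that pdb's `w` command would print
    # interactively if this were a breakpoint in an __init__.py.
    return "".join(traceback.format_stack())


stack = who_imported_me()
# The formatted stack names every frame, including this function's own.
print("who_imported_me" in stack)
```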
Today, I ran this:

```shell
git commit -m "TICKETNAME Export of X generated with `name-of-some-utility`"
```

The commit message on GitLab was:

"TICKETNAME Export of X generated with (Starting the export of data, wait till it downloads...)"
Clear but fascinating way it can break.
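A minimal reproduction of what happened, using `echo` in place of the real utility: inside double quotes, backticks are still command substitution, so the utility actually ran and its stdout ended up in the commit message. Single quotes would have kept them literal.

```shell
# Double quotes: the backticked command runs, its output is substituted.
msg="Export generated with `echo fake-utility-output`"
echo "$msg"    # Export generated with fake-utility-output

# Single quotes: everything stays literal, backticks included.
msg='Export generated with `echo fake-utility-output`'
echo "$msg"    # Export generated with `echo fake-utility-output`
```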
Do I want to get a clear picture of all the various levels of escaping, including globs, backticks, backslashes etc. happening in the shell?
Why doesn’t the `#` in `git commit -m "Ticket #1231"` result in the `1231` being commented out and a syntax error? I know it doesn’t, but I wouldn’t be able to predict that behaviour without this knowledge. Would single quotes change much? How would you actually comment out the rest of the line this way?
What are the rules that decide whether a `*` gets expanded by the shell or passed to, say, `scp` as-is? Etc. etc. etc.
It’s all knowable and learnable, but I was never sure whether the ROI was worth it for me. Till now, trial and error always worked in the rare instances where I had to do something complex with bash scripts, but this is the first time it has bitten me in real life in an unexpected way.
I find this approach brilliant (and of course it works with everything split into separate functions, a la my last post: 211124-1744 argparse notes):
```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument(
    '-d', '--debug',
    help="Print lots of debugging statements",
    action="store_const", dest="loglevel", const=logging.DEBUG,
    default=logging.WARNING,
)
parser.add_argument(
    '-v', '--verbose',
    help="Be verbose",
    action="store_const", dest="loglevel", const=logging.INFO,
)
args = parser.parse_args()
logging.basicConfig(level=args.loglevel)
```
And TIL about `dest=`, which will make my life much easier too by outsourcing more logic to argparse.
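A quick check of how `dest=` makes both flags write into the same attribute (same parser as above, condensed, parsing argument lists directly instead of `sys.argv`):

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-d", "--debug", action="store_const", dest="loglevel",
                    const=logging.DEBUG, default=logging.WARNING)
parser.add_argument("-v", "--verbose", action="store_const", dest="loglevel",
                    const=logging.INFO)

# All three cases end up in the single args.loglevel attribute, thanks to dest=
assert parser.parse_args([]).loglevel == logging.WARNING
assert parser.parse_args(["-v"]).loglevel == logging.INFO
assert parser.parse_args(["--debug"]).loglevel == logging.DEBUG
```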
Connected an external screen; it was dark. Googled for a solution after resetting `redshift` settings didn’t work.
So, there are a lot of ways to change brightness (SO1). `xbacklight` works with hardware-level brightness for the devices that support it.
For the others, software-level changing of gamma values is what’s usually needed, and what I did with a lot of different programs before. This worked this time:
```shell
xrandr --output LVDS1 --brightness 0.5
```
(As a bonus, it uses the already well-known and well-loved `xrandr`.)
Sad that `arandr` can’t do brightness though, but there are reasons (missing –brightness features (#35) · Issues · arandr / ARandR · GitLab).
From there I learned that `ddccontrol` is the way to change brightness for external monitors on the hardware level, and that Jolmberg/wmbright: Monitor brightness control dockapp is a back-end that tries to do everything.
TODO: this looks really, really good. Explanation of the relationship between python logging root logger and other loggers. (+ Love the way it’s split into separate `.py` files.)
`pytest` took seconds at the “Collecting…” stage. I had a directory with a lot of tiny files (`./data_1234/`) in the poetry package folder, and blamed it initially.
SO1 told me that the syntax to ignore a folder is:

```ini
[tool:pytest]
norecursedirs = subpath/*
```
Wildcards are nice, and `data*/*` was the first attempt. Nothing. Then I tried this, without success:

```ini
testpaths="tests"
```
After a one-hour saga, I found that the culprit was a package I was using. The tests imported my package, which imported the slow package, which takes seconds to import. “Collecting” seems to be not just “find test files”: pytest reads the test files and imports them along with all their dependencies. Waiting time went back to normal as soon as I commented out the import of my package from the test.
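A way to confirm that a slow import (rather than collection itself) is the culprit: Python's built-in `-X importtime` flag prints per-module import times to stderr, so you can see exactly which dependency eats the seconds.

```shell
# Every line starts with "import time:"; the cumulative column shows
# which (transitive) import is slow. `json` here stands in for your package.
python3 -X importtime -c "import json" 2>&1 | tail -n 5
```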
From within an issue, use the dropdown left of “Create merge request” -> “Create branch”. It will create a branch named “<issue-number>-<issue-title>”, for example `3-this-is-issue-number-three`.
If you use a directory structure like this:

```
resources/
src/project_name/
tests/
[...]
```

then you get these directories in the same order regardless of the name of the project! Then it’s always uniform, muscle memory has a chance, etc.
`<Ctrl-C>` of a program running inside pdb (`python3 -m pdb myscript.py` or whatever) doesn’t kill the program, but drops you into the debugger! Useful when you suspect there’s an infinite loop somewhere and want to see what exactly the program is doing when it starts using 120% of your CPU.