In the middle of the desert you can say anything you want
ralphbean/taskw: python taskwarrior api is a Python library for talking to Taskwarrior, by default through its import/export functionality.
Looks really neat, and it's a better way to get the tasks for my statusbar than my planned “read and parse the shell output of the cli command”.
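A minimal sketch of what that could look like, assuming taskw's documented TaskWarrior.load_tasks(), which returns a dict keyed by status; the pending_descriptions helper is mine, not part of the library:

```python
def pending_descriptions(tasks):
    """Extract descriptions of pending tasks from a load_tasks()-style dict."""
    return [t["description"] for t in tasks.get("pending", [])]

if __name__ == "__main__":
    # Requires taskwarrior itself plus `pip install taskw`; talks to
    # Taskwarrior through its import/export interface under the hood.
    from taskw import TaskWarrior

    w = TaskWarrior()
    print(pending_descriptions(w.load_tasks()))
```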
Create a new project, point it at the folder with the sources, and instead of trying to use an existing poetry environment, just create a new one. It will use the same virtualenv as usual when running poetry shell
inside that directory. Nice!1
The project uses the ./src/package_name
layout (220105-1142 Order of directories inside a python project), which created issues in the editor (tests and files run fine though). Fixed by adding ./src
as Source Root; then it parses all imports as package_name
Official Black instructions for Pycharm worked for me: Editor integration — Black 21.12b0 documentation
This was tricky! I found a really nice post2 that showed how to spawn vim from ideavim. I tried following its example but
nmap <leader>f :action Tool_External_Tools_black<CR>
didn’t work.
The post mentioned running :actionlist
inside the editor to get the list of all available actions (I used to rely on a github gist for that!). Well, would you believe it, External Tools
has a space inside it.
So the correct line is:
nmap <leader>f :action Tool_External Tools_black<CR>
Wow. …Wow.
In any case works now!
Reddit suggested using poetry env info, which gives info about the environment, and adding that interpreter to pycharm directly ↩︎
Customising IdeaVim - Chathura Colombage; His example .ideavimrc from that post is really really interesting, TODO steal ideas! ↩︎
NLP Course @ lena-voita.github.io
(Ty AA for the link!)
This is a really nice course covering the basics of NLP, putting it here for now, until I finally finish setting https://serhii.net/links/ up.
Covers:
After enabling “strict” newlines for markdown/hugo conformity I had to decide whether a line break would be two trailing spaces or a single backslash (Line breaks in markdown)
Backslashes didn’t work out, so whitespaces it is - how to make them visible when editing?
Obsidian forum1 provided this wonderful snippet:
.cm-trailing-space-new-line, .cm-trailing-space-a, .cm-trailing-space-b, .cm-tab{
font-size: 0;
}
.cm-trailing-space-a::before, .cm-trailing-space-b::before, .cm-trailing-space-new-line::before, .cm-tab::before{
content:'·';
color:var(--text-faint);
font-size: initial;
}
.cm-trailing-space-new-line::before {
content:'↵';
}
.cm-tab::before {
content:'⟶'
}
Works!
(And shows tabs as bonus, perfect.)
I seem to keep googling this. … and this is not final and magic and I should actually understand this on a deeper level.
Not today.
So.
Reading lines in a file:
while IFS="" read -r p || [ -n "$p" ]
do
printf '%s\n' "$p"
done < peptides.txt
For outputs of a command:
while read -r p; do
  echo "$p"
done < <(printf 'one\ntwo\n')
Otherwise: an easy option that I can memorize, both for lines of a command’s output and of a file, but which will skip the last line if it doesn’t have a trailing newline:
for word in $(cat peptides.txt); do echo $word; done
Same idea but avoiding this bug:
cat peptides.txt | while read line || [ -n "$line" ];
do
# do something with $line here
done
Same as the first cat option above, same drawbacks, but no use of cat:
while read p; do
echo "$p"
done <peptides.txt
Same as above but without the drawbacks:
while IFS="" read -r p || [ -n "$p" ]
do
printf '%s\n' "$p"
done < peptides.txt
This makes read use file descriptor 10 instead of stdin (leaving stdin free for the commands inside the loop); the 10
is arbitrary:
while read -u 10 p; do
...
done 10<peptides.txt
(All this from the same SO answer1).
In general, if you’re using “cat” with only one argument, you’re doing something wrong (or suboptimal).
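The skipped-last-line pitfall above is easy to demonstrate by driving bash from Python (a quick sketch; assumes bash is on PATH):

```python
import os
import subprocess
import tempfile

# A file whose last line has NO trailing newline
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("one\ntwo\nthree")
    path = f.name

def run_loop(loop: str) -> list:
    """Run a bash while-read loop over the temp file, return the echoed lines."""
    out = subprocess.run(["bash", "-c", f'{loop} < "{path}"'],
                         capture_output=True, text=True)
    return out.stdout.splitlines()

naive = run_loop('while IFS= read -r p; do echo "$p"; done')
robust = run_loop('while IFS= read -r p || [ -n "$p" ]; do echo "$p"; done')
os.unlink(path)

print(naive)   # ['one', 'two'] -- the final "three" is silently dropped
print(robust)  # ['one', 'two', 'three']
```

read fills $p with the last fragment but returns non-zero at EOF, which is why the `|| [ -n "$p" ]` rescue works.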
jq -r $stuff returns raw output; instead of quoted ‘correct’ values like
"one"
"two"
"three"
it would return
one
two
three
Wanted to rename all tasks belonging to a certain project from a certain timeframe.
I use subprojects (pro:w.one.two) heavily and want to keep the children names.
Final command I used:
for p in $(task export "\(pro.is:w or pro:w.\) entry.after:2019-04-30 entry.before:2021-12-31" | jq ".[].project" -r | sort | uniq);
do task entry.after:2019-04-30 entry.before:2021-12-31 pro:$p mod pro:new_project_name$p;
done
Used project:w for work; now there’s new work, so it makes sense to rename the previous one for cleaner separation.
To list all tasks created in certain dates (task all to cover tasks that aren’t just status:pending as by default):
task all pro:w entry.after:2019-04-30 entry.before:2021-12-31
1213 tasks. Wow.
Remembering when I was using sprints and renaming them at the end: pro:w covers both pro:w.test and pro:whatever.
I was disciplined, but wanted to cover all pro:w and pro:w.whatever while excluding pro:whatever, just in case, so tested this, same result:
task all "\(pro.is:w or pro:w.\) entry.after:2019-04-30 entry.before:2021-12-31"
Okay, got them. How to modify? Complexity: I need to change part of the project, so pro:w.one -> pro:old_w.one instead of changing all tasks’ project to pro:old_w.
There’s prepend2, but it seems to work only for descriptions.
There’s the t mod /from/to/ syntax3, but I couldn’t get it to work on part of the project.
There’s regex4, but it works only for filters, and only if enabled.
There’s JSON export, but I don’t feel like parsing JSON, feels too close to the day job :)
You can list projects like this:
# currently used
task projects
# all
task rc.list.all.projects=1 projects
This gives hope, if I get the list of projects I can just iterate through them and rename all of them individually.
Can’t find this documented, but task rc.list.all.projects=1 projects pro:w filters the projects to ones starting with w.
Sadly, the output format splits the project names into their hierarchy:
Project Tasks
w 1107
a 1
aan 1
Can I change the character used for the hierarchy, so that I’d get them as a list of separate names with dots in them? Not exposed through config, from what I can see.
…alright, JSON export it is
It exists, and of course it accepts filters <3
task export "\(pro.is:w or pro:w.\) entry.after:2019-04-30 entry.before:2021-12-31" | wc -l
1215 lines - about the same ballpark as the number of tasks.
JSON output is an array of these objects:
{
"id": 0,
"description": "write attn mechanism also on token features",
"end": "20191016T143449Z",
"entry": "20191016T120514Z",
"est": "PT1H",
"modified": "20200111T094548Z",
"project": "w",
"sprint": "2019-41",
"status": "completed",
"uuid": "d3f2b2ac-ec20-4d16-bd16-66b2e1e568f9",
"urgency": 2
},
Okay
> task export "\(pro.is:w or pro:w.\) entry.after:2019-04-30 entry.before:2021-12-31" | jq ".[].project" | uniq
"w.lm"
"w.l.p"
"w.lm"
"w.lm"
"w.l.py"
"w.lm"
"w"
Proud that I wrote that from the first try, as trivial as it is. Thank you ExB for teaching me to parse JSONs.
The quotes: jq -r returns raw output5, so same as above but without the quotes.
Final command to get the list of projects:
task export "\(pro.is:w or pro:w.\) entry.after:2019-04-30 entry.before:2021-12-31" | jq ".[].project" -r | sort | uniq
(Remembering that uniq works only after sort.)
And let’s make it a loop, final command:
for p in $(task export "\(pro.is:w or pro:w.\) entry.after:2019-04-30 entry.before:2021-12-31" | jq ".[].project" -r | sort | uniq);
do task entry.after:2019-04-30 entry.before:2021-12-31 pro:$p mod pro:new_project_name$p;
done
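The export → unique projects → new names pipeline translates naturally to Python too; a sketch on inlined sample data (in practice the export string would come from subprocess running task export, and the old_ prefix stands in for whatever new_project_name is):

```python
import json

# Stand-in for the output of `task export <filter>`
export = json.dumps([
    {"uuid": "a", "project": "w.lm"},
    {"uuid": "b", "project": "w"},
    {"uuid": "c", "project": "w.l.py"},
    {"uuid": "d", "project": "w.lm"},
])

# The set comprehension does the `jq ".[].project" -r | sort | uniq` part
projects = sorted({t["project"] for t in json.loads(export)})
renames = {p: "old_" + p for p in projects}  # keeps children: w.lm -> old_w.lm
print(renames)
```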
Nice but forgotten stuff:
task summary
(haha see what I did there?) ↩︎
How to remove quotes from the results? · Issue #1735 · stedolan/jq ↩︎
Had /dtb/days/day122.md-type posts, the older ones, and /dtb/days/1234-1234-my-title.md-type newer posts. They both lived in the same directory on disk, /content/dtb/days/.... The latter were converted from Obsidian, which meant (among other things) that deleting a page in Obsidian wouldn’t automatically delete the corresponding converted one in Hugo, and I couldn’t just rm -rf ..../days before each conversion because that would delete the older day234.md posts.
I wanted to put them in different folders on disk in ./content/, but keep the url structure serhii.net/dtb/post-name/ for both of them.
Solution was making all /dtb posts (incl. pages) use the section (dtb) in the permalink in config.yaml:
permalinks:
dtb: '/:section/:filename'
Now they do, regardless of their location on disk.
Then I moved the old posts into ./content/dtb/old_days and kept the new ones in ./content/dtb/days.
Lastly, this removes all converted posts (= all .mds except _index.md) before conversion, so that no stray markdown posts are left:
find $OLD_DAYS | grep -v _index.md | xargs rm
Google still has serhii.net/dtb/days/... pages cached, and currently they’re available both from there and from /dtb/.... I can’t find a way to redirect all of the /dtb/days/... to /dtb/... except by manually adding stuff to the frontmatter of each. I have scripts for that, but it’s still ugly.
.htaccess is our friend:
RewriteRule ^d/dtb(.*)$ /dtb$1 [R=301,NC,L]
RewriteRule ^dtb/days(.*)$ /dtb$1 [R=301,NC,L]
This is getting more and more bloated.
Generally, I see absolutely no reason not to rewrite this mess of build scripts in Python. obyde is a Python package, and handling settings, file operations etc. is more intuitive to me in Python.
Instead I keep re-learning bash/zsh escape syntax every time, and I keep procrastinating on error handling for the same reasons.
The only non-native things would be rsync and git, which can be handled through a subprocess.
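Those two calls could be wrapped in a few lines; a sketch (the rsync/git invocations are placeholders, commented out):

```python
import subprocess

def run(cmd: list) -> None:
    """Run an external command; check=True raises CalledProcessError on
    a non-zero exit, which is exactly the error handling the bash
    version keeps postponing."""
    subprocess.run(cmd, check=True)

# Hypothetical build steps; paths and remotes are placeholders:
# run(["rsync", "-av", "--delete", "public/", "user@host:/var/www/site/"])
# run(["git", "add", "-A"])
# run(["git", "commit", "-m", "publish"])
```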
pytest-datafiles · PyPI allows copying files to a temporary directory, then they can be modified etc. Really neat!
Sample:
import pytest
from pathlib import Path

ASSETS_DIR = Path(__file__).parent / "assets"
PROJ_DIR = ASSETS_DIR / "project_dir"
konfdir = pytest.mark.datafiles(PROJ_DIR)

@konfdir
def test_basedir_validity(datafiles):
    assert directory_is_valid(datafiles)
Also love this bit:
Note about maintenance: This project is maintained and bug reports or pull requests will be addressed. There is little activity because it simply works and no changes are required.
SADLY this means that the returned path is a py.path; I’m not the only one complaining about that1
Pytest has newer native fixtures that use pathlib (Temporary directories and files — pytest documentation), but datafiles hasn’t been moved to them.
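Until then, converting what the fixture returns costs one line, since py.path stringifies to a normal path (the helper name is mine):

```python
from pathlib import Path

def as_path(p) -> Path:
    """Convert a py.path.local (or anything with a sane str()) to a pathlib.Path."""
    return Path(str(p))

# inside a test: directory_is_valid(as_path(datafiles))
```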
A conftest.py file gets imported and run before all the other ones.
Pytest resolves all imports at the very beginning; I used conftest.py to import a package so that it’d be the one used by the imports in the files that are imported by the tests (seeing that there’s a mypackage already imported, subsequent import mypackage statements are ignored)
(Can I think of this as something similar to an __init__.py?)