
07 Dec 2023

Notes to self and lessons learned, OOP and programming in general

I decided that I should go back to the digital garden roots of this, and use this note as a small journal of conceptual/high-level things that I believe would make me a better programmer.

And that I’ll re-read this every time I think of something to add here.

The master's thesis has given me ample occasions to find out about these things, and will give me ample occasions to use them before it's over. Just like with dashes (231205-1311 Notes from paper review#Hyphens vs dashes (em-dash, en-dash)), practiced enough, it will stick.

OOP (2023-12-07)

(the post that started this page)

After refactoring my third program to use OOP this quarter, this be the wisdom:


If I'm starting a one-time simple project that looks like it doesn't need OOP - think hard, because often it does.

(Unless threads/parallelism are involved; then think harder.)

Crawling and converting and synchronicity (2023-12-08)

Context: UPCrawler & GBIF downloader

TL;DR: downloading bits and writing each one to disk is sometimes better than keeping them in a dataframe-like structure that gets written to disk in bulk. And the presence of a file on disk can be signal enough about its state, making separate data structures for tracking that unnecessary.

Background:

When downloading something big and made of many parts, my first instinct is/was to put it into pretty dataclass-like structures (maybe serializable through JSONWizard), collect everything, and write it down at the end.

If I think I need some intermediate results, I'd do checkpoints or something similar, usually via an ugly method on that class that does the file handling etc.

Often one can download the individual bit and write it to disk, maybe inside a folder. Then checking whether it has been downloaded is literally checking whether the file exists, which makes the files self-documenting in a small way.
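A minimal sketch of that pattern (the URLs, folder and naming scheme here are invented for illustration):

```python
from pathlib import Path

import requests

OUT_DIR = Path("downloads")
OUT_DIR.mkdir(exist_ok=True)


def download_one(url: str) -> Path:
    """Download one item to its own file, unless that file already exists."""
    # The file itself is the record of "this URL is already done"
    target = OUT_DIR / (url.rstrip("/").split("/")[-1] + ".html")
    if target.exists():
        return target
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    target.write_text(resp.text, encoding="utf-8")
    return target


for url in ["https://example.com/article-1", "https://example.com/article-2"]:
    download_one(url)
```

No checkpointing logic, no "state" dataclass: re-running the script just skips whatever is already on disk.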

(And more generally: previously, when writing certain converters and the second worst thing I have written in my life, I'd have dataclasses with different kinds of data plus separate boolean fields like has_X_data and such. I could have just used whether the data fields are None to signal whether they are there, instead of …that.)
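A toy comparison of the two (the field names are invented):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ArticleOld:
    """What I used to write: a flag that just mirrors the data field."""
    ru_text: Optional[str] = None
    has_ru_data: bool = False  # redundant bookkeeping, can drift out of sync


@dataclass
class Article:
    """The simpler version: the field being None *is* the flag."""
    ru_text: Optional[str] = None


a = Article(ru_text="some text")
if a.ru_text is not None:
    print("has RU data")
```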

Synchronicity and threads

Doing it like that means the parts can happily be parallelized or whatever, and downloaded separately.

In the UPCrawler, I was blocked by the need to add a language-independent tag to each article: a URI plus one to two translations. I wanted to get the entire chunk, gather all tag translations from it, label the chunks correctly, then serialize.

This is idiotic when I can just write the articles to disk with the info I already have and then run a separate script over them to gather all the tags. (Or gather the tags in parallel while the download is happening, but without letting the need to complete that block my download.)
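A sketch of the decoupled version, assuming a download_one() helper like the one above (the URLs are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

urls = [
    "https://example.com/article-1",
    "https://example.com/article-2",
]

# Step 1: fetch everything; each article lands in its own file, nothing blocks on tags
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(download_one, urls))

# Step 2 (a separate script or a later pass): gather tags from the saved files
for path in Path("downloads").glob("*.html"):
    html = path.read_text(encoding="utf-8")
    # ... parse the tags out of `html` here
```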

Shortcuts (2023-12-08)

Context: UPCrawler; a pattern I’ve been noticing.

Sitemaps instead of crawling archives

First I crawled and parsed pages like Архив 26 сентября 2023 года | Украинская правда (the archive page for 26 September 2023) to get the URIs of the articles published on that day, did permutations of each URI to get the other languages if any, and so got the list of article URIs to crawl.

Yesterday I realized that UPravda has sitemaps (https://www.pravda.com.ua/sitemap/sitemap-2023-04.xml.gz) and that I can use something like advertools to nicely parse them. advertools gave me back the data as a pandas DataFrame, leading me to the insight that I can analyze/parse/regex the URIs using pandas, including things like grouping by article ID to immediately get the 1..3 translations of each article, instead of tracking it inside a (guess what) data structure based on dataclasses.
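Something like this (a sketch assuming the library meant is advertools, and guessing at the URL structure for the article-ID regex):

```python
import advertools as adv

df = adv.sitemap_to_df("https://www.pravda.com.ua/sitemap/sitemap-2023-04.xml.gz")

# One row per URL in the sitemap; the URL itself lives in the 'loc' column.
# Pull the numeric article ID off the end of the path (my guess at the pattern).
df["article_id"] = df["loc"].str.extract(r"/(\d+)/?$")

# Grouping by article ID immediately gives the 1..3 language versions of each article
for article_id, group in df.dropna(subset=["article_id"]).groupby("article_id"):
    translations = group["loc"].tolist()
```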

This inspired me to look for better solutions to another problem plaguing me: tags, with their UK and RU translations.

Tags

I thought maybe I could check whether the website has a nice listing of all existing tags. And of course it does: Теги | Украинская правда (the tags page).

Damn.

Lesson in all that

Make an effort (really, an effort) to look at the forest, and for each problem think whether there's an easier way to do it than the one I started implementing without thinking, including whether there are already structures in place that I know about, but from other contexts.

I learned to look for solutions inside the Python stdlib; remembering to do this at the right moments should become easy as well.

I complicate everything I touch (Я ускладнюю все, до чого торкаюсь) (2023-12-08)

A lot of my code is more complex than needed, and too heavy for its own good/purpose. Connected to the above: think about (draw? architect?) a good design before I start writing the code. A sound structure from the beginning will remove many of the corner cases that otherwise end up as ugly code to maintain.

Use a real IDE as soon as needed (2024-01-19)

In the context of 240118-1516 RU interference masterarbeit task embeddings mapping, especially given that the models take a while to load.

  • A Jupyter notebook would have allowed me to experiment with the loaded models much better than a pdbpp interpreter/command line (see the sketch below).
  • PyCharm would have allowed me to debug inside gensim and transmat, and therefore understand them, much better and earlier.
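E.g. in a notebook the slow model load can live in its own cell and run once, while everything below it stays cheap to re-run (the path and query word here are placeholders):

```python
# --- cell 1: slow, run once ---
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("embeddings/uk.vec", binary=False)

# --- cell 2: cheap to re-run while experimenting ---
print(kv.most_similar("привіт", topn=5))
```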