In the middle of the desert you can say anything you want
Finally got them! Or maybe it just wasn't clear in older versions of the docs.
Lazy objects — Qtile 0.1.dev50+g2b2cd60.d20220610 documentation
Option 1:

```python
from libqtile.config import Key
from libqtile.lazy import lazy

@lazy.function
def my_function(qtile):
    ...

keys = [
    Key(
        ["mod1"], "k",
        my_function
    )
]
```
Option 2:

```python
from libqtile.config import Key
from libqtile.lazy import lazy
from libqtile.log_utils import logger

def multiply(qtile, value, multiplier=10):
    logger.warning(f"Multiplication results: {value * multiplier}")

keys = [
    Key(
        ["mod1"], "k",
        lazy.function(multiply, 10, multiplier=2)
    )
]
```
Or the decorated version:

```python
from libqtile.config import Key
from libqtile.lazy import lazy
from libqtile.log_utils import logger

@lazy.function
def multiply(qtile, value, multiplier=10):
    logger.warning(f"Multiplication results: {value * multiplier}")

keys = [
    Key(
        ["mod1"], "k",
        multiply(10, multiplier=2)
    )
]
```
Toggle touchpad (enable/disable) in Linux with xinput.:
```shell
if xinput list-props 13 | grep "Device Enabled ([[:digit:]]\+):\s*1" >/dev/null; then xinput disable 13 && notify-send -u low -i mouse "Trackpad disabled"; else xinput enable 13 && notify-send -u low -i mouse "Trackpad enabled"; fi
```
With 13 being the xinput id of the touchpad.
My old enable/disable oneliners have bits on how to find the ID:
'bash -c "xinput | grep TouchPad | ag -o "[0-9][0-9]" | xargs xinput disable"'
That said, I don’t remember the ID ever being anything else than 13.
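Since I'm in qtile land anyway, the same idea could be done in Python instead of hardcoding 13 — a sketch, assuming `xinput` is on the PATH and the device name contains "touchpad" (the parsing and device names are my assumptions):

```python
import re
import subprocess
from typing import Optional

def find_touchpad_id(xinput_output: str) -> Optional[int]:
    """Parse `xinput list` output and return the id of the first touchpad-like device."""
    for line in xinput_output.splitlines():
        if re.search(r"touchpad", line, re.IGNORECASE):
            match = re.search(r"id=(\d+)", line)
            if match:
                return int(match.group(1))
    return None

def toggle_touchpad() -> None:
    """Enable/disable the touchpad, mirroring the shell one-liner above."""
    output = subprocess.run(["xinput", "list"], capture_output=True, text=True).stdout
    dev_id = find_touchpad_id(output)
    if dev_id is None:
        return
    props = subprocess.run(["xinput", "list-props", str(dev_id)],
                           capture_output=True, text=True).stdout
    enabled = re.search(r"Device Enabled \(\d+\):\s*1", props) is not None
    subprocess.run(["xinput", "disable" if enabled else "enable", str(dev_id)])
    subprocess.run(["notify-send", "-u", "low", "-i", "mouse",
                    f"Trackpad {'disabled' if enabled else 'enabled'}"])
```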
WT told me about these:
I had this:
```shell
tm_old() {
    local DATE=$(date +'%H:%M:%S %d/%m')
    local N="$1"; shift
    (utimer -c $N && zenity --info --title="Time's Up" --text="${*:-BING} \n\n $DATE")
}
```
I used it as tm 3m message and got a popup in three minutes saying "message". Used it for reminders of random stuff like "turn off the stove" or "stop doing X".
Now utimer seems to be dead, and qtile makes the alert popups appear in the wrong workspace group, usually the one I wrote the command in instead of the currently active one.
Today I solved the last part by switching to notify-send. Found dunst, added to startup, now notify-send creates nice visible alerts:
It seems to support a lot of cool stuff like progress bars and images: dunst-project/dunst: Lightweight and customizable notification daemon
Dunst - The Blue Book - nice post, and woohooo a digital garden!
Useful commands:
- dunstctl close-all
- dunstctl history-pop

Added the first one as a qtile shortcut:
```python
Key(
    [mod, ctrl],
    "h",
    lazy.spawn(cmd.dunst_clearall),
    desc="Clear notifications",
),
```
There’s also dunstify which is a notify-send with more options.
Changed the zsh command to use notify-send. Everything works nicely now.
If utimer stops working I'll prolly write a python script that does a countdown1 and then a configured notification/action/…, without relying on .zshrc aliases and bash functions. We'll see.
Or use existing solutions: alexwlchan/timers: A simple command-line stopwatch and countdown clock ↩︎
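Such a script could be as small as this sketch (the duration parsing and the notify-send invocation are my assumptions, not a finished tool):

```python
import re
import subprocess
import sys
import time

UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_duration(spec: str) -> int:
    """Turn '3m', '90s' or '1h' into seconds; a bare number means seconds."""
    match = re.fullmatch(r"(\d+)([smh]?)", spec)
    if not match:
        raise ValueError(f"can't parse duration: {spec!r}")
    value, unit = match.groups()
    return int(value) * UNITS.get(unit or "s", 1)

def main() -> None:
    # usage: tm.py 3m turn off the stove
    seconds = parse_duration(sys.argv[1])
    message = " ".join(sys.argv[2:]) or "BING"
    time.sleep(seconds)
    subprocess.run(["notify-send", "-u", "critical", "Time's Up", message])

if __name__ == "__main__":
    main()
```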
Reading Creating and updating figures in Python.
```python
fig.update_layout(title_text="update_layout() Syntax Example",
                  title_font_size=30)

fig.update_layout(title_text="update_layout() Syntax Example",
                  title_font=dict(size=30))

fig.update_layout(title=dict(text="update_layout() Syntax Example"),
                  font=dict(size=30))

fig.update_layout({"title": {"text": "update_layout() Syntax Example",
                             "font": {"size": 30}}})

fig.update_layout(title=go.layout.Title(text="update_layout() Syntax Example",
                                        font=go.layout.title.Font(size=30)))
```
<br> and <br /> work, <br/> doesn't.

```python
fig.update_layout(margin=dict(l=20, r=20, t=20, b=20))
```
And I just want to mention the very special design decision of having arguments named tickfont and title_font (with an underscore) in the same function, both taking identical arguments.
Really nice google colab showing more advanced datasets bits in addition to what’s on the label:
Custom Named Entity Recognition with BERT.ipynb - Colaboratory
Pasting this example from there:
```python
import numpy as np
import torch
from torch.utils.data import Dataset

# labels_to_ids maps label strings to integer ids; defined earlier in the notebook

class dataset(Dataset):
    def __init__(self, dataframe, tokenizer, max_len):
        self.len = len(dataframe)
        self.data = dataframe
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __getitem__(self, index):
        # step 1: get the sentence and word labels
        sentence = self.data.sentence[index].strip().split()
        word_labels = self.data.word_labels[index].split(",")

        # step 2: use tokenizer to encode sentence (includes padding/truncation up to max length)
        # BertTokenizerFast provides a handy "return_offsets_mapping" functionality for individual tokens
        encoding = self.tokenizer(sentence,
                                  is_pretokenized=True,
                                  return_offsets_mapping=True,
                                  padding='max_length',
                                  truncation=True,
                                  max_length=self.max_len)

        # step 3: create token labels only for first word pieces of each tokenized word
        labels = [labels_to_ids[label] for label in word_labels]
        # code based on https://huggingface.co/transformers/custom_datasets.html#tok-ner
        # create an empty array of -100 of length max_length
        encoded_labels = np.ones(len(encoding["offset_mapping"]), dtype=int) * -100

        # set only labels whose first offset position is 0 and the second is not 0
        i = 0
        for idx, mapping in enumerate(encoding["offset_mapping"]):
            if mapping[0] == 0 and mapping[1] != 0:
                # overwrite label
                encoded_labels[idx] = labels[i]
                i += 1

        # step 4: turn everything into PyTorch tensors
        item = {key: torch.as_tensor(val) for key, val in encoding.items()}
        item['labels'] = torch.as_tensor(encoded_labels)

        return item

    def __len__(self):
        return self.len
```
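The step-3 logic is easier to see in isolation with a hand-made offset mapping (made-up values, no tokenizer needed): only tokens whose offsets start at 0 and end past 0 are a word's first piece and get a real label; special tokens and continuation pieces keep -100.

```python
import numpy as np

def align_labels(offset_mapping, labels):
    """Give each token whose offsets are (0, non-zero) — a word's first piece —
    the next word label; everything else stays -100 (ignored by the loss)."""
    encoded = np.ones(len(offset_mapping), dtype=int) * -100
    i = 0
    for idx, (start, end) in enumerate(offset_mapping):
        if start == 0 and end != 0:
            encoded[idx] = labels[i]
            i += 1
    return encoded.tolist()

# [CLS] "play" "##ing" "golf" [SEP] — two words with labels 5 and 2
offsets = [(0, 0), (0, 4), (4, 7), (0, 4), (0, 0)]
print(align_labels(offsets, [5, 2]))  # → [-100, 5, -100, 2, -100]
```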
For aligning tokens, there’s Code To Align Annotations With Huggingface Tokenizers. It has a repo: LightTag/sequence-labeling-with-transformers: Examples for aligning, padding and batching sequence labeling data (NER) for use with pre-trained transformer models
Also the official tutorial (Token classification) has a function to do something similar:
```python
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:  # Set the special tokens to -100.
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:  # Only label the first token of a given word.
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs
```
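Same rule, seen via word_ids with hand-made values (no tokenizer involved): word_ids maps each token to its source word (None for special tokens), and only the first token of each word gets the word's label.

```python
def align_with_word_ids(word_ids, word_labels):
    """Label the first token of each word; special tokens and repeats get -100."""
    label_ids = []
    previous = None
    for word_idx in word_ids:
        if word_idx is None:
            label_ids.append(-100)
        elif word_idx != previous:
            label_ids.append(word_labels[word_idx])
        else:
            label_ids.append(-100)
        previous = word_idx
    return label_ids

# tokens: [CLS] play ##ing golf [SEP] → words 0 and 1, labels 5 and 2
print(align_with_word_ids([None, 0, 0, 1, None], [5, 2]))  # → [-100, 5, -100, 2, -100]
```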
git rebase -i SHA_of_commit_to_delete^ drops you into the usual screen; there you can change pick to drop in the first line (or any others) to just delete that commit.
Generally, On undoing, fixing, or removing commits in git seems like The README for that.
- git branch -d some-branch deletes a local branch
- git push origin --delete some-branch deletes a remote branch

(as usual, remembering that branches are pointers to commits)
Changing the timeout delay for wrong logins on linux has a lot of details, in my case the TL;DR was:
- /etc/pam.d/login: change the number (in microseconds);
- /etc/pam.d/common-auth: add nodelay to: auth [success=1 default=ignore] pam_unix.so nullok_secure nodelay

The second one also works for everything inheriting that, which is a lot.
debugging - I have a hardware detection problem, what logs do I need to look into? - Ask Ubuntu:
Then, causing the problem to happen, and listing the system's logs in reverse order of modification time:

- ls -lrt /var/log,
- tail -n 25 on recently modified log files (for reasonable values of 25), and
- dmesg.

Read, wonder, think, guess, test, repeat as needed.
Causing the problem and then looking at the recently modified logs is common sense but brilliant.
And saving ls -lrt as “list by modification time”.
-t is “sort by modification time” and is easy to remember.
When debugging an issue I had with my monitor, I found a mention of inxi, which seems to colorfully output basic system (incl. hardware) info.
The post asked for inxi -SMCGx; inxi's help told me inxi -F is the fullest possible output.
Neat!