In the middle of the desert you can say anything you want
git - How to cherry-pick multiple commits - Stack Overflow:
For one commit, you just paste its hash.
For multiple, you list them, in any order.
For a range, you write oldest..latest, but append ~, ~1, or ^ to the oldest commit to include it. Quoting directly from the SO answer:
# A. INCLUDING the beginning_commit
git cherry-pick beginning_commit~..ending_commit
# OR (same as above)
git cherry-pick beginning_commit~1..ending_commit
# OR (same as above)
git cherry-pick beginning_commit^..ending_commit
# B. NOT including the beginning_commit
git cherry-pick beginning_commit..ending_commit
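To sanity-check the inclusive syntax, here's a throwaway-repo demo (file names and commit messages are made up for the demo); the ^ range picks both commits onto the new branch:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email t@example.com && git config user.name t
echo base > f && git add f && git commit -qm base
echo A >> f && git commit -qam A
echo B >> f && git commit -qam B
A=$(git rev-parse HEAD~1)
B=$(git rev-parse HEAD)
git checkout -qb target HEAD~2   # new branch at "base"
git cherry-pick "$A^..$B"        # inclusive: applies both A and B
git log --format=%s              # B, A, base
```

With the plain "$A..$B" range, only B would have been picked.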
So, given that kubectl cp was never reliable for me, leading to many notes here, incl. 250115-1052 Rancher much better way to copy data to PVCs with various hacks, and issues like 250117-1127 Splitting files and 250117-1104 Unzip in alpine is broken, etc.
For many/large files, I'd have used rsync, for which SSH access is theoretically needed. Not quite!
rsync files to a kubernetes pod - Server Fault
ksync.sh
(EDIT Updated by ChatGPT to support files with spaces):
#!/bin/bash
if [ -z "$KRSYNC_STARTED" ]; then
    export KRSYNC_STARTED=true
    exec rsync --blocking-io --rsh "$0" "$@"
fi

# Running as --rsh
namespace=''
pod=$1
shift

# If the user uses pod@namespace, rsync passes args as: {us} -l pod namespace ...
if [ "X$pod" = "X-l" ]; then
    pod=$1
    shift
    namespace="-n $1"
    shift
fi

# Execute kubectl with proper quoting
exec kubectl $namespace exec -i "$pod" -- "$@"
Usage is basically the same as rsync:
./ksync.sh -av --info=progress2 --stats /local/dir/to/copy/ PODNAME@NAMESPACE:/target/dir/
(Or just --progress for per-file instead of total progress.)
rsync needs to be installed inside the pod (as well as locally) for this to work.
For flaky connections (TODO document better): -hvvrPt --timeout=1 and while ! rsync ..; do sleep 5; done
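The retry loop above can be wrapped into a small helper; retry and RETRY_DELAY are my names, not from the ServerFault answer, and the pod/paths in the usage line are placeholders. rsync's -P (--partial) means interrupted files resume on the next attempt:

```shell
# Re-run a command until it succeeds, sleeping between attempts.
retry() {
    until "$@"; do
        echo "retrying: $*" >&2
        sleep "${RETRY_DELAY:-5}"
    done
}

# Usage (placeholders): retry ./ksync.sh -hvvrPt --timeout=1 ./src/ POD@NS:/target/
```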
TL;DR: pipx inject target_app package_to_inject
pipx install psutil refuses: it's a library, not an app. But I need psutil for the MemoryGraph widget in (pipx install-ed) qtile, so a separate install wouldn't help anyway; qtile lives in its own pipx venv. pipx inject qtile psutil to the rescue:
❯ pipx inject qtile psutil
injected package psutil into venv qtile
done! ✨ 🌟 ✨
If no real config thingy is required/wanted, then this works (stolen from Parsing Dictionary-Like Key-Value Pairs Using Argparse in Python | Sumit’s Space)1:
import argparse


def parse_args():
    class ParseKwargs(argparse.Action):
        def __call__(self, parser, namespace, values, option_string=None):
            setattr(namespace, self.dest, dict())
            for value in values:
                key, value = value.split("=")
                getattr(namespace, self.dest)[key] = value

    parser = argparse.ArgumentParser()
    parser.add_argument("--no-pics", action="store_true", help="Predict only on videos")
    # ...
    parser.add_argument(
        "-k",
        "--kwargs",
        nargs="*",
        action=ParseKwargs,
        help="Additional inference params, e.g.: batch=128, conf=0.2.",
    )
    return parser.parse_args()
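A quick check of what the action actually produces, with an explicit argv list so it runs standalone (values stay strings; converting types is up to you):

```python
import argparse


# Same Action as above, exercised with an explicit argv list.
class ParseKwargs(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, dict())
        for value in values:
            key, value = value.split("=")
            getattr(namespace, self.dest)[key] = value


parser = argparse.ArgumentParser()
parser.add_argument("-k", "--kwargs", nargs="*", action=ParseKwargs)
args = parser.parse_args(["-k", "batch=128", "conf=0.2"])
print(args.kwargs)  # {'batch': '128', 'conf': '0.2'}
```

The resulting dict can then be splatted into a call, e.g. model.predict(**args.kwargs).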
interesting mix of topics on that website ↩︎
#!/bin/bash
BATTINFO=$(acpi -b)
LIM="00:15:00"
if echo "$BATTINFO" | grep -q Discharging && [[ $(echo "$BATTINFO" | cut -f 5 -d " ") < $LIM ]]; then
    # DISPLAY=:0.0 /usr/bin/notify-send "low battery" "$BATTINFO"
    dunstify "low battery" "$BATTINFO"
fi
For this, install dunst and run it on startup, then add a cron job for the script above.
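The cron job could look like the following; the script path, the 5-minute interval, and the uid-1000 bus path are assumptions for illustration (dunstify needs DISPLAY and the session D-Bus address to reach dunst from cron's bare environment):

```
*/5 * * * * DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus /home/me/bin/battery-check.sh
```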
E.g. to upload it somewhere that makes uploading large files hard.
See also: 250117-1104 Unzip in alpine is broken
# split
split -b 2G myfile.zip part_
# back
cat part_* > myfile.zip
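A quick round-trip sanity check of the recipe, with sizes shrunk from 2G so it runs anywhere (file names are stand-ins):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
head -c 1000000 /dev/urandom > myfile.zip   # stand-in for the real archive
split -b 300000 myfile.zip part_            # -> part_aa part_ab part_ac part_ad
cat part_* > rejoined.zip                   # part_* sorts in the right order
cmp myfile.zip rejoined.zip && echo OK      # byte-identical
```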
TL;DR: alpine's unzip is busybox's, and fails for me with
/data/inference_data # unzip rd1.zip
Archive: rd1.zip
unzip: short read
apk add unzip
installs the real (Info-ZIP) unzip I have on all other computers, and then it works.
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: NotIn
values:
- node_to_avoid
(operator: In
for the list of the allowed nodes)
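For completeness, the allow-list variant might look like this (node names are placeholders); In schedules only onto the listed nodes, while NotIn above avoids them:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - allowed-node-1
                - allowed-node-2
```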
Related: 250115-1238 Adding wandb to a CLI yolo run
References are surprisingly hard to find on the website: results - Ultralytics YOLO Docs
yolo detect train model=yolo11s.pt data=/data/data/data.yaml project=/data/project/ epochs=500 imgsz=640 device=0,1 name=yolo11s-aug-500epochs-full
YOLOv11 sets a default batch size of 16; one can set batch=-1 for it to automatically pick one using ~60% of GPU memory, or a fraction like batch=0.8 to target 80%.
To decrease verbosity in predictions, passing verbose=False to model.predict() (and .track()) works1.
Changing imgsz= to something lower may not necessarily make it faster: if a model was trained at a certain size, it may predict faster at that size (e.g. OSCF/TrapperAI-v02.2024 predicts at 40+ iterations per second when resized to 640 and ~31 when left at its default 1024).
Half precision (if supported by the GPU) is really cool! half=True makes stuff faster (no idea about prediction quality yet)
vid_stride predicts on every Nth video frame only; I was almost going to write that myself
All-in-all I like ultralytics/YOLO
I had exotic "not enough shared memory" crashes; thanks GC for giving me these lines, which I do not yet understand but which seem to work. Later I'll dig into why (TODO)
apiVersion: v1
kind: Pod
metadata:
name: CHANGEME
namespace: CHANGEME-ns
spec:
restartPolicy: Never
containers:
- name: sh-temp-yolo-container-3
image: ultralytics/ultralytics:latest
command: ["/bin/sh", "-c"]
args:
- "yolo detect train model=yolo11s.pt data=/data/data/data.yaml project=/data/project/ epochs=30 imgsz=640 device=0,1"
resources:
requests:
nvidia.com/gpu: "2" # GPUs for each training run
ephemeral-storage: "12Gi"
limits:
nvidia.com/gpu: "2" # same as requests nvidia.com/gpu
ephemeral-storage: "14Gi"
volumeMounts: # Mount the persistent volume
- name: data
mountPath: /data
- name: shared-memory
mountPath: /dev/shm
volumes:
- name: shared-memory
emptyDir:
medium: Memory
- name: data
persistentVolumeClaim:
claimName: sh-temp-yolo-pvc
Both requests AND limits have to be set, and shared memory mounted via volumeMounts + volumes.