Things that live here:

  1. Work log, where I note things I feel I'll have to Google later.
  2. Journal, very similar but about non-IT topics.
  3. Blog for rare longer-form posts (last one below).
  4. Link wiki (almost abandoned) and its WIP conversion to a static website.

Feel free to look at what you can find here and enjoy yourself.

Latest posts from the Work log

Day 1420 / Benchmark tasks for evaluation of language models

This is a sister page to 221119-2306 LM paper garden, which is the notes/links/refs/research for the paper - this page is the draft for the paper.


  • TODO
  • Lean on this paper1’s intro
  • TL;DR:
    • eval and benchmarks of LM
    • emphasis on non-English
    • emphasis on domain-specific


  • The field of Natural Language Processing (NLP) covers various topics connected with understanding and computationally processing human (natural) languages. 2

  • History

    • Initially, NLP relied on explicitly written rules, which was time-consuming, not generalizable, and inadequate for describing all language phenomena well.3
      • TODO better source
    • A Language Model (LM) is a model that assigns a probability to a sequence of words.
    • The use of probability has been an important development, initially using approaches pioneered by Markov(SOURCE) (Markov Chains)4 that were first applied by Shannon(SOURCE) to sequences in the English language, then with a variety of methods and techniques being proposed in the 1980s-1990s(SOURCE).
      • TODO all of this is in chapter three references of the book4, add real refs
    • In the past, machine learning approaches (naive Bayes, k-nearest-neighbors, conditional random fields (CRF), decision trees, random forests and support vector machines) were used for NLP2
    • But in the last years, increasing availability of both large amounts of text and of computational power, led to these approaches being almost completely replaced by neural models and deep learning.2
  • There are two basic ways to look at a language model2:

    • A probability distribution over words conditional on the preceding or surrounding ones
    • Determining what words mean, especially given that words derive a lot of meaning by the context in which they are placed.
  • And in both roles, LMs are a crucial part of the tasks for which they are used, be it image captioning, translation, summarization, named entity recognition etc.

  • Evaluation

    • Approaches to the evaluation of LMs can be roughly divided into two classes: extrinsic and intrinsic5.
      • Intrinsic evaluation measures the quality of the model independently from any applications
      • Extrinsic evaluation measures the performance of the model in downstream tasks.
    • Examples of intrinsic evaluation methods include both classic statistical ones like perplexity6 and more involved ones like LAMA probes7
    • Extrinsic evaluation can vary based on domain/purpose of LM, but it also includes various benchmarks like GLUE8 which include sets of tasks that each evaluate a separate facet of the LM.
  • Multi-lingual stuff:

    • In the field of NLP, the majority of research has historically been focused on the English language, though this has been changing in recent years9. This has been discussed both from a diversity/inclusion perspective10, and from the angle of language independence11.
    • The large amount of resources available for the English language incentivizes research on it, but “English is neither synonymous with nor representative of natural languages”12. Methods developed and tested on the English language won’t automatically generalize to ‘language in general’.13
    • This makes non-English, multi-lingual and cross-lingual benchmarks even more important.
      • TODO rephrase and source and finish.
  • TODO mention non-English and domain-specific stuff here

    • There are specific complexities with LMs needed for downstream tasks on special vocabularies, à la financial language.
      • Perplexity is hard with sparse vocabularies etc.
    • Another thing that’s getting more and more emphasis is non-English work, especially at the latest ACL conferences
      • Bender rule etc.
      • -> Benchmarks exist that attempt to evaluate and stimulate work on multiple languages
  • This paper reviews and systematizes the various approaches that exist to evaluate LMs, with a special focus on non-English and domain-specific LMs.

  • One6 TODO

Evaluation of LMs


Evaluation approaches can be divided into intrinsic and extrinsic. Extrinsic evaluation is usually preferred if the LM is trained for a specific downstream task - for example, if a LM is needed for token classification on financial documents, the logical approach would be to train a classifier with different LMs and compare the token classification scores of each. But this is not always possible or reasonable: the downstream task might be prohibitively large or slow to train just for evaluation purposes (or to allow iterating with the needed speed), or the model might be a generic one.

Intrinsic approaches measure the quality of the model independently of any application. They tend to be simpler than extrinsic ones, but have significant drawbacks.

Intrinsic evaluation

Perplexity and probabilistic metrics

LMs are commonly evaluated using probabilistic metrics. For character-based models, average cross-entropy is often used, for word-based models - perplexity.14

The perplexity of a LM on a test set is the inverse of the (geometric) average probability assigned to each word in the test set by the model.[^evallm] Formulated differently - it is the inverse probability of the test set, normalized by the number of words, and minimizing perplexity means increasing the probability of the test set according to the LM.4 It can be viewed as a measure of uncertainty when predicting the next token in a sentence15.
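As a sanity check of the definition above, perplexity can be computed directly from per-token probabilities (a minimal sketch; the probabilities below are made up):

```python
import math

def perplexity(token_probs):
    """Inverse geometric mean of the probabilities the model assigned
    to each token of the test set: exp of the average negative log-prob."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# A model that assigns uniform probability 1/4 to every token is
# exactly as uncertain as a fair choice between 4 options:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # → 4.0
```

A uniform model over a vocabulary of size V has perplexity exactly V, which is why perplexity is often read as an effective branching factor.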

It often14 correlates with downstream metrics, but not always, and has other limitations. (TODO: get the citation from the paper)

Perplexity can only be used if the model’s output is a probability distribution that sums to 1. So (for example) a GAN-based text generator returning discrete tokens won’t have a perplexity.14

Perplexity becomes less meaningful when comparing LMs trained on different vocabularies and datasets, but a number of standard benchmark datasets exist that can be used for comparison.7 Another notable limitation is context- and time-dependence. The One Billion Word Benchmark is a dataset derived from the WMT 2011 News Crawl Dataset and is still commonly used to compare LMs by reporting their perplexity on it.15 Even disregarding issues of factuality and media bias, it has been shown that such temporal data has limitations - rewarding the ability to generate text like news articles from 10 years ago penalizes models with more recent knowledge.15 Common Crawl is a dataset updated annually that can be used to train LMs - and it has been shown that LMs trained on Common Crawl perform worse on the One Billion Word Benchmark over time.

Intrinsic evaluation probing model knowledge

LAMA7, XLAMA, X-FACTR, MickeyProbe, Negated LAMA, Pro etc.5 are a family of probes to estimate the factual and common-sense knowledge of LMs. Facts are converted to fill-in-the-blank-style questions and the model is evaluated on its predictions of the blank tokens; various facets of the model are probed: common-sense knowledge, word understanding, negations etc.
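To illustrate the fill-in-the-blank idea (the templates, facts and function names here are invented for illustration, not taken from the actual LAMA data):

```python
from typing import List

# Hypothetical relation -> cloze template mapping.
TEMPLATES = {
    "capital_of": "The capital of {subj} is [MASK].",
    "born_in": "{subj} was born in [MASK].",
}

def to_cloze(subj: str, relation: str) -> str:
    """Turn a (subject, relation) fact into a fill-in-the-blank query."""
    return TEMPLATES[relation].format(subj=subj)

def precision_at_1(predictions: List[str], gold: List[str]) -> float:
    """Share of probes where the model's top token matches the answer."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

print(to_cloze("France", "capital_of"))  # → The capital of France is [MASK].
```

The model under test fills the [MASK] slot, and its top prediction is compared against the known object of the fact.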

Extrinsic evaluation and benchmarks

Extrinsic evaluation assesses the quality of the model on downstream tasks. For domain-specific models a task is usually known - for example, Finbert16 is a LM for the financial domain trained for (and evaluated on) financial sentiment analysis.

For task-independent evaluation, benchmarks17 are used. A benchmark provides a standard way of evaluating the model’s generalization ability across tasks.

A number of both general and domain-specific benchmarks exist, both monolingual and multi-lingual ones.

GLUE, SuperGLUE and general-purpose benchmarks

The General Language Understanding Evaluation (GLUE)8 was introduced to facilitate research on models that can execute a range of different tasks in different domains. It contains nine different natural language understanding tasks, built on established annotated datasets and selected to cover a diverse range of text genres, dataset sizes and degrees of complexity.

For example, CoLA focuses on whether a given sentence is grammatically correct, MRPC on whether two sentences are semantically equivalent, and WNLI is a reading comprehension task in which a system must find the word a pronoun refers to.

GLUE itself is model-agnostic, and can evaluate the outputs of any model capable of producing results on all nine tasks.
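Since GLUE only consumes model outputs per task, the headline number is essentially a macro-average of per-task metrics (the scores below are made up):

```python
# Made-up scores for three of the nine GLUE tasks.
task_scores = {"CoLA": 0.52, "MRPC": 0.88, "WNLI": 0.56}

# The overall score is the unweighted mean across tasks;
# tasks that report several metrics are averaged internally first.
glue_score = sum(task_scores.values()) / len(task_scores)
```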

According to the AI Index Report 202118, “Progress in NLP has been so swift that technical advances have started to outpace the benchmarks to test for them.”

A little over a year after the introduction of GLUE, the state-of-the-art GLUE score surpassed the level of non-expert humans, which resulted in the creation of SuperGLUE19 by the same authors. It has the same motivation and design as GLUE but improves upon it in several ways, most prominently with more challenging and diverse tasks.

Domain-specific benchmarks

Domain-specific benchmarks exist, such as5 (TODO add sources for each):

  • TweetEval and UMSAB, which have social-media-related tasks
  • CodeXGLUE with programming language generation and understanding tasks
  • BLUE, BLURB and CBLUE for the bio-medical domain

Multi-lingual, cross-lingual and non-English benchmarks


  • In 2011 Bender noticed that most papers don’t state the language they’re working on 11, and in a blog post from 2019 she reminded that “English is Neither Synonymous with Nor Representative of Natural Language” 912, and “Always state the language you’re working on, even if it’s English” became known as the “Bender rule”9.
  • Not naming the language studied (usually English) implies the methods developed could work on any other languages, as if English were a neutral and universal language.
  • Language independence and language representation20, and the availability of non-English and/or multilingual corpora, benchmarks and research in NLP. (TODO finish)

Non-English benchmarks

  • Benchmarks for languages other than English exist; for a list of non-English benchmarks, see5.
  • TODO - more here.

Multi-lingual benchmarks

Linguistic Code-switching21 is the phenomenon that happens when speakers alternate languages in the same utterance.






  1.  ↩︎
  2.  ↩︎
  3.  ↩︎
  4.  ↩︎
  5.  ↩︎
  6. Evaluation Metrics For Language Models ↩︎

  7. 1909.01066 Language Models as Knowledge Bases? ↩︎

  8. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding - ACL Anthology ↩︎

  9. Do we Name the Languages we Study? The #BenderRule in LREC and ACL articles - Inria - Institut national de recherche en sciences et technologies du numérique ↩︎

  10. 2004.09095 The State and Fate of Linguistic Diversity and Inclusion in the NLP World ↩︎

  11. That Bender 2011 paper ↩︎

  12. The Bender post ↩︎

  13.  ↩︎
  14.  ↩︎
  15. No News is Good News: A Critique of the One Billion Word Benchmark ↩︎

  16. 1908.10063 FinBERT: Financial Sentiment Analysis with Pre-trained Language Models ↩︎

  17. Challenges and Opportunities in NLP Benchmarking ↩︎

  18. 2103.06312 The AI Index 2021 Annual Report ↩︎

  19. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems ↩︎

  20. Linguistically Naïve != Language Independent: Why NLP Needs Linguistic Typology - ACL Anthology ↩︎

  21. LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation - ACL Anthology ↩︎

Day 1419 / LM paper notes

For the paper I’m writing, I’ll actually try to do a real garden thing. With leaves etc that get updated with new info, not chronologically like my current DTB notes.


Perplexity and intrinsic eval

  • Resources:
  • The above cites that’s longer and so much better!
  • Full link:
    • ![[Screenshot_20221119-233022.png]]
    • P 37 about test set needing to have enough statistical power to measure improvements
    • Sampling
    • Chapter 3 about Shakespeare vs WSJ and genre
    • 42: Smoothing
      • Unknown words so we don’t multiply 0 probs
      • 7 / 130 really nice basics of ml
    • Another take on the same, but love it
    • Links the RoBERTa paper about the connection between perplexity and downstream performance!
    • [[Screenshot_20221120-000131_Fennec.png]]
    • ![[Screenshot_20221119-235918_Fennec.png]]
    • If surprisal lets us quantify how unlikely a single outcome of a possible event is, entropy does the same thing for the event as a whole. It’s the expected value of the surprisal across every possible outcome — the sum of the surprisal of every outcome multiplied by the probability it happens
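That surprisal/entropy relationship fits in a few lines (a minimal sketch, in bits):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal of a single outcome, in bits."""
    return -math.log2(p)

def entropy(probs) -> float:
    """Expected surprisal: sum of each outcome's surprisal
    weighted by the probability it happens."""
    return sum(p * surprisal(p) for p in probs)

# A fair coin carries exactly one bit of uncertainty:
print(entropy([0.5, 0.5]))  # → 1.0
```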

  • Excellent about the drawbacks of perplexity:
    • First, as we saw in the calculation section, a model’s worst-case perplexity is fixed by the language’s vocabulary size. This means you can greatly lower your model’s perplexity just by, for example, switching from a word-level model (which might easily have a vocabulary size of 50,000+ words) to a character-level model (with a vocabulary size of around 26), regardless of whether the character-level model is really more accurate.

    • Two more
    • about perplexity and news cycle 6- TODO
    • The problem is that news publications cycle through viral buzzwords quickly — just think about how often the Harlem Shake was mentioned 2013 compared to now.

  • - about one million DS news benchmark


Interesting intrinsic eval




  • Much more detailed paper than the glue one!
  • More complex tasks since models better than people at easy ones
  • Goldmine of sources
  • At the end they list the excluded tasks + instructions from the tasks for humans!



  • FinBERT /
    • has other eng lang dataset
    • Discussion about cased etc
    • Eval on sentiment analysis, accuracy regression
    • Redundant content
  • NFinbert knows numbers, there are a lot of them in finance
  • “Context, language modeling and multimodal data on finance”
    • Models trained on mix better than in fin data alone
    • Really nice and involved and financial and I can’t go through it now
    • Almost exclusively sentiment analysis
  • NER on German financial text for anonymisation
    • BERT



Day 1411 / Enums in python - set by name and value

God I need to read documentation, all of it, including not-important sounding first sentences.

Previously: 220810-1201 Huggingface utils ExplicitEnum python bits showing me how to do str enums

…you can get members using both name and value.

enum — Support for enumerations — Python 3.11.0 documentation:

  • use call syntax to return members by value
  • use index syntax to return members by name
from enum import Enum

class MyEnum(str, Enum):
    IG2 = "val1"
    IG3 = "val2"

MyEnum("val1") == MyEnum["IG2"]  # True - lookup by value, then by name

Day 1390 / HF token-classification pipeline prediction text

Pipelines: in the predictions, p['word'] is not the exact string from the input text! It’s the recovered one from the subtokens - might have extra spaces etc. For the exact string the offsets should be used.
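A minimal sketch of the difference (the prediction dict is hand-written here to mimic the pipeline’s output fields, not produced by a real model):

```python
text = "Hugging Face is based in NYC."

# Shape of a single token-classification pipeline prediction:
# 'word' is re-assembled from subtokens and may differ from the input,
# while 'start'/'end' are character offsets into the original text.
pred = {"word": "NYC", "start": 25, "end": 28}

# The exact input substring comes from the offsets, not from pred["word"]:
exact = text[pred["start"]:pred["end"]]
print(exact)  # → NYC
```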

EDIT - I did another good deed today: Fix error/typo in docstring of TokenClassificationPipeline by pchr8 · Pull Request #19798 · huggingface/transformers

Day 1382 / pytorch dataloaders and friends

Pytorch has torchdata, roughly similar to what I used to know and love in Keras: Tutorial — TorchData main documentation

Day 1374 / Python raise_or_log function

Neat snippet I just wrote that will get rid of a lot of duplicated code:

import logging
from typing import Optional, Type

logger = logging.getLogger(__name__)


def exception_or_error(
	message: str,
	fail_loudly: Optional[bool] = False,
	exception_type: Optional[Type[Exception]] = ValueError,
) -> None:
	"""Log an error or raise an exception. Needed to control the behavior
	in production."""

	# Raise the given exception type...
	if fail_loudly:
		raise exception_type(message)
	# ...or just log the error and continue
	logger.error(message)


are_we_in_production = True

# will log or raise a ValueError based on the above
exception_or_error("File not found", fail_loudly=are_we_in_production)

# if raising something, will raise a KeyError
exception_or_error("Row not in db", fail_loudly=are_we_in_production,
				  exception_type = KeyError)

Day 1370

You can use screen or tmux for your normal editing things

This goes into “things you’re allowed to do” territory (previously: List of good things), but:

  • previously, screen/tmux’s use case was “ssh into a server far away and let things run even when your SSH session disconnects”
  • had two terminals open on a remote server, had to edit the exact two files every time, over days and disconnections
  • just realized that I can just have a screen session open with vim and the files I edit, and just attach to it next time I’m doing something on that server, whenever that is!

Using cloudflared tunnels as proxy in docker

image: cloudflare/cloudflared:latest
command: tunnel run
environment:
  - TUNNEL_TOKEN=my-super-secret-tunnel-token
restart: unless-stopped
network_mode: "host"

Then whatever can run in its network with the bridge driver, roughly:

networks:
  nextcloud:
    driver: bridge

and in the service itself:

    networks:
      - nextcloud
    ports:
      - "1234:80"

And then in the cloudflare zero trust UI add a tunnel from localhost:1234.

Neat thing is that tunnel type HTTP refers to the connection to the host running cloudflared, but the thing is accessible through cloudflare’s servers as both http and https. No need to manually do any certs stuff!

self-hosting with docker compose resources

frp proxy using docker (-compose)

Wanted to run frp’s client frpc with docker to forward the SSH port.

Main issue was binding to a port already open on the host, and one not controlled by a docker thing.

My first attempt led to this: “Error starting userland proxy: listen tcp4 bind: address already in use”

After looking around the Internet, found a solution.

Docker’s docker-compose.yml:

    image: chenhw2/frp
    restart: unless-stopped
    environment:
      - ARGS=frpc
    volumes:
      - ./conf/frpc.ini:/frp/frpc.ini
    network_mode: "host"
    ports:
      - "22:22"

The key being the “network_mode” part.

Neither frp server nor client configs needed anything special.

Strangely, I didn’t even need to set any capabilities like I did for dns:

    restart: always
    image: strm/dnsmasq
    volumes:
      - ./conf/dnsmasq.conf:/etc/dnsmasq.conf
    ports:
      - "53:53/udp"
    cap_add:
      - NET_ADMIN

Day 1368

Debian linux install hangs on configuring network + debugging linux install issues

  • Allegedly happens when the network is misconfigured.
    • Since a black-screen issue a while back, I religiously md5sum the ISOs; otherwise that would’ve been the prime suspect
  • In my case I had port forwarding and DMZ and ipv6 configured in the router, disabling all of that fixed the installation issues
  • To debug installation issues, <Ctrl-Shift-F2> to go to the tty and cat /var/log/syslog
    • less is not installed but nano is
    • tty4 has live running logs
      • that are nice for non-graphical install and “is it doing anything now?”

Relevant: 5.4. Troubleshooting the Installation Process

Burn iso onto usb with dd

I always look in zsh history for this string:

sudo dd if=/path/to/debian-live-11.5.0-amd64-cinnamon.iso of=/not/dev/sda bs=1M status=progress

/dev/sda is the usb drive and will of course be fully wiped; use not a partition like /dev/sdaX but the actual /dev/sda disk itself.

I specifically added /not/dev/sda at the beginning for systems where I haven’t set unset zle_bracketed_paste and that might press enter on paste, or for cases where I edit the .iso path but forget of=. That way I’m forced to think when editing of=.

Day 1366

Python typing annotating functions and callables

For functions/callables, Callable is not the entire story: you can annotate the arguments and returns values of these callables!

From mypy documentation:

The type of a function that accepts arguments A1, …, An and returns Rt is Callable[[A1, ..., An], Rt].

You can only have positional arguments, and only ones without default values, in callable types
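For example (the function names are mine, not from the mypy docs):

```python
from typing import Callable

def increment(n: int) -> int:
    return n + 1

def apply_twice(f: Callable[[int], int], x: int) -> int:
    """f is typed as '(int) -> int', i.e. Callable[[int], int]."""
    return f(f(x))

print(apply_twice(increment, 0))  # → 2
```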

Python blending abstractmethod and staticmethod (or other decorators)

If your @abstractmethod should also be a @staticmethod, you can happily blend both, as long as the @staticmethod (or other) decorator comes first.

In other words, @abstractmethod should always be the innermost decorator.1
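A quick sketch (class and method names invented):

```python
from abc import ABC, abstractmethod

class Codec(ABC):
    @staticmethod          # outer decorator
    @abstractmethod        # @abstractmethod stays innermost
    def encode(data: bytes) -> bytes:
        ...

class Identity(Codec):
    @staticmethod
    def encode(data: bytes) -> bytes:
        return data

print(Identity.encode(b"x"))  # → b'x'
```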

  1. abc — Abstract Base Classes — Python 3.10.7 documentation↩︎

Latest post from Blog

My custom keyboard layout with dvorak and LEDs


My keyboard setup has always been weird, and usually glued together with multiple programs. Some time ago I decided to re-do it from scratch and this led to some BIG improvements and simplifications I’m really happy about, and want to describe here.

Context: I regularly type in English, German, Russian, Ukrainian, and write a lot of Python code. I use vim, qutebrowser, tiling WMs and my workflows are very keyboard-centric.

TL;DR: This describes my current custom keyboard layout that has:

  • only two sub-layouts (latin/cyrillic)
  • the Caps Lock LED indicating the active one
  • Caps Lock acting both as Ctrl and Escape
  • things like arrow keys, backspace accessible without moving the right hand
  • Python characters moved closer to the main row

It looks like this1: kl_cut.png

and is available on Github.

How I got into custom keyboard layouts

First, one long summer, I switched to the Dvorak keyboard layout2 and loved it.

Then I saw Randall Munroe’s (of xkcd) Mirrorboard: A one-handed keyboard layout for the lazy – xkcd. The idea is that it’s easy to repeat with your left hand movements that you do with your right if they are mirrored. This works for blind typing too - if you type l with your right pinky finger, probably your left pinky finger ‘knows’ that reflex as well.

I loved the idea. My right hand usually has either a mouse or a cup of tea in it, and casual left-hand typing without needing to learn an entirely new layout sounded really interesting.

I decided to create such a mirrored layout for Dvorak.

This led me to the topic of customizing xkb keyboard layouts (the Arch wiki describes it very well).

At the end I did create a Dvorak Mirrorboard layout and used it more than expected (for example, image editing is easier if you don’t need to move your hand from the mouse).

But almost immediately I realized the potential of editing layouts and started to add things I needed, like Enter/Backspace, ümläüts and ß etc. - still mirrored, but now not a generic layout anymore. Needing a new name I decided on Pchr8board.

There were N iterations, here’s an old post about one of them: Pchr8board - a mirrored left-hand keyboard layout for Dvorak -

Then I kept adding stuff, in the process abandoning most left-hand features. Slowly we converged to a layout I liked.

Non-xkb weirdness

I had other non-standard changes I was really attached to:

  • Since forever I have my Caps Lock key remapped to Ctrl, which I strongly recommend to literally everyone. Ctrl is used often and that position is much easier to reach, and no one needs Caps Lock. An ugly xmodmap hack on autostart remapped both keys.
  • Caps-Lock-but-now-Ctrl, if released quickly, acted as Escape. Incredibly neat, for vim especially. I used xcape3 for this.

It all worked but not flawlessly

All together the setup was a net positive, but was very brittle.

Xcape is clearly abandoned, and neither xmodmap nor xcape work in Wayland. But there are far worse problems.

You need to run it manually on startup (could never get it to run automatically in a reliable way, believe me I tried) and every time you connect a new keyboard.4

Sometimes it took multiple attempts to get all parts working. And every time you run xmodmap it resets your layout and you need to re-run setxkbmap.

Not a hypothetical scenario

…Which you may not be able to do, because you’re stuck with a broken layout or Caps Lock on or a Ukrainian layout and no way to change it, because you can’t open a terminal and type a command to do that.

Also all the WM keybindings relying on the former Ctrl key are broken in the process.

You can GUESS where the Alt key is now and then try to get into a tty. Then your Ctrl and Esc are not where you are used to.

Long story short - it was worth the pain, but the pain was there. The setup was band aids on top of other band aids, some applied at the very beginning when I had no idea what I was doing but hey, it works.

One keyboard layout to rule them all

Then I finally did it all from scratch and for real.

The layout in all its glory


Only the changed keys are labelled, with the exception of the default characters when they help me to identify a key.

The keys are read like this: key_with_explanations.png

The key change is that the Left Alt button becomes a modifier key that makes more symbols/actions available. “Latch” is <Level3> and is located on Left Alt5.

For example, to get Ä you press <Shift+Latch+a>. For ä you just press <Latch+a>. A needs only Shift.

In the layout definition itself this is represented like this:

key <AC01> { [	    a,	A, adiaeresis,	Adiaeresis]	};

The Right Alt key still works like a normal Alt.


Notation that I made up6:

  • Written in full (‘Shift’ or ‘Left Alt’) or capitalized (<LALT>) are physical keys on the keyboard. Given as:
    • keylabel / what xkb calls them inside the layout file (<RCTRL>)
    • default Dvorak value (<c> refers to the key that produces an i in QWERTY or ш in Ukrainian/Russian).
  • Shift/LALT are the logical thing after all remappings and modifiers are applied
  • <Shift+q> are keybindings, with modifiers given by their logical/remapped values (Shift can be located anywhere on the keyboard as long as it works as a Shift), and the letters are the usual/normal/Level1 unchanged ones (<Shift+q> produces a Q, but alone the key <q> would be a lowercase q).

So <LCTRL> would be “left physical key on the keyboard with Ctrl written on it”, <Shift-Latch-g> would be “Press whatever keys / pedals / mouse buttons that are your Shift and Latch modifiers, then the key on the keyboard that in dvorak on normal systems results in a q appearing on the screen”.

Two languages instead of four and one LED

I created two layouts, v6 the latin one with umlauts and everything else, and ruua, that contains both Russian and Ukrainian characters in the same layout.

Pressing the right Control key once changes the layout:

	key  <RCTL> {	[ISO_Next_Group]	};

Having only two layouts means you never have to guess which one comes next or set up indicators in the taskbar. You just let your muscle memory automatically do its thing.7

But unlocking the laptop was still a pain. You don’t know the language you were typing in when you locked it, and things like i3lock don’t tell you the layout by default - and you never know if a wrong password is a typo or a wrong layout.

The grp_led option allows you to use keyboard LEDs as indicators.

setxkbmap -option -option 'grp_led:caps' v6,ruua

Now anytime I’m typing in Russian or Ukrainian the Caps Lock LED is on, regardless of what is shown on the display. Pressing RCTRL changes the layout and makes the LED turn off, and you know it worked.

Custom modifiers defined in the layout itself

No xmodmap anymore! Caps Lock is now Ctrl, with Latch it becomes an Escape (and the former Ctrl button is a new modifier key Hyper_L, guaranteed not to collide with anything).

That took time to get right; the key step was making Caps Lock a four-level key5, after which we can define what happens to it with Level3/Latch/<LALT>:

key <CAPS> { type[Group1] = "FOUR_LEVEL", symbols[Group1] = [ Control_L, Control_L, Escape, NoSymbol] };
modifier_map Control { <CAPS> };

For more, look into modifier_map8 and real/virtual modifiers9 on the Arch Linux Wiki.

Arrow keys and Backspace easy to reach


Shown in purple, directly in the right hand resting position:

  • Arrow keys (CHTN is the new WASD!).10
  • <Backspace> and <Delete>!
key <AD10> { [	    l,	L, BackSpace, Delete		]	};
key <AC07> { [	    h,	H,	Left,	Left		]	};
key <AC08> { [	    t,	T,	Down,	Down   ]	};
key <AC09> { [	    n,	N,	Right,	Right		]	};

Being able to quickly delete text with my ring finger, without pausing my typing to reach the Backspace key, feels as good as it sounds.

Best thing, all this works with keyboard shortcuts! <Ctrl-Alt-R> deletes the entire previous word, etc.

Programming features

Mostly the improvements cluster in two areas:

  • Move all brackets closer to resting position.
  • No Shift for frequent characters
    • Python and programming: +, -, =
    • Vim and vim-like things: :!
      • ; now needs a Shift, a sacrifice I’m ready to make.

Left Alt as modifier key works very well for them - it’s easier to reach for my left thumb than Shift was for any finger ever.

Some redundancy and left-hand features

There are two additional Enter keys, one on Space and the other one under Escape. Both closer than the real one, and the latter needs only the left hand. (I found I need a left-hand Enter more often than any other.)

On that same tilde key there’s a Compose key too, which allows to type some exotic characters that are used too rarely to get their own key.

There is also more than one way to do slashes, this mostly has to do with old layouts I had and still remember if I’m tired or stressed.


After you read ArchWiki’s Precautions and preparations and assuming you need both the latin and cyrillic layouts:

  • Copy the source files to /usr/share/X11/xkb/symbols/. (Or maybe create a symlink to a version-controlled version of the layout, then you can do your modifications and test them more easily.)
  • Name them something reasonable, the file name will be the name used by setxkbmap to refer to the layout.11

For the full experience

  1. Assuming the layouts are in /usr/share/X11/xkb/symbols/v6 and /usr/share/X11/xkb/symbols/ruua, run:
    setxkbmap -option -option 'grp_led:caps' v6,ruua
  2. If it works, add that command to autostart.

Light-mode experience

If you don’t want to go all-out:

  1. Run setxkbmap -option us, now you have it in your terminal history
  2. setxkbmap -option -option 'grp_led:caps' v6,us would give you the new layout and on <RCTRL> you get a standard QWERTY one.
  3. If something goes wrong, use the arrow keys to find the command setxkbmap -option us and press Enter to run it, and you’re back in known territory.

The layouts definitions

The sources, .json and the pictures are all available on Github. Pasting them below too for completeness and redundancy.

View the sources of both layouts


// My current layout, no connection to dvorak-mirrorboard anymore

default  partial alphanumeric_keys modifier_keys
xkb_symbols   "sh" {

	name[Group1] = "SH Custom layout";

	// Using L-Alt as modifier instead of Caps lock.
	key <LALT> { type[Group1] = "ONE_LEVEL", symbols[Group1] = [ ISO_Level3_Shift ] };

	// Mod+Space is return
	// TODO
	key <SPCE> { [ space, space, Return ] };

	// Bsp, Enter, **Compose Key **
	key <TLDE> {	[     BackSpace,	Multi_key,	Return,	 NoSymbol]	};

	// Tab, LTab, /, b\

	key  <TAB> {	[ Tab,	backslash, slash, NoSymbol]	};

	// Switch groups by RCTL
	key  <RCTL> {	[ISO_Next_Group]	};

	// Caps is Ctrl, ? <Escape> ?
	// Mapping Escape to Caps+Shift doesn't work for some reason
	key <CAPS> { type[Group1] = "FOUR_LEVEL", symbols[Group1] = [ Control_L, Control_L, Escape, NoSymbol] };
    modifier_map Control { <CAPS> };

	key <LCTL> { type[Group1] = "ONE_LEVEL", symbols[Group1] = [Hyper_L] };
	modifier_map Mod3 { Hyper_L };


	//// FIRST ROW 
	// '"`?
	key <AD01> { [  apostrophe,	quotedbl, quoteleft, NoSymbol] };
	// ,<[?
	key <AD02> { [	comma,	less,   bracketleft, NoSymbol] };
	// .>]?
	key <AD03> { [      period,	greater, bracketright, NoSymbol] };

	key <AD04> { [	    p,	P, asciitilde, NoSymbol		]	};
	key <AD05> {
		[y,	Y, f, F],
		[a, a, a, a]
	};

	// Umlauts
	key <AC01> { [	    a,	A, adiaeresis,	Adiaeresis]	};
	key <AC02> { [	    o,	O, odiaeresis,	Odiaeresis]	};
	key <AC03> { [	    e,	E, ediaeresis,	Ediaeresis]	};
	key <AC04> { [	    u,	U, udiaeresis,	Udiaeresis]	};
	key <AC05> { [	    i,	I, d, D		]	};

	key <AB01> { [   colon,	semicolon,z, Z] };
	key <AB02> { [	    q,	Q, v, V		]	};
	key <AB03> { [	    j,	J, w, W		]	};
	key <AB04> { [	    k,	K, m, M		]	};
	key <AB05> { [	    x,	X, b, B		]	};

	key <AE01> {	[	  1,	exclam,		NoSymbol,	NoSymbol	]	};

	// 2@<{
	key <AE02> {	[	  2,	at,		less,	NoSymbol	]	};
	// 3#>}
	key <AE03> {	[	  3,	numbersign,	greater,	NoSymbol	]	};
	key <AE04> {	[	  4,	dollar,		EuroSign,	NoSymbol	]	};
	key <AE05> {	[	  5,	percent,	NoSymbol,	NoSymbol	]	};

	//// Backspace, arrow keys, ...
	// TODO 
	// key <AD07> { [	    g,	G, Prior, NoSymbol		]	};
	key <AD07> { [	    g,	G, parenleft, braceleft		]	};
	key <AD08> { [	    c,	C,	Up,	 Up	]	};
	key <AD09> { [	    r,	R,	parenright,	braceright		]	};
	// key <AD09> { [	    r,	R,	Next,	Next		]	};
	key <AD10> { [	    l,	L, BackSpace, Delete		]	};
	key <AC07> { [	    h,	H,	Left,	Left		]	};
	key <AC08> { [	    t,	T,	Down,	Down   ]	};
	key <AC09> { [	    n,	N,	Right,	Right		]	};

	key <AD06> { [	    f,	F  		]	};
	// Slash and Backslash
	key <AD11> { [	slash,	question, backslash, NoSymbol	]	};
	key <AD12> { [	equal,	plus		]	};

	// TODO
	key <AC06> { [	    d,	D, NoSymbol, NoSymbol		]	};
    key <AC10> { [	    s,	S,	ssharp,	ssharp		]	};
	key <AC11> { [	minus,	underscore	]	};

	key <AB06> { [	    b,	B		]	};
	key <AB07> { [	    m,	M		]	};
	key <AB08> { [	    w,	W		]	};
	key <AB09> { [	    v,	V		]	};
	key <AB10> { [	    z,	Z		]	};

	// +|\? - the key that by default has only backslash+bar
	key <BKSL> { [  plus,  bar, backslash, NoSymbol             ]       };

	key <AE06> {	[	  6,	asciicircum	]	};
	key <AE07> {	[	  7,	ampersand	]	};
	key <AE08> {	[	  8,	asterisk	]	};
	key <AE09> {	[	  9,	parenleft	]	};
	key <AE10> {	[	  0,	parenright	]	};
	key <AE11> {	[     bracketleft,	braceleft	]	};
	key <AE12> {	[     bracketright,	braceright		]	};
};


The Russian-Ukrainian layout; I adapted an existing one I found:

// Keyboard layouts for Russia.
// AEN <>
// 2001/12/23 by Leon Kanter <>
// 2005/12/09 Valery Inozemtsev <>
// 2018/07/15 @a13 (a.k.a. @dbvvmpg) and Stepanenko Andrey <>
// 2021 - Adapted to contain Ukrainian characters -

// Windows layout
default  partial alphanumeric_keys
xkb_symbols "winkeys" {

    include "ruua(ruua)"
    name[Group1]= "Russian";

    key <AE03> { [           3,  numerosign  ] };
    key <AE04> { [           4,   semicolon  ] };
    key <AE05> { [           5,     percent  ] };
    key <AE06> { [           6,       colon  ] };
    key <AE07> { [           7,    question  ] };
    key <AE08> { [           8,    asterisk, U20BD  ] };

    key <AB10> { [      period,       comma  ] };

    // SH -- now adding the bksp and stuff and removing the Enter thing.
	key <SPCE> { [ space ] };
	// Mod+Tab gives a slash, which I use often (searching etc.)
	// Mod+Shift+Tab gives an umlaut on the next character
};

hidden partial alphanumeric_keys
xkb_symbols "ruua" {

    key <AE01> { [           1,      exclam  ] };
    key <AE02> { [           2,    quotedbl  ] };
    key <AE03> { [           3,  numbersign  ] };
    key <AE04> { [           4,    asterisk  ] };
    key <AE05> { [           5,       colon  ] };
    key <AE06> { [           6,       comma  ] };
    key <AE07> { [           7,      period  ] };
    key <AE08> { [           8,   semicolon  ] };
    key <AE09> { [           9,   parenleft  ] };
    key <AE10> { [           0,  parenright  ] };
    key <AE11> { [       minus,  underscore  ] };
    key <AE12> { [       equal,        plus  ] };
    key <BKSL> { [   slash,         backslash  ] };

    key <AB10> { [       slash,    question  ] };
    key <LSGT> { [       slash,         bar  ] };

    key <TLDE> { [       Cyrillic_io,	apostrophe,	U02BC,       Cyrillic_IO  ] };
    key <AD01> { [   Cyrillic_shorti,   Cyrillic_SHORTI  ] };
    key <AD02> { [      Cyrillic_tse,      Cyrillic_TSE  ] };
    key <AD03> { [        Cyrillic_u,        Cyrillic_U  ] };
    key <AD04> { [       Cyrillic_ka,       Cyrillic_KA  ] };
    key <AD05> { [       Cyrillic_ie,       Cyrillic_IE] };
    key <AD06> { [       Cyrillic_en,       Cyrillic_EN  ] };
    key <AD07> { [      Cyrillic_ghe,      Cyrillic_GHE  ] };
    key <AD08> { [      Cyrillic_sha,      Cyrillic_SHA  ] };
    key <AD09> { [    Cyrillic_shcha,    Cyrillic_SHCHA  ] };
    key <AD10> { [       Cyrillic_ze,       Cyrillic_ZE  ] };
    key <AD11> { [       Cyrillic_ha,       Cyrillic_HA  ] };
    key <AD12> { [ Cyrillic_hardsign,	Cyrillic_HARDSIGN,	Ukrainian_yi,	Ukrainian_YI] };

    key <AC01> { [       Cyrillic_ef,       Cyrillic_EF  ] };
    key <AC02> { [     Cyrillic_yeru,     Cyrillic_YERU,	Ukrainian_i,	Ukrainian_I] };
    key <AC03> { [       Cyrillic_ve,       Cyrillic_VE  ] };
    key <AC04> { [        Cyrillic_a,        Cyrillic_A  ] };
    key <AC05> { [       Cyrillic_pe,       Cyrillic_PE  ] };
    key <AC06> { [       Cyrillic_er,       Cyrillic_ER  ] };
    key <AC07> { [        Cyrillic_o,        Cyrillic_O  ] };
    key <AC08> { [       Cyrillic_el,       Cyrillic_EL  ] };
    key <AC09> { [       Cyrillic_de,       Cyrillic_DE  ] };
    key <AC10> { [      Cyrillic_zhe,      Cyrillic_ZHE  ] };
    key <AC11> { [        Cyrillic_e,        Cyrillic_E,	Ukrainian_ie,	Ukrainian_IE] };

    key <AB01> { [       Cyrillic_ya,       Cyrillic_YA  ] };
    key <AB02> { [      Cyrillic_che,      Cyrillic_CHE  ] };
    key <AB03> { [       Cyrillic_es,       Cyrillic_ES  ] };
    key <AB04> { [       Cyrillic_em,       Cyrillic_EM  ] };
    key <AB05> { [        Cyrillic_i,        Cyrillic_I] };
    key <AB06> { [       Cyrillic_te,       Cyrillic_TE  ] };
    key <AB07> { [ Cyrillic_softsign, Cyrillic_SOFTSIGN  ] };
    key <AB08> { [       Cyrillic_be,       Cyrillic_BE  ] };
    key <AB09> { [       Cyrillic_yu,       Cyrillic_YU  ] };

    include "kpdl(comma)"
};
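For completeness, a sketch of how files like these get picked up. The two symbols files above would typically be dropped into the X11 symbols directory (commonly /usr/share/X11/xkb/symbols/, an assumption about a stock setup) under the names v6 and ruua, after which they can be made the system default via a Debian-style config fragment; none of the paths below come from the post itself:

```shell
# /etc/default/keyboard (Debian/Ubuntu style) -- an assumed setup,
# adjust file names and paths for your distro.
XKBLAYOUT="v6,ruua"        # first group: the custom layout, second: ru/ua
XKBOPTIONS="grp_led:caps"  # Caps Lock LED indicates the active group
```

The session-level alternative is simply running the setxkbmap command from the top of the post by hand.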

Parting thoughts

Custom keyboard layouts for the win

Tweaking something as fundamental as a keyboard layout to your own purposes is strangely empowering. And adapting to a new layout is like learning a foreign language - if you’ve done it at least once in your life, the next ones come much easier. Especially if it’s small things like moving a key, or just adding more symbols to an existing layout.

Wouldn’t recommend it to everyone, though.

One thing I would recommend to everyone without exception is swapping the Ctrl and Caps Lock keys. It can be done easily on any OS, including Linux, where it doesn’t even require editing any layout files12.
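As a minimal sketch of that swap on Linux: XKB already ships stock options for it, so no layout editing is needed. These one-liners apply to the current session only; put them in a startup file such as ~/.xprofile (an assumption about your setup) to make them stick:

```shell
# Swap Ctrl and Caps Lock using a stock XKB option:
setxkbmap -option ctrl:swapcaps
# Or make Caps Lock an additional Ctrl instead:
setxkbmap -option ctrl:nocaps
```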

Interesting resources on the topic

Thank you for reading!

  1. Visualization done with the excellent Keyboard Layout Editor ↩︎

  2. Dvorak keyboard layout - Wikipedia ↩︎

  3. xcape: Linux utility to configure modifier keys to act as other keys when pressed and released on their own. ↩︎

  4. Can be automated of course, but all tutorials I found gave me the impression it’s a worse can of worms than the one I already had, and I never tried. ↩︎

  5. We set <LALT> as a one-level key, that is not affected by anything. (If a key can be changed only by Shift it’d be two-level, for example.) And we make it act as Level3 modifier, basically another kind of Shift, closer to AltGr originally (and still in countries like Germany).

    key <LALT> { type[Group1] = "ONE_LEVEL", symbols[Group1] = [ ISO_Level3_Shift ] };

    Any keys that accept it have to also accept Shift and therefore have to be at least four-level. ↩︎

  6. I don’t feel like doing the “(keycode, group, state) → keysym” thing in this post; it’s not meant to be a tutorial ↩︎

  7. The beauty of two layouts instead of more can only be appreciated by someone who constantly had to switch between multiple ones. ↩︎

  8. X keyboard extension - ArchWiki ↩︎

  9. X keyboard extension - ArchWiki ↩︎

  10. Arrow keys - Wikipedia ↩︎

  11. v6.cpp was born from my wish to have syntax highlighting in vim, it being late and a .cpp extension being the easiest way to get some adequate highlighting going. ↩︎

  12. keyboard - How do I remap the Caps Lock and Ctrl keys? - Ask Ubuntu ↩︎