In the middle of the desert you can say anything you want
Edited my “someday” report:
report.sd.filter=status:pending sprint:s sprint.isnt:srv
sprint:s seems to catch srv too, which I don’t want; with sprint.isnt:srv it doesn’t anymore. Taskwarrior - FAQ has the list of such attribute modifiers.
Attribute modifiers make filters more precise. Supported modifiers are:
Modifiers          Example             Equivalent           Meaning
-----------------  ------------------  -------------------  -------------------------
                   due:today           due = today          Fuzzy match
not                due.not:today       due != today         Fuzzy non-match
before, below      due.before:today    due < today          Exact date comparison
after, above       due.after:today     due > today          Exact date comparison
none               project.none:       project == ''        Empty
any                project.any:        project !== ''       Not empty
is, equals         project.is:x        project == x         Exact match
isnt               project.isnt:x      project !== x        Exact non-match
has, contains      desc.has:Hello      desc ~ Hello         Pattern match
hasnt              desc.hasnt:Hello    desc !~ Hello        Pattern non-match
startswith, left   desc.left:Hel       desc ~ '^Hel'        Beginning match
endswith, right    desc.right:llo      desc ~ 'llo$'        End match
word               desc.word:Hello     desc ~ '\bHello\b'   Boundaried word match
noword             desc.noword:Hello   desc !~ '\bHello\b'  Boundaried word non-match
In IntelliJ IDEA you can set more options for each breakpoint by right-clicking on it; especially useful is “disable until breakpoint X is hit”, where X itself can be a disabled breakpoint.
.. is not there by default all the time; the hard-to-find answer is setting model.run_eagerly = True after model.compile().
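A minimal sketch of where that goes (the model here is a made-up toy, assuming TF 2.x Keras):

```python
import tensorflow as tf

# Toy model, only to show where run_eagerly is set.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# After compile(): run training eagerly, so .numpy() and ordinary Python
# debugging (print, breakpoints) work inside the train/test steps.
model.run_eagerly = True
```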
Of course, the following also works:
[x[1][1]['mycast'] for x in dataset.enumerate(5).__iter__()]
… adds what you tell it to add, even if you’ve used tf.one_hot() on the data before. Then you get weird zeros in the result of the one-hot encoding.
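A sketch of where such zeros can come from (my guess: padding or other out-of-range values in the data): tf.one_hot maps any index outside [0, depth) to an all-zeros row.

```python
import tensorflow as tf

# Indices outside [0, depth) get the off_value (0) everywhere, i.e. an all-zeros row.
print(tf.one_hot([0, 2, -1], depth=3).numpy())
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 0. 0.]]   <- the -1 became all zeros
```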
Ausstattung für die erste eigene Wohnung - Checkliste (“furnishings for your first own apartment - checklist”) is a nice checklist :)
When you do
annotation_pred = tf.to_float(tf.argmax(out, dimension=4, name='prediction'))
you get the index of the max value in your tensor. This index can’t be differentiated, so the gradient can’t flow through this operation. And since your loss is only defined by this value, and the gradient can’t flow through it, no gradient can be calculated for your network.
Argmax is okay if I don’t calculate my loss through it.
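A small sketch of that idea (shapes and names made up): the loss is computed from the logits, while argmax only produces the final prediction, where no gradient is needed.

```python
import tensorflow as tf

logits = tf.random.normal([2, 5])   # model output: (batch, num_classes)
labels = tf.constant([1, 3])        # integer class labels

# Differentiable path: the loss is defined on the logits themselves.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Non-differentiable path: argmax is used only for the predicted class / metrics.
predictions = tf.argmax(logits, axis=-1)
```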
The ellipsis (three dots) indicates “as many ‘:’ as needed”. This makes it easy to manipulate only one dimension of an array, letting numpy do array-wise operations over the “unwanted” dimensions. You can only have one ellipsis in any given indexing expression; otherwise the expression would be ambiguous about how many ‘:’ should be put in each place.
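A quick illustration (array made up):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# '...' expands to as many ':' as needed:
print(a[..., 0].shape)   # (2, 3) -- same as a[:, :, 0]
print(a[0, ...].shape)   # (3, 4) -- same as a[0, :, :]
```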
Outlook. What is the meaning of “AW” in an email header? – AW == RE in most other languages
Added the following to .ideavimrc:
map <leader>c :action EditorToggleCase<CR>
Using ‘categorical_crossentropy’ instead of ‘sparse_categorical_crossentropy’ gives weird, unintuitive errors.
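For future me, a sketch of the difference (shapes made up): sparse_categorical_crossentropy wants integer labels, categorical_crossentropy wants one-hot labels.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])          # (batch, num_classes)

int_labels = tf.constant([0])                    # shape (batch,)
onehot_labels = tf.one_hot(int_labels, depth=3)  # shape (batch, num_classes)

# Integer labels -> sparse_categorical_crossentropy
tf.keras.losses.sparse_categorical_crossentropy(int_labels, logits, from_logits=True)

# One-hot labels -> categorical_crossentropy
tf.keras.losses.categorical_crossentropy(onehot_labels, logits, from_logits=True)
```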
This is a really nice tutorial with the basics that’s not too basic: Sequence Tagging with Tensorflow
So I don’t forget: Metrics ignored when using model.add_loss() (like in VAE example) · Issue #9459 · keras-team/keras · GitHub is still happening.
IdeaVim supports the following :set commands: ideavim/set-commands.md at master · JetBrains/ideavim · GitHub. Especially relativenumber is nice.
Ctrl + ww for quickly changing between splits.
:source ~/.ideavimrc works.
apt-get purge and zsh: zsh does its own wildcard expansion, so apt-get purge nvidia* doesn’t work; apt-get purge nvidia\* does (or quoting it: 'nvidia*'). Same story as with scp; I’m surprised I keep having issues with this.
/var/log/apt/ contains the latest history.log and the rotated, gzipped older ones.
Google has nice animations for this!
I’ll be following this: 9.1. Attention Mechanism — Dive into Deep Learning 0.7 documentation
The assert statement (UsingAssertionsEffectively - Python Wiki):
assert condition, message
-> if condition is false, it raises an AssertionError with that message.
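A tiny example (function made up):

```python
def mean(values):
    # Raises AssertionError("need at least one value") if the list is empty.
    # (Note: asserts are skipped entirely when Python runs with -O.)
    assert len(values) > 0, "need at least one value"
    return sum(values) / len(values)

try:
    mean([])
except AssertionError as e:
    print(e)   # -> need at least one value
```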