In the middle of the desert you can say anything you want
I should read this sometime: Breakpoints - Help | IntelliJ IDEA
I should create a better ym that supports copying markdown links containing | characters, most probably using Add ability to yank inline by jgkamat · Pull Request #4651 · qutebrowser/qutebrowser · GitHub.
tf.boolean_mask  |  TensorFlow Core r2.0 is similar to what I do with tensor*mask, but it removes the rows where the condition is not fulfilled instead of zeroing them out.
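A minimal sketch of the difference (my own toy example, not from the docs):

import tensorflow as tf

x = tf.constant([[1, 2], [3, 4], [5, 6]])
mask = tf.constant([True, False, True])

# Multiplying by the mask keeps the shape and zeroes out the masked rows
zeroed = x * tf.cast(mask, x.dtype)[:, tf.newaxis]   # [[1, 2], [0, 0], [5, 6]]

# tf.boolean_mask drops the rows where the mask is False
kept = tf.boolean_mask(x, mask)                        # [[1, 2], [5, 6]]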
Keras custom metrics raises error when update_state returns an op. · Issue #30711 · tensorflow/tensorflow · GitHub - forget about returning ops in custom metrics; it's an internal Google/TPU compatibility thing and is not supposed to work. The error was:
TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function Function._defun_with_scope.<locals>.wrapped_fn at 0xb34ec5d08>, found return value of type <class 'tensorflow.python.framework.ops.Operation'>, which is not a Tensor.
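A minimal sketch of the workaround (a hypothetical CountPositives metric, not from the issue itself): call assign_add inside update_state but don't return its result.

import tensorflow as tf

class CountPositives(tf.keras.metrics.Metric):
    # Hypothetical toy metric, only to illustrate the pattern
    def __init__(self, name="count_positives", **kwargs):
        super().__init__(name=name, **kwargs)
        self.count = self.add_weight(name="count", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        positives = tf.reduce_sum(tf.cast(y_pred > 0.5, tf.float32))
        # assign_add returns an op; returning it from update_state triggers
        # the TypeError quoted above, so just call it and return nothing
        self.count.assign_add(positives)

    def result(self):
        return self.count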
tf.assign_add - TensorFlow Python - W3cubDocs - is this another place with readable TF documentation?
model.run_eagerly=True is not enough – when creating a custom Metric, as mentioned in metrics.py, tf.config.experimental_run_functions_eagerly(True) is also needed.
As an added bonus - if this is not enabled, IntelliJ IDEA debugging also doesn't work; the breakpoints simply get ignored.
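Roughly like this (a sketch assuming a toy model; the flag name is the TF 2.0-era one):

import tensorflow as tf

# Make tf.functions run eagerly so custom Metric code (and breakpoints inside it) is actually hit
tf.config.experimental_run_functions_eagerly(True)

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.run_eagerly = True  # on its own, this was not enough for the custom metric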
I really should resurrect my link DB.
Sandeep Aparajit: Tutorial: Conditional Random Field (CRF) is a nice 108-page presentation spanning basic probability theory and flowing to Bayes, marginals, CRF etc., very self-contained.
Generative VS Discriminative Models - Prathap Manohar Joshi - Medium
Overview — ELI5 0.9.0 documentation “.. is a Python package which helps to debug machine learning classifiers and explain their predictions.”
If I * a tensor by another tensor I get an element-wise multiplication. I keep forgetting this for some reason.
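For example (contrasting * with tf.matmul):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[10.0, 20.0], [30.0, 40.0]])

a * b            # element-wise: [[10., 40.], [90., 160.]]
tf.matmul(a, b)  # matrix product: [[70., 100.], [150., 220.]]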
I can even edit EagerTensors via right-click -> Edit value! Quite a weird UI, but still nice.
Edited my “someday” report:
 report.sd.filter=status:pending sprint:s sprint.isnt:srv
sprint:s seems to catch srv too, which I don't want; with sprint.isnt:srv it doesn't anymore. Also, Taskwarrior - FAQ has the list of such modifiers:
Attribute modifiers make filters more precise.  Supported modifiers are:
  Modifiers         Example            Equivalent           Meaning
  ----------------  -----------------  -------------------  -------------------------
                    due:today          due = today          Fuzzy match
  not               due.not:today      due != today         Fuzzy non-match
  before, below     due.before:today   due < tomorrow       Exact date comparison
  after, above      due.after:today    due > tomorrow       Exact date comparison
  none              project.none:      project == ''        Empty
  any               project.any:       project !== ''       Not empty
  is, equals        project.is:x       project == x         Exact match
  isnt              project.isnt:x     project !== x        Exact non-match
  has, contains     desc.has:Hello     desc ~ Hello         Pattern match
  hasnt,            desc.hasnt:Hello   desc !~ Hello        Pattern non-match
  startswith, left  desc.left:Hel      desc ~ '^Hel'        Beginning match
  endswith, right   desc.right:llo     desc ~ 'llo$'        End match
  word              desc.word:Hello    desc ~ '\bHello\b'   Boundaried word match
  noword            desc.noword:Hello  desc !~ '\bHello\b'  Boundaried word non-match
In IntelliJ IDEA you can set more options for each breakpoint after right-clicking on it; especially "disable until breakpoint X is hit", where X itself can be disabled.
.. is not there by default all the time; the hard-to-find answer for this is adding model.run_eagerly=True after model.compile().
Of course, the following also works:
[x[1][1]['mycast'] for x in dataset.enumerate(5).__iter__()]
… add what you tell it to add, even if you've used tf.one_hot() on the data before. Then you get weird zeros in the result of the one-hot encoding.
Ausstattung für die erste eigene Wohnung - Checkliste ("Equipment for your first own apartment - checklist") is a nice checklist :)
When you do
annotation_pred = tf.to_float(tf.argmax(out, dimension=4, name='prediction')), you get the index of the max value in your tensor. This index can't be differentiated, so the gradient can't flow through this operation. And since your loss is only defined by this value, and the gradient can't flow through it, no gradient can be calculated for your network.
Argmax is okay if I don’t calculate my loss through it.
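A quick sketch of this (my own toy example): a loss defined only through tf.argmax gives a None gradient.

import tensorflow as tf

x = tf.constant([[0.1, 0.9], [0.8, 0.2]])

with tf.GradientTape() as tape:
    tape.watch(x)
    idx = tf.cast(tf.argmax(x, axis=1), tf.float32)  # indices of the max values
    loss = tf.reduce_sum(idx)

print(tape.gradient(loss, x))  # None - no gradient flows through argmax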
The ellipsis (three dots) indicates "as many ':' as needed". This makes it easy to manipulate only one dimension of an array, letting numpy do array-wise operations over the "unwanted" dimensions. You can only really have one ellipsis in any given indexing expression, or else the expression would be ambiguous about how many ':' should be put in each.
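For instance (toy numpy array):

import numpy as np

a = np.arange(24).reshape(2, 3, 4)

a[..., 0]   # same as a[:, :, 0], shape (2, 3)
a[0, ...]   # same as a[0, :, :], shape (3, 4)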
Outlook. What is the meaning of "AW" in an email header? – AW is the German equivalent of RE (Outlook localizes the reply prefix).
Added the following to .ideavimrc:
map <leader>c :action EditorToggleCase<CR>
Using 'categorical_crossentropy' instead of 'sparse_categorical_crossentropy' gives weird, unintuitive errors - the former expects one-hot targets, the latter integer class labels.
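A small sketch of the difference (toy labels, my own example):

import numpy as np
import tensorflow as tf

labels = np.array([0, 2, 1])             # integer class labels
one_hot = tf.one_hot(labels, depth=3)    # one-hot targets
preds = tf.constant([[0.8, 0.1, 0.1],
                     [0.2, 0.2, 0.6],
                     [0.1, 0.7, 0.2]])

# sparse_categorical_crossentropy expects integer labels
tf.keras.losses.sparse_categorical_crossentropy(labels, preds)

# categorical_crossentropy expects one-hot targets
tf.keras.losses.categorical_crossentropy(one_hot, preds)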
This is a really nice tutorial with the basics that’s not too basic: Sequence Tagging with Tensorflow
So I don't forget: Metrics ignored when using model.add_loss() (like in VAE example) · Issue #9459 · keras-team/keras · GitHub still happens.