Day 1736
Useful literature for Masterarbeit
Linguistics basics
- Random book I found: Essentials of Linguistics, 2nd edition – Simple Book Publishing
- Also: What are some books I can read if I want to get into studying linguistics casually? : linguistics
- Steven Pinker’s “The Language Instinct” - allegedly really cool but not for my purposes
Essentials of Linguistics, 2nd edition
The online version has cool tests at the end!
Generally: a lot of it is about languages/power, indigenous languages etc. Might be interesting for me wrt. UA/RU and colonialism
- Chapter 5 / Morphology gets interesting
- 5.7 Inflectional morphology!
- 6: Syntax - even more interesting
- 6.2 word order
- p.264 Key grammatical terminology
- word order
- really cool and quite technical up until the end, esp. trees
- Semantics
- Pragmatics
- todo - all of it
Python Self type
https://peps.python.org/pep-0673/
```python
from typing import Self

class Shape:
    def set_scale(self, scale: float) -> Self:
        self.scale = scale
        return self
```
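A quick usage sketch of why `Self` helps, building on the `Shape` class above (the `Circle`/`set_radius` names are illustrative): chained calls on a subclass keep the subclass type instead of widening to `Shape`.

```python
class Circle(Shape):
    def set_radius(self, radius: float) -> "Circle":
        self.radius = radius
        return self

# Because set_scale() returns Self, the checker sees Circle (not Shape) here,
# so the chained set_radius() call type-checks.
c = Circle().set_scale(0.5).set_radius(2.7)
```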
Related: 220726-1638 Python typing classmethods return type
I remember writing about the TypeVar approach but cannot find it…
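For reference, a minimal sketch of that pre-3.11 TypeVar workaround (the pattern PEP 673 replaces; names match the example above):

```python
from typing import TypeVar

# A TypeVar bound to the base class; annotating `self` with it makes the
# return annotation track the concrete subclass the method is called on.
TShape = TypeVar("TShape", bound="Shape")

class Shape:
    def set_scale(self: TShape, scale: float) -> TShape:
        self.scale = scale
        return self
```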
Meta about writing a Masterarbeit
Literature review
LessWrong
Literature Review For Academic Outsiders: What, How, and Why — LessWrong
‘Literature review’, the process, is a way to become familiar with what work has already been done in a particular field or subject by searching for and studying previous work.
Masterarbeit toread stack
Also: 231002-2311 Meta about writing a Masterarbeit
Relevant papers in Zotero will have a ‘toread’ tag.
When can we trust model evaluations? — LessWrong
How truthful is GPT-3? A benchmark for language models — LessWrong
- paper: [2109.07958] TruthfulQA: Measuring How Models Mimic Human Falsehoods
- especially the bits about constructing and validating!
- sylinrl/TruthfulQA: TruthfulQA: Measuring How Models Imitate Human Falsehoods
lists: AI Evaluations - LessWrong