
18 Jan 2024

RU interference masterarbeit task embeddings mapping

Goal: find identical words with different embeddings in RU and UA, and use that to generate examples.

FastText

Link broken, but I think I found the download page for the vectors.

Their blog is also down, but the howto is linked from the archive: Aligning vector representations – Sam’s ML Blog

Download: fastText/docs/crawl-vectors.md at master · facebookresearch/fastText

axel https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.uk.300.bin.gz
axel https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.ru.300.bin.gz

It’s taking a while.

EDIT: Ah damn, it had to be the text ones, not bin. :( Starting again.

EDIT2: THIS is the place: fastText/docs/pretrained-vectors.md at master · facebookresearch/fastText

https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.uk.vec
https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ru.vec

UKR has 900k lines, RUS has 1.8M — damn, it’s not going to be easy.

What do I do next, assuming this works?

Other options

Next steps

Assuming I found out that RU-кит is far in the embedding space from UKR-кіт, what do I do next?

How do I test for false friends?

Maybe these papers about Surzhyk will come in handy now, especially Sira, Di Nunzio & Nosilia (2019), “Towards an automatic recognition of mixed languages: The Ukrainian-Russian hybrid language Surzhyk”, http://arxiv.org/abs/1912.08582.

Back to Python

Took forever & then got killed by Linux (OOM).

# FastVector is from the fastText_multilingual repo, not the fasttext pip package
from fasttext import FastVector

ru_dictionary = FastVector(vector_file='/home/sh/uuni/master/code/ru_interference/DATA/wiki.ru.vec')
uk_dictionary = FastVector(vector_file='/home/sh/uuni/master/code/ru_interference/DATA/wiki.uk.vec')

# align both monolingual spaces with the published alignment matrices
uk_dictionary.apply_transform('alignment_matrices/uk.txt')
ru_dictionary.apply_transform('alignment_matrices/ru.txt')

# was ua_dictionary, which is never defined; the variable is uk_dictionary
print(FastVector.cosine_similarity(uk_dictionary["кіт"], ru_dictionary["кот"]))

Gensim it is.

To load:

from gensim.models import KeyedVectors

# NB: not gensim.test.utils.datapath() - that resolves paths inside
# gensim's bundled test-data directory, not relative to our own files
ru_path = 'DATA/small/wiki.ru.vec'
uk_path = 'DATA/small/wiki.uk.vec'

model_ru = KeyedVectors.load_word2vec_format(ru_path)
model_uk = KeyedVectors.load_word2vec_format(uk_path)
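Also, given the OOM earlier: load_word2vec_format can read just the first N vectors via its limit parameter instead of hand-truncating the files (the .vec files are frequency-sorted, so this keeps the most frequent words); the 200k here is an example number:

# keeps only the 200k most frequent words
model_ru = KeyedVectors.load_word2vec_format('DATA/wiki.ru.vec', limit=200_000)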

Did model_ru.save(...) and then I can load it as >>> KeyedVectors.load("ru_interference/src/ru-model-save")

Which is faster; I shouldn’t have used the text format in the first place, but that’s on me.
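For the record, the save/reload dance (the save path is my assumption, mirroring the load path):

model_ru.save('ru_interference/src/ru-model-save')  # native gensim format
model_ru = KeyedVectors.load('ru_interference/src/ru-model-save')  # fast reload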

from gensim.models import TranslationMatrix

# word_pairs: seed translations as (ru, uk) tuples,
# e.g. [("кот", "кіт"), ("собака", "собака"), ...]
tm = TranslationMatrix(model_ru, model_uk, word_pairs)

(Pdb++) r = tm.translate(ukrainian_words, topn=3)
(Pdb++) pp(r)
OrderedDict([('сонце', ['завишня', 'скорбна', 'вишня']),
             ('квітка', ['вишня', 'груша', 'вишнях']),
             ('місяць', ['любить…»', 'гадаю…»', 'помилуй']),
             ('дерево', ['яблуко', '„яблуко', 'яблуку']),
             ('вода', ['вода', 'риба', 'каламутна']),
             ('птах', ['короваю', 'коровай', 'корова']),
             ('книга', ['читати', 'читати»', 'їсти']),
             ('синій', ['вишнях', 'зморшках', 'плакуча'])])

OK, so definitely more word pairs would be needed for training the translation matrix.

Either way I don’t need the translations themselves, I need the mapped space, roughly described here: mapping - How do i get a vector from gensim’s translation_matrix - Stack Overflow

Next time:

Vector blues

  • Jpsaris/transvec: Translate word embeddings across models fixes the things I wanted to fix myself in the original implementation with the new gensim version. Note to self: forking things is allowed, and is better than editing files locally.
  • The wiki vectors are kinda garbage, with most_similar returning not semantically similar words but ones that look like candidate next words, plus a lot of random punctuation inside the tokens. Maybe I’m doing something wrong?
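(The kind of sanity check I mean, here on the UK model from above:)

model_uk.most_similar('кіт', topn=5)
# expecting semantically similar words; instead it returns things that
# read like plausible next tokens, plus forms with punctuation glued on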

Anyway, my only reason for using them was fastText multilingual alignment; I can try other vectors now.

*** RuntimeError: scikit-learn estimators should always specify their parameters in the signature of their __init__ (no varargs). <class 'transvec.transformers.TranslationWordVectorizer'> with constructor (self, target: 'gensim.models.keyedvectors.KeyedVectors', *sources: 'gensim.models.keyedvectors.KeyedVectors', alpha: float = 1.0, max_iter: Optional[int] = None, tol: float = 0.001, solver: str = 'auto', missing: str = 'raise', random_state: Union[int, numpy.random.mtrand.RandomState, NoneType] = None) doesn't  follow this convention.

Ah damn. It wasn’t an issue with the older one, though the only thing that changed is https://github.com/big-o/transvec/compare/master...Jpsaris:transvec:master

So

Decided to leave this till better times, but to play with it for one more hour today.

Coming back to mapping - How do i get a vector from gensim’s translation_matrix - Stack Overflow, I need mapped_source_space.

I should have used PyCharm at a much earlier stage in the process.

  • mapped_source_space contains a matrix with the 4 vectors mapped to the target space.
  • A Space is a matrix with vectors, and the dicts that tell you which word is where.
  • For my purposes, I can ’translate’ the interesting (to me) words and then compare their vectors to the vectors of the corresponding words in the target space.

Why does source_space have 1.8k words, while the source embedding space has 200k?

Ah, tm.translate() can translate words not found in the source space. Interesting!

AHA - the source/target space gets built only from the words provided for training, 1.8k in my case. Then it builds the translation matrix based on that.

BUT in translate() the target matrix gets built from the entire target embedding space!

Which means:

  • for RU/source words, I can just use the word’s vector from the original RU embedding space, not tm’s source_space.
  • for UKR words, I build the target space the same way (see the sketch below).
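A minimal sketch of that comparison, assuming gensim keeps the learned mapping in tm.translation_matrix with source_vec @ W landing in the target space (which is how translate() applies it). The raw dot product is what the first batch of numbers below shows:

import numpy as np

def mapped_dot(word_ru, word_uk, tm, model_ru, model_uk):
    # map the RU vector into the UK space through the learned matrix
    v_ru_mapped = model_ru[word_ru] @ tm.translation_matrix
    v_uk = model_uk[word_uk]
    return np.dot(v_ru_mapped, v_uk)

print(mapped_dot("дом", "дім", tm, model_ru, model_uk))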

Results!

real
картошка/картопля -> 0.28
дом/дім -> 1.16
чай/чай -> 1.17
паспорт/паспорт -> 0.40
зерно/зерно -> 0.46
нос/ніс -> 0.94

false
неделя/неділя -> 0.34
город/город -> 0.35
он/он -> 0.77
речь/річ -> 0.89
родина/родина -> 0.32
сыр/сир -> 0.99
папа/папа -> 0.63
мать/мати -> 0.52

Let’s normalize:
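(i.e. divide the dot product by both norms, turning it into plain cosine similarity; continuing the sketch above:)

def mapped_cos(word_ru, word_uk, tm, model_ru, model_uk):
    v = model_ru[word_ru] @ tm.translation_matrix
    w = model_uk[word_uk]
    return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))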

real
картошка/картопля -> 0.64
дом/дім -> 0.64
чай/чай -> 0.70
паспорт/паспорт -> 0.72
зерно/зерно -> 0.60

false
неделя/неділя -> 0.55
город/город -> 0.44
он/он -> 0.33
речь/річ -> 0.54
родина/родина -> 0.50
сыр/сир -> 0.66
папа/папа -> 0.51
мать/мати -> 0.56

OK, so it mostly works! With good enough thresholds it could be usable. Words that are totally different aren’t similar (он); words that share some meanings (мать/мати) are closer.

Ways to improve this:

  • Remove partly matching words from the list of reference translations used to build this
  • Find some lists of all words in both languages
  • Test the hell out of them, find the most and least similar ones (a first pass at this below)
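A sketch of that test, reusing mapped_cos from above and assuming the gensim ≥4 key_to_index API; identically-spelled words with the lowest scores are the false-friend candidates:

shared = [w for w in model_ru.key_to_index if w in model_uk.key_to_index]
scores = {w: mapped_cos(w, w, tm, model_ru, model_uk) for w in shared}
for w, s in sorted(scores.items(), key=lambda kv: kv[1])[:30]:
    print(f'{w}: {s:.2f}')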

Onwards

Sorted by similarity (lower values = more false-friend-y). Nope, mostly it doesn’t make sense. But rare words seem to be the most ‘different’ ones:

{'поза': 0.3139531, 'iphone': 0.36648884, 'галактика': 0.39758587, 'Роман': 0.40571105, 'дюйм': 0.43442175, 'араб': 0.47358453, 'друг': 0.4818558, 'альфа': 0.48779228, 'гора': 0.5069237, 'папа': 0.50889325, 'проспект': 0.5117553, 'бейсбол': 0.51532406, 'губа': 0.51682216, 'ранчо': 0.52178365, 'голова': 0.527564, 'сука': 0.5336818, 'назад': 0.53545296, 'кулак': 0.5378426, 'стейк': 0.54102343, 'шериф': 0.5427336, 'палка': 0.5516712, 'ставка': 0.5519752, 'соло': 0.5522958, 'акула': 0.5531602, 'поле': 0.55333376, 'астроном': 0.5556448, 'шина': 0.55686104, 'агентство': 0.561674, 'сосна': 0.56177, 'бургер': 0.56337166, 'франшиза': 0.5638794, 'фунт': 0.56592, 'молекула': 0.5712515, 'браузер': 0.57368404, 'полковник': 0.5739758, 'горе': 0.5740198, 'шапка': 0.57745415, 'кампус': 0.5792211, 'дрейф': 0.5800869, 'онлайн': 0.58176875, 'замок': 0.582287, 'файл': 0.58236635, 'трон': 0.5824338, 'ураган': 0.5841942, 'диван': 0.584252, 'фургон': 0.58459675, 'трейлер': 0.5846335, 'приходить': 0.58562565, 'сотня': 0.585832, 'депозит': 0.58704704, 'демон': 0.58801174, 'будка': 0.5882363, 'царство': 0.5885376, 'миля': 0.58867997, 'головоломка': 0.5903712, 'цент': 0.59163713, 'казино': 0.59246653, 'баскетбол': 0.59255254, 'марихуана': 0.59257627, 'пастор': 0.5928912, 'предок': 0.5933549, 'район': 0.5940658, 'статистика': 0.59584284, 'стартер': 0.5987516, 'сайт': 0.5988183, 'демократ': 0.5999011, 'оплата': 0.60060596, 'тендер': 0.6014088, 'орел': 0.60169894, 'гормон': 0.6021177, 'метр': 0.6023728, 'меню': 0.60291564, 'гавань': 0.6029945, 'рукав': 0.60406476, 'статуя': 0.6047057, 'скульптура': 0.60497975, 'вагон': 0.60551536, 'доза': 0.60576916, 'синдром': 0.6064756, 'тигр': 0.60673815, 'сержант': 0.6070389, 'опера': 0.60711193, 'таблетка': 0.60712767, 'фокус': 0.6080196, 'петля': 0.60817575, 'драма': 0.60842395, 'шнур': 0.6091568, 'член': 0.6092182, 'сервер': 0.6094157, 'вилка': 0.6102615, 'мода': 0.6106603, 'лейтенант': 0.6111004, 'радар': 0.6117528, 'галерея': 0.61191505, 'ворота': 0.6125873, 'чашка': 0.6132187, 'крем': 0.6133907, 'бюро': 0.61342597, 'черепаха': 0.6146957, 'секс': 0.6151523, 'носок': 0.6156026, 'подушка': 0.6160687, 'бочка': 0.61691606, 'гольф': 0.6172053, 'факультет': 0.6178817, 'резюме': 0.61848575, 'нерв': 0.6186257, 'король': 0.61903644, 'трубка': 0.6194198, 'ангел': 0.6196466, 'маска': 0.61996806, 'ферма': 0.62029755, 'резидент': 0.6205579, 'футбол': 0.6209573, 'квест': 0.62117445, 'рулон': 0.62152386, 'сарай': 0.62211347, 'слава': 0.6222329, 'блог': 0.6223742, 'ванна': 0.6224452, 'пророк': 0.6224489, 'дерево': 0.62274456, 'горло': 0.62325376, 'порт': 0.6240524, 'лосось': 0.6243047, 'альтернатива': 0.62446254, 'кровоточить': 0.62455964, 'сенатор': 0.6246379, 'спортзал': 0.6246594, 'протокол': 0.6247676, 'ракета': 0.6254694, 'салат': 0.62662274, 'супер': 0.6277698, 'патент': 0.6280118, 'авто': 0.62803495, 'монета': 0.628338, 'консенсус': 0.62834597, 'резерв': 0.62838227, 'кабель': 0.6293858, 'могила': 0.62939847, 'небо': 0.62995523, 'поправка': 0.63010347, 'кислота': 0.6313528, 'озеро': 0.6314377, 'телескоп': 0.6323617, 'чудо': 0.6325846, 'пластик': 0.6329929, 'процент': 0.63322043, 'маркер': 0.63358307, 'датчик': 0.6337889, 'кластер': 0.633797, 'детектив': 0.6341895, 'валюта': 0.63469064, 'банан': 0.6358283, 'фабрика': 0.6360865, 'сумка': 0.63627976, 'газета': 0.6364525, 'математика': 0.63761103, 'плюс': 0.63765526, 'урожай': 0.6377103, 'контраст': 0.6385834, 'аборт': 0.63913494, 'парад': 0.63918126, 'формула': 0.63957334, 'арена': 0.6396606, 'парк': 0.6401386, 'посадка': 0.6401986, 
'марш': 0.6403458, 'концерт': 0.64061844, 'перспектива': 0.6413666, 'статут': 0.6419941, 'транзит': 0.64289963, 'параметр': 0.6430252, 'рука': 0.64307654, 'голод': 0.64329326, 'медаль': 0.643804, 'фестиваль': 0.6438755, 'небеса': 0.64397913, 'барабан': 0.64438117, 'картина': 0.6444177, 'вентилятор': 0.6454438, 'ресторан': 0.64582723, 'лист': 0.64694726, 'частота': 0.64801234, 'ручка': 0.6481528, 'ноутбук': 0.64842474, 'пара': 0.6486577, 'коробка': 0.64910173, 'сенат': 0.64915174, 'номер': 0.64946175, 'ремесло': 0.6498537, 'слон': 0.6499266, 'губернатор': 0.64999187, 'раковина': 0.6502305, 'трава': 0.6505385, 'мандат': 0.6511373, 'великий': 0.6511585, 'ящик': 0.65194154, 'череп': 0.6522753, 'ковбой': 0.65260696, 'корова': 0.65319675, 'честь': 0.65348136, 'легенда': 0.6538656, 'душа': 0.65390354, 'автобус': 0.6544202, 'метафора': 0.65446657, 'магазин': 0.65467703, 'удача': 0.65482104, 'волонтер': 0.65544796, 'сексуально': 0.6555309, 'ордер': 0.6557747, 'точка': 0.65612084, 'через': 0.6563236, 'глина': 0.65652716, 'значок': 0.65661323, 'плакат': 0.6568083, 'слух': 0.65709555, 'нога': 0.6572164, 'фотограф': 0.65756184, 'ненависть': 0.6578564, 'пункт': 0.65826315, 'берег': 0.65849876, 'альбом': 0.65849936, 'кролик': 0.6587049, 'масло': 0.6589803, 'бензин': 0.6590406, 'покупка': 0.65911734, 'параграф': 0.6596477, 'вакцина': 0.6603271, 'континент': 0.6609991, 'расизм': 0.6614046, 'правило': 0.661452, 'симптом': 0.661881, 'романтика': 0.6626457, 'атрибут': 0.66298646, 'олень': 0.66298693, 'кафе': 0.6635062, 'слово': 0.6636568, 'машина': 0.66397023, 'джаз': 0.663977, 'пиво': 0.6649644, 'слуга': 0.665489, 'температура': 0.66552, 'море': 0.666358, 'чувак': 0.6663854, 'комфорт': 0.66651237, 'театр': 0.66665906, 'ключ': 0.6670032, 'храм': 0.6673037, 'золото': 0.6678767, 'робот': 0.66861665, 'джентльмен': 0.66861814, 'рейтинг': 0.6686267, 'талант': 0.66881114, 'флот': 0.6701237, 'бонус': 0.67013747, 'величина': 0.67042017, 'конкурент': 0.6704642, 'конкурс': 0.6709986, 'доступ': 0.6712131, 'жанр': 0.67121863, 'пакет': 0.67209935, 'твердо': 0.6724718, 'клуб': 0.6724739, 'координатор': 0.6727365, 'глобус': 0.67277336, 'карта': 0.6731522, 'зима': 0.67379165, 'вино': 0.6737963, 'туалет': 0.6744124, 'середина': 0.6748006, 'тротуар': 0.67507124, 'законопроект': 0.6753582, 'земля': 0.6756074, 'контейнер': 0.6759613, 'посольство': 0.67680794, 'солдат': 0.6771952, 'канал': 0.677311, 'норма': 0.67757475, 'штраф': 0.67796284, 'маркетинг': 0.67837185, 'приз': 0.6790007, 'дилер': 0.6801595, 'молитва': 0.6806114, 'зона': 0.6806243, 'пояс': 0.6807122, 'автор': 0.68088144, 'рабство': 0.6815858, 'коридор': 0.68208706, 'пропаганда': 0.6826943, 'журнал': 0.6828874, 'портрет': 0.68304217, 'фермер': 0.6831401, 'порошок': 0.6831531, 'сюрприз': 0.68327177, 'камера': 0.6840434, 'фаза': 0.6842661, 'природа': 0.6843757, 'лимон': 0.68452585, 'гараж': 0.68465877, 'рецепт': 0.6848821, 'свинина': 0.6863143, 'атмосфера': 0.6865022, 'режим': 0.6870908, 'характеристика': 0.6878463, 'спонсор': 0.6879278, 'товар': 0.6880773, 'контакт': 0.6888988, 'актриса': 0.6891222, 'диск': 0.68916976, 'шоколад': 0.6892894, 'банда': 0.68934155, 'панель': 0.68947715, 'запуск': 0.6899455, 'травма': 0.690045, 'телефон': 0.69024855, 'список': 0.69054323, 'кредит': 0.69054526, 'актив': 0.69087565, 'партнерство': 0.6909646, 'спорт': 0.6914842, 'маршрут': 0.6915196, 'репортер': 0.6920864, 'сегмент': 0.6920909, 'бунт': 0.69279015, 'риторика': 0.69331145, 'школа': 0.6933826, 'оператор': 0.69384277, 'ветеран': 0.6941337, 'членство': 0.69435036, 'схема': 
0.69441277, 'манера': 0.69451445, 'командир': 0.69467854, 'формат': 0.69501007, 'сцена': 0.69557995, 'секрет': 0.6961215, 'курс': 0.6964162, 'компонент': 0.69664925, 'патруль': 0.69678336, 'конверт': 0.6968681, 'символ': 0.6973544, 'насос': 0.6974678, 'океан': 0.69814134, 'критик': 0.6988366, 'доброта': 0.6989736, 'абсолютно': 0.6992678, 'акцент': 0.6998319, 'ремонт': 0.70108724, 'мама': 0.7022723, 'тихо': 0.70254886, 'правда': 0.7040037, 'транспорт': 0.704239, 'книга': 0.7051158, 'вода': 0.7064695, 'кухня': 0.7070433, 'костюм': 0.7073295, 'дикий': 0.70741034, 'прокурор': 0.70768344, 'консультант': 0.707697, 'квартира': 0.7078515, 'шанс': 0.70874536, 'сила': 0.70880103, 'хаос': 0.7089504, 'дебют': 0.7092187, 'завтра': 0.7092679, 'горизонт': 0.7093906, 'модель': 0.7097884, 'запах': 0.710207, 'сама': 0.71082854, 'весна': 0.7109366, 'орган': 0.7114152, 'далекий': 0.7118393, 'смерть': 0.71213734, 'медсестра': 0.71224624, 'молоко': 0.7123647, 'союз': 0.71299064, 'звук': 0.71361446, 'метод': 0.7138604, 'корпус': 0.7141677, 'приятель': 0.71538115, 'центр': 0.716277, 'максимум': 0.7162813, 'страх': 0.7166886, 'велосипед': 0.7168154, 'контроль': 0.7171681, 'ритуал': 0.71721196, 'команда': 0.7175366, 'молоток': 0.71759546, 'цикл': 0.71968937, 'жертва': 0.7198437, 'статус': 0.7203152, 'пульс': 0.7206338, 'тренер': 0.72116625, 'сектор': 0.7221448, 'музей': 0.72323525, 'сфера': 0.7245963, 'пейзаж': 0.7246053, 'вниз': 0.72528857, 'редактор': 0.7254647, 'тема': 0.7256167, 'агент': 0.7256874, 'дизайнер': 0.72618955, 'деталь': 0.72680634, 'банк': 0.7270782, 'союзник': 0.72750694, 'жест': 0.7279984, 'наставник': 0.7282404, 'тактика': 0.72968495, 'спектр': 0.7299538, 'проект': 0.7302779, 'художник': 0.7304505, 'далеко': 0.7306006, 'ресурс': 0.73075294, 'половина': 0.7318293, 'явно': 0.7323554, 'день': 0.7337892, 'юрист': 0.73461473, 'широко': 0.73490566, 'закон': 0.7372453, 'психолог': 0.7373602, 'сигарета': 0.73835427, 'проблема': 0.7388488, 'аргумент': 0.7389784, 'старший': 0.7395191, 'продукт': 0.7395814, 'ритм': 0.7406945, 'широкий': 0.7409786, 'голос': 0.7423325, 'урок': 0.74272805, 'масштаб': 0.74474066, 'критика': 0.74535364, 'правильно': 0.74695253, 'авторитет': 0.74697924, 'активно': 0.74720675, 'причина': 0.7479735, 'сестра': 0.74925977, 'сигнал': 0.749686, 'алкоголь': 0.7517742, 'регулярно': 0.7521055, 'мотив': 0.7527843, 'бюджет': 0.7531772, 'плоский': 0.754082, 'посол': 0.75505507, 'скандал': 0.75518423, 'дизайн': 0.75567746, 'персонал': 0.7561288, 'адвокат': 0.7561835, 'принцип': 0.75786924, 'фонд': 0.7583069, 'структура': 0.75888604, 'дискурс': 0.7596848, 'вперед': 0.76067656, 'контур': 0.7607424, 'спортсмен': 0.7616756, 'стимул': 0.7622434, 'партнер': 0.76245433, 'стиль': 0.76301545, 'сильно': 0.7661394, 'текст': 0.7662303, 'фактор': 0.76729685, 'герой': 0.7697237, 'предмет': 0.775718, 'часто': 0.7780384, 'план': 0.77855974, 'рано': 0.78059715, 'факт': 0.782439, 'конкретно': 0.78783923, 'сорок': 0.79080343, 'аспект': 0.79219675, 'контекст': 0.7926827, 'роль': 0.796745, 'президент': 0.8007479, 'результат': 0.80227, 'десять': 0.8071967, 'скоро': 0.80976427, 'тонкий': 0.8100516, 'момент': 0.8120169, 'нести': 0.81280494, 'документ': 0.8216758, 'просто': 0.8222313, 'очевидно': 0.8242744, 'точно': 0.83183587, 'один': 0.83644223, 'пройти': 0.84026355}

Ways to improve:

  • remove potentially bad words from the training set

  • expand looking for candidate words by applying predictable changes à la Sira, Di Nunzio & Nosilia (2019), “Towards an automatic recognition of mixed languages: The Ukrainian-Russian hybrid language Surzhyk”, http://arxiv.org/abs/1912.08582 (see the sketch after this list)

  • add weighting based on frequency, rarer words will have less stable embeddings

  • look at other trained vectors, ideally something more processed

  • And actually, thinking about it: is there anything I can solve through this that I can’t solve by parsing one or more dictionaries, maybe even making embeddings of the definitions of the various words?

    • That said, most other research on automatically finding cognates had this issue as well
    • And no one did it this way, and no one has ever done this for RU/UA
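A sketch of the predictable-changes idea from the second bullet above; the substitution table is mine and purely illustrative, not taken from the paper:

import itertools

# a few rough RU→UA letter correspondences (illustrative only)
SUBS = {'ы': ['и'], 'э': ['е'], 'е': ['е', 'є', 'і'], 'и': ['и', 'і']}

def candidate_ua_forms(ru_word):
    # expand every character into its possible UA counterparts
    options = [SUBS.get(ch, [ch]) for ch in ru_word]
    return {''.join(combo) for combo in itertools.product(*options)}

# keep only candidates that actually exist in the UA vocabulary
print([w for w in candidate_ua_forms('неделя') if w in model_uk.key_to_index])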

Conclusion: leaving this alone till after the master’s thesis, as a side project. It’s incredibly interesting but probably not directly practical. Sad.
