In the middle of the desert you can say anything you want
So, this: https://chat.openai.com/share/764434d6-ceba-4b9d-8cfc-7899f73f9cd3
You can enter a dummy markdown file or whatever and ask it to generate some other similar files! Like lorem ipsum but way way cooler and more open to nuance
TIL wowchemy exists, and wowchemy/starter-hugo-research-group (👥 Easily create a stunning Research Group, Team, or Business Website with no-code) is one template there that I’ll use to start learning about it.
This will be much messier than the average post in the Diensttagebuch
Their documentation seems to be undergoing an overhaul: half of the links don’t work and half don’t open in qutebrowser; will do what I can.
The main idea seems to be that blocks can live in different .md files in a folder and are shown on the page based on their “weight” argument.
Wowchemy has different block types; one is Page Collection | Wowchemy, for a collection of pages, à la the page list in vanilla Hugo.
Actually, there’s 🧱 Build your pages with blocks: no-code required! | Wowchemy Docs, which is closer to that.
https://university.wowchemy.com/reference/page-features/
type: widget_page
means it’ll parse the pages AND DIRECTORIES inside the dir it’s located in as widgets; an example of this is the home page.
I see no way to include two different lists of pages inside the directory without having all these pages also appear as widgets; in other words, how to “include” pages from that subfolder in some of the widgets without the widget page itself rendering them.
But - now I see why the home page is inside ./content/home
…
— Синку, як ти мене знайшов? — запитав батько. — Коли вже так, віднеси обід до джерела, я туди прийду і поїмо
QUESTION: — Ні, батьку, — сказав ______ .
OPTIONS: {'хлопець', 'хлопчик', 'син', 'цар'}
Будь ______ , пообідайте з нами!', options={'ласка', 'ножа', 'жаль', 'візир', 'дозволь'}, answer='ласка')
│ context = 'Ein Mann und eine Frau hatten einen goldenen Ring. Das war ein │
│ Glücksring, und wer ihn besaß, hatte immer genug zu leben. Sie │
│ wußten es aber nicht und verkauften den Ring für wenig Geld. Kaum │
│ war der Ring aus dem Hause, da wurden sie immer ärmer und wußten │
│ schließlich nicht mehr, woher sie genug zum Essen nehmen sollten. │
│ Sie hatten auch einen Hund und eine Katze, die mußten mit ihnen │
│ Hunger leiden. Da ratschlagten die Tiere miteinander, wie sie den │
│ Leuten wieder zu ihrem alten Glück verhelfen könnten.' │
I'll be using "Label all tasks"; then it shows me the next CBT example after I submit.
Keybindings are nice for classifying text.
When importing the data, I should try to add text highlighting or similar to make it easier to parse visually.
Sometimes pymorphy gives multiple parses for the same word:
[
Parse(
word='корів',
tag=OpencorporaTag('NOUN,inan plur,gent'),
normal_form='кір',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'корів', 498, 11),)
),
Parse(
word='корів',
tag=OpencorporaTag('NOUN,anim plur,gent'),
normal_form='корова',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'корів', 2063, 8),)
),
Parse(
word='корів',
tag=OpencorporaTag('NOUN,anim plur,accs'),
normal_form='корова',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'корів', 2063, 10),)
)
]
I can find the right one (see the sketch below):
- by the morphology gotten from spacy?
- by the base form gotten from spacy? (token.lemma_)
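A minimal sketch of that idea (the matching rule and the lg model here are assumptions, not the final package code):

```python
import pymorphy2  # the notes later switch to pymorphy3; the API is the same
import spacy

nlp = spacy.load("uk_core_news_lg")
morph = pymorphy2.MorphAnalyzer(lang="uk")  # needs pymorphy2-dicts-uk installed

def disambiguate(token):
    """Pick the pymorphy Parse whose lemma agrees with spacy's token.lemma_."""
    parses = morph.parse(token.text)
    for p in parses:
        if p.normal_form == token.lemma_.lower():
            return p
    return parses[0]  # fall back to pymorphy's own first guess

doc = nlp("жило сто корів і тридцять кіз")
# assuming spacy lemmatizes 'корів' to 'корова', this picks the right Parse
print(disambiguate(doc[2]))
```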
There are no scores for the UA language in pymorphy, but they have an article on how to do this in general: Руководство пользователя — Морфологический анализатор pymorphy2 (the pymorphy2 user guide).
If I have to get an intermediate representation:
lang-uk/ukrainian-heteronyms-dictionary (Dictionary of heteronyms in the Ukrainian language): a dictionary of stress-dependent cases like птАшки/пташкИ
ChatGPT suggested using spacy for this, which led to googling for other options:
- Ukrainian UD
- Jena, Oct 2003, orga for UA NLP: About / UkrNLP-Corpora (TODO ADD)
pymorphy2’s code is quite interesting and I have much to learn from it! Decided, then.
Writing the package
Adding the newly created disambiguator to CBT-UA
make_agree_with_number
pymorphy2.analyzer — Морфологический анализатор pymorphy2 (the pymorphy2 analyzer docs) implements it as self.inflect(self.tag.numeral_agreement_grammemes(num))
(Pdb++) t.tag.numeral_agreement_grammemes(1)
{'sing', 'nomn'}
(Pdb++) t.tag.numeral_agreement_grammemes(2)
{'sing', 'gent'}
(Pdb++) t.tag.numeral_agreement_grammemes(3)
{'sing', 'gent'}
(Pdb++) t.tag.numeral_agreement_grammemes(4)
{'sing', 'gent'}
(Pdb++) t.tag.numeral_agreement_grammemes(5)
{'plur', 'gent'}
(Pdb++) t.tag.numeral_agreement_grammemes(6)
{'plur', 'gent'}
Узгодження власнекількісних простих числівників з іменниками — урок. Українська мова, 6 клас НУШ (agreement of simple cardinal numerals with nouns; a grade-6 Ukrainian grammar lesson).
I have a suspicion that for Ukrainian it doesn’t tag singular forms as singular, and therefore can’t make them agree with singular numerals. Inflecting to plural, and parsing words that are plural to begin with, works fine!
(Pdb++) self.morph.parse("стіл")[1].inflect({'plur'}).tag.number
'plur'
(Pdb++) self.morph.parse("стіл")[1].tag.number
(Pdb++)
Yes: the singular forms in the lexeme carry no number grammeme at all:
(Pdb++) pp self.morph.parse("столи")[1].lexeme
[Parse(word='стіл', tag=OpencorporaTag('NOUN,inan masc,nomn'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'стіл', 2710, 0),)),
Parse(word='стола', tag=OpencorporaTag('NOUN,inan masc,gent'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'стола', 2710, 1),)),
Parse(word='столу', tag=OpencorporaTag('NOUN,inan masc,gent'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столу', 2710, 2),)),
Parse(word='столові', tag=OpencorporaTag('NOUN,inan masc,datv'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столові', 2710, 3),)),
Parse(word='столу', tag=OpencorporaTag('NOUN,inan masc,datv'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столу', 2710, 4),)),
Parse(word='стіл', tag=OpencorporaTag('NOUN,inan masc,accs'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'стіл', 2710, 5),)),
Parse(word='стола', tag=OpencorporaTag('NOUN,inan masc,accs'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'стола', 2710, 6),)),
Parse(word='столом', tag=OpencorporaTag('NOUN,inan masc,ablt'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столом', 2710, 7),)),
Parse(word='столі', tag=OpencorporaTag('NOUN,inan masc,loct'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столі', 2710, 8),)),
Parse(word='столові', tag=OpencorporaTag('NOUN,inan masc,loct'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столові', 2710, 9),)),
Parse(word='столу', tag=OpencorporaTag('NOUN,inan masc,loct'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столу', 2710, 10),)),
Parse(word='столе', tag=OpencorporaTag('NOUN,inan masc,voct'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столе', 2710, 11),)),
Parse(word='столи', tag=OpencorporaTag('NOUN,inan plur,nomn'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столи', 2710, 12),)),
Parse(word='столів', tag=OpencorporaTag('NOUN,inan plur,gent'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столів', 2710, 13),)),
Parse(word='столам', tag=OpencorporaTag('NOUN,inan plur,datv'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столам', 2710, 14),)),
Parse(word='столи', tag=OpencorporaTag('NOUN,inan plur,accs'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столи', 2710, 15),)),
Parse(word='столами', tag=OpencorporaTag('NOUN,inan plur,ablt'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столами', 2710, 16),)),
Parse(word='столах', tag=OpencorporaTag('NOUN,inan plur,loct'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столах', 2710, 17),)),
Parse(word='столи', tag=OpencorporaTag('NOUN,inan plur,voct'), normal_form='стіл', score=1.0, methods_stack=((DictionaryAnalyzer(), 'столи', 2710, 18),))]
Maybe it’s an artifact of the dictionary conversion? In either case, I think I see the problem.
..may help.
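For instance, something along these lines may help (a hedged sketch; treating a missing number grammeme as an implicit singular is my guess at a workaround, not necessarily what ended up in the package):

```python
def agree_with_number(parse, n):
    """Like make_agree_with_number, but tolerant of UA dictionary tags
    that omit 'sing' on singular forms."""
    grammemes = set(parse.tag.numeral_agreement_grammemes(n))
    if "sing" in grammemes and parse.tag.number is None:
        grammemes.discard("sing")  # the form is already (implicitly) singular
    return parse.inflect(grammemes) or parse
```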
English: https://chat.openai.com/share/30ee2770-1195-4013-9d81-974278ac2a67
https://chat.openai.com/share/07250c3a-fcfc-4706-af82-74bbc497ee6f
Stories as graph networks
https://chat.openai.com/share/29f47a7d-8267-4b3b-bc55-3e1fe8e8baea+
The LMentry generated dataset is nice as an example: lmentry/data/all_words_from_category.json at main · aviaefrat/lmentry. Not all of it needs code and regexes! (lmentry/data/bigger_number.json at main · aviaefrat/lmentry)
I can ask gpt4 if it gets it; if it does, assume the individual instance is good enough.
3.5-turbo can’t do this!
but gpt4 can!
spacy’s largest model is not perfect either:
(Pdb++) doc[10:17]
жило сто корів і тридцять кіз,
(Pdb++) doc[15]
кіз
(Pdb++) doc[15].norm_
'кіз'
(Pdb++) doc[15].morph
Animacy=Inan|Case=Gen|Gender=Fem|Number=Plur
Вік живи вік учись (“live and learn”)… https://goroh.pp.ua/%D0%A2%D0%BB%D1%83%D0%BC%D0%B0%D1%87%D0%B5%D0%BD%D0%BD%D1%8F/%D0%BA%D1%96%D0%B7
Oh wait, there’s also a transformer model: https://spacy.io/models/uk#uk_core_news_trf (322 MB), based on ukr-models/xlm-roberta-base-uk · Hugging Face.
python3 -m spacy download uk_core_news_trf
It doesn’t get it either.
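Checking it is the same couple of lines as with the lg model (a sketch; the sentence is the one from the pdb session above):

```python
import spacy

# assumes `python3 -m spacy download uk_core_news_trf` was run (needs spacy-transformers)
nlp = spacy.load("uk_core_news_trf")
doc = nlp("жило сто корів і тридцять кіз")
for tok in doc:
    print(tok.text, tok.pos_, tok.morph)
# per the note above, 'кіз' still doesn't come out right with this model either
```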
// aside - this feels so satisfying to look at!
GPT4 is really good at writing stories based on graph-like descriptions: stories that don’t reuse chunks found on Google and that aren’t continued the same way by ChatGPT itself when I give it the first couple of sentences.
Even more complex ones with multiple characters: https://chat.openai.com/share/d42debd7-d80a-4030-ac7c-a18ecb1416a9
This is based on graph descriptions of stories generated by ChatGPT itself with the already-mentioned prompt of ‘do Aesop but in graph form and not about grapes’.
231214-1503 Asking ChatGPT to make its own prompts is a superpower, now definitely:
“Generate an abstract graph structure for a narrative involving multiple animate characters. The graph should include nodes for entities, goals, challenges, interactions, outcomes, and moral lessons. Each node should abstractly represent the core elements of the story, focusing on thematic and moral aspects rather than the literal narrative. The format should be similar to a semantic web ontology, emphasizing relationships and abstract concepts. Please provide the graph in a Python dictionary format, with complexity and depth akin to an advanced semantic network.”
These will help me:
Aarne–Thompson–Uther Index - Wikipedia
https://chat.openai.com/share/147df026-9e4c-483f-b6b7-601debd1cdf1
I think I have gold. I can either make it describe ATU story types as generic graphs and then generate stories matching those graphs, or make it generate graphs of existing stories with replacements and then generate stories based on these changed (but still concrete) graphs.
Bonus points: find some parseable ATU index online and pick the ATU indexes that have no Ukrainian or Russian folktales associated with them.
> ag ath | sed "s/.*: \(.*\)/\\1/g" > motivs_present
gave me the list of motifs with Slavic tales.
I think this is the best prompt (https://chat.openai.com/share/32f40f4e-3973-42cf-8621-2eb8ebb94e76):
Generate an abstract graph structure for a narrative involving multiple animate characters. The graph should include nodes for entities, goals, challenges, interactions, outcomes, and moral lessons. Each node should abstractly represent the core elements of the story, focusing on thematic and moral aspects rather than the literal narrative. The format should be similar to a semantic web ontology, emphasizing relationships and abstract concepts. Please provide the graph in a Python dictionary format, with complexity and depth akin to an advanced semantic network. It should be less than 15 lines long.
The story is ATU motif 1: “the theft of fish”
Trivially editable to be about not-fish.
narrative_graph = {
"Entities": {
"Thief": {"Characteristics": ["Cunning", "Resourceful"], "Role": "Protagonist"},
"Fish": {"Characteristics": ["Valuable", "Symbolic"], "Role": "Object"},
"Owner": {"Characteristics": ["Victimized", "Unaware"], "Role": "Antagonist"}
},
"Goals": {
"Thief": "Steal Fish",
"Owner": "Protect Property"
},
"Challenges": {
"Thief": "Avoid Detection",
"Owner": "Secure Property"
},
"Interactions": {
("Thief", "Fish"): "Theft",
("Thief", "Owner"): "Avoidance",
("Owner", "Fish"): "Ownership"
},
"Outcomes": {
"Immediate": "Successful Theft",
"Long-term": "Loss of Trust"
},
"Moral Lessons": {
"Actions Have Consequences",
"Importance of Trust",
"Greed Leads to Loss"
}
}
Yes, this is the solution. Woohoo! I’ll prolly manually pick some good ATUs and craft the different JSON descriptions from which multiple stories will be generated.
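If this moves from the ChatGPT UI to the API at some point, the generation step could look roughly like this (a sketch; the model name, prompt wording and the use of the narrative_graph dict from above are assumptions, not the actual pipeline):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def graph_to_story(graph: dict, language: str = "Ukrainian") -> str:
    """Ask the model to turn an abstract narrative graph into a concrete story."""
    prompt = (
        f"Write a short children's story in {language} that follows this "
        f"abstract narrative graph:\n{graph!r}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# story = graph_to_story(narrative_graph)
```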
Also, gpt3.5 creates graphs faster than gpt4, and of very comparable quality. This is 3.5:
{
"Entities": [
{
"Name": "Thief",
"Type": "Character"
},
{
"Name": "Fisherman",
"Type": "Character"
},
{
"Name": "Fish",
"Type": "Object"
}
],
"Goals": [
{
"Entity": "Thief",
"Goal": "Steal Fish"
},
{
"Entity": "Fisherman",
"Goal": "Protect Fish"
}
],
"Challenges": [
{
"Challenge": "Conceal Theft",
"Entity": "Thief"
},
{
"Challenge": "Catch Thief",
"Entity": "Fisherman"
}
],
"Interactions": [
{
"Interaction": "Theft",
"Entities": ["Thief", "Fisherman"]
}
],
"Outcomes": [
{
"Outcome": "Thief Caught",
"Entities": ["Thief"]
},
{
"Outcome": "Fish Returned",
"Entities": ["Fisherman"]
}
],
"MoralLessons": [
{
"Lesson": "Stealing leads to consequences",
"Entities": ["Thief"]
},
{
"Lesson": "Protecting what's yours is important",
"Entities": ["Fisherman"]
}
]
}
231010-1003 Masterarbeit Tagebuch#231024-1704 Master thesis task CBT

Results of the discussion:
- The general idea of using label-studio to filter out bad examples is solid
  - -> what are the chances that a LM gets the answer right randomly?
- Some examples are bad not because they’re impossible, but because no context is needed to narrow down the solution
  - e.g. in the example below, it’s clearly ‘vater’ or ‘mutter’ regardless of what the context says
  - … leading to a probability not of 1/4 (..1/10) but of 1/2
  - One way to filter out such bad examples is to get a LM to solve the task without providing the context, or even better, to look at the distribution of probabilities over the answers and see if some are MUCH more likely than the others (see the sketch below)
- Issue with 2-3-4 plurals: I can just create three classes of nouns: singular, 2-3-4, and >=5
- Don’t forget to discuss the morphology complexities in the Masterarbeit
- Conveying the issues in English is hard, but I can (for a given UA example):
  - provide the morphology info for the English words
  - provide a third German translation
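A sketch of that “solve it without context” filter (the model name is a placeholder for whatever Ukrainian-capable causal LM ends up being used, and the margin is made up):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some-ukrainian-causal-lm"  # placeholder, not a real model name
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def option_score(question: str, option: str) -> float:
    """Mean log-likelihood of the question with the blank filled, WITHOUT the story context."""
    ids = tok(question.replace("______", option), return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item()

def too_easy(question: str, options: list[str], margin: float = 2.0) -> bool:
    """An example is suspicious if one option dominates even without the context."""
    scores = sorted(option_score(question, o) for o in options)
    return scores[-1] - scores[-2] > margin
```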
What if I use the same approach I did in LMentry, with capitalizing the words I feel are worth replacing and then doing regex/templating magic?
I should refactor the code that, given a specific word and the options, replaces the word with a blank and puts the options in the correct morphology; that way I’ll be able to reuse it as a more generic thing.
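A hypothetical signature for that refactor (function and argument names are mine; disamb stands for the disambiguator mentioned earlier, and only case/number are matched, since I don’t want to touch gender):

```python
def make_gap_task(sentence: str, target: str, distractors: list[str], disamb) -> dict:
    """Blank out `target` in `sentence`; inflect every option to the target's case/number."""
    target_parse = disamb.pymorphy_analyzer.parse(target)[0]
    wanted = {g for g in (target_parse.tag.case, target_parse.tag.number) if g}
    options = []
    for word in [target] + distractors:
        p = disamb.pymorphy_analyzer.parse(word)[0]
        options.append((p.inflect(wanted) or p).word)  # keep as-is if inflection fails
    return {
        "question": sentence.replace(target, "______", 1),
        "options": options,
        "answer": options[0],
    }
```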
Similar tasks:
In a certain sense, the Winograd Schema Challenge[^winograd] is trying to do basically the same thing as I am and describes many of the same pitfalls. WinoGrande[^winogrande] is the same but larger, formulated as a fill-in-the-blank task, and the paper contains info about how they benchmarked RoBERTa etc. on it: input formats and all that.
"Одного разу селянин пішов у поле орати. Дружина зібрала йому обід. У селянина був семирічний син. Каже він матері: — Мамо, дай-но я віднесу обід батькові. — Синку, ти ще малий, не знайдеш батька, — відповіла мати. — Не бійтеся, матінко. Дорогу я знаю, обід віднесу. Мати врешті погодилась, зав’язала хліб у вузлик, приладнала йому на спину, вариво налила у миску, дала синові в ______ та й відправила у поле. Малий не заблукав, доніс обід батькові. — Синку, як ти мене знайшов? — запитав батько. — Коли вже так, віднеси обід до джерела, я туди прийду і поїмо. — Ні, батьку, — сказав син."
Замість _______ має бути:
"цар",
"рибки",
"хлопця",
"сина",
"руки"
?
On Perplexity Labs:
2024-01-11 12:58
2024-02-07 20:55
Hopefully the last problem of this type.
(Pdb++) x
Грізний
(Pdb++) x.morph
Animacy=Anim|Case=Gen|Gender=Masc|NameType=Sur|Number=Sing
(Pdb++) print(self.disamb.pymorphy_analyzer.parse(x.text))
[
Parse(
word='грізний',
tag=OpencorporaTag('ADJF,compb masc,nomn'),
normal_form='грізний',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'грізний', 76, 0),)
),
Parse(
word='грізний',
tag=OpencorporaTag('ADJF,compb masc,accs'),
normal_form='грізний',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'грізний', 76, 4),)
),
Parse(
word='грізний',
tag=OpencorporaTag('ADJF,compb masc,voct'),
normal_form='грізний',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'грізний', 76, 8),)
)
]
This happens for only some of them, e.g. Швидкий is sometimes OK:
(Pdb++) x
Швидкий
(Pdb++) x.morph
Animacy=Anim|Case=Nom|Gender=Masc|NameType=Sur|Number=Sing
(Pdb++) print(self.disamb.pymorphy_analyzer.parse(x.text))
[
Parse(
word='швидкий',
tag=OpencorporaTag('ADJF,compb masc,nomn'),
normal_form='швидкий',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'швидкий', 76, 0),)
),
Parse(
word='швидкий',
tag=OpencorporaTag('ADJF,compb masc,accs'),
normal_form='швидкий',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'швидкий', 76, 4),)
),
Parse(
word='швидкий',
tag=OpencorporaTag('ADJF,compb masc,voct'),
normal_form='швидкий',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'швидкий', 76, 8),)
),
Parse(
word='швидкий',
tag=OpencorporaTag('NOUN,anim masc,nomn'),
normal_form='швидкий',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'швидкий', 95, 0),)
),
Parse(
word='швидкий',
tag=OpencorporaTag('NOUN,anim masc,voct'),
normal_form='швидкий',
score=1.0,
methods_stack=((DictionaryAnalyzer(), 'швидкий', 95, 7),)
)
]
(and then my get_with_disambiguation works)
What can I do?
I’ll try the last one.
['кравчиня', 'грізний', 'звір', 'швидкий', 'лев', 'грізного']
Only one of these is, strictly speaking, a feminine noun; bad example.
['кравчиня']
BUT I don’t want to inflect them by gender, because the lion Грізний != Грізна.
ALSO, lemmas are a bundle of joy I forgot about:
(Pdb++) x1,x1.lemma_,x1.morph
(Повільна, 'повільний', Case=Nom|Degree=Pos|Gender=Fem|Number=Sing)
(Pdb++) x2,x2.lemma_,x2.morph
(Грізного, 'грізного', Animacy=Anim|Case=Gen|Gender=Masc|NameType=Sur|Number=Sing)
(Pdb++) [v for k,v in lemmas.items()]
[жителі, власником, заєць, Швидкий, кравчиня, працівницею, левів, Грізний, Грізному, Грізного, звірів, конкуренти, лисиця, Хитра, вовк, Звірі, пліткам, Злий, конкурентки]
and their genders according to spacy:
[['Masc'], ['Masc'], ['Masc'], ['Masc'], ['Fem'], ['Fem'], ['Masc'], ['Masc'], ['Masc'], ['Masc'], ['Masc'], ['Masc'], ['Fem'], ['Fem'], ['Masc'], [], ['Masc'], ['Masc'], ['Fem']]
Looking even deeper: spacy doesn’t get even the more frequent names right either. And counting capitalized occurrences to build a dictionary etc. is not worth the effort. Giving up.
Also, how interesting:
(Pdb++) x
Шакал
(Pdb++) x.morph
Animacy=Anim|Case=Gen|Gender=Fem|NameType=Sur|Number=Sing|Uninflect=Yes
# lowercase version is interesting as well, inan ins?
Animacy=Inan|Case=Ins|Gender=Masc|Number=Sing
Also, looking at “Лео” I realize that pymorphy3 is really much better: it sees it as a NOUN/Name, as opposed to UNKN. Is it too late?
Switched to pymorphy3; except for two weird words (incl. Жаба) where there’s no case, I can see no difference. Left it so.
What spacy sees as VERB pymorphy sees as GRND:
> cand
відчувши
> cand.morph
Aspect=Perf|Tense=Past|VerbForm=Conv
> cand.pos_
'VERB'
> self.disamb.get_with_disambiguation(cand).tag
OpencorporaTag('GRND,perf')
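So the disambiguation step needs a small POS translation layer between UD and OpenCorpora tags; a sketch (the mapping table is mine and certainly incomplete):

```python
UD_TO_OC = {"NOUN": "NOUN", "ADJ": "ADJF", "VERB": "VERB", "ADV": "ADVB"}

def pymorphy_pos_for(token) -> set:
    """Map spacy's UD POS (+ features) to OpenCorpora-style POS grammemes."""
    if token.pos_ == "VERB" and token.morph.get("VerbForm") == ["Conv"]:
        return {"GRND"}  # adverbial participle / дієприслівник
    if token.pos_ == "VERB" and token.morph.get("VerbForm") == ["Part"]:
        return {"PRTF", "PRTS"}
    return {UD_TO_OC.get(token.pos_, token.pos_)}

def pos_matches(token, oc_tag) -> bool:
    return bool(pymorphy_pos_for(token) & set(oc_tag.grammemes))
```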
[^winograd]: Hector Levesque, Ernest Davis, Leora Morgenstern, "The Winograd Schema Challenge" (2012).
[^winogrande]: Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi, "WinoGrande: An Adversarial Winograd Schema Challenge at Scale" (2019), http://arxiv.org/abs/1907.10641.
This[^1] redirects python logging to tqdm.write(), which plays better with tqdm progress bars:
import logging
from tqdm import tqdm
from tqdm.contrib.logging import logging_redirect_tqdm

logging.basicConfig(level=logging.INFO)
with logging_redirect_tqdm():
    for i in tqdm(range(10)):
        logging.info("this goes through tqdm.write(), so the bar isn't broken")
Add a total= (int) parameter to tqdm.tqdm():
from tqdm import tqdm
# df.iterrows() has no len(), so tell tqdm the total explicitly
for index, row in tqdm(df.iterrows(), total=df.shape[0]):
    print("index", index)
    print("row", row)
Also, you can manually update tqdm bars:
with tqdm(total=num_articles, desc="total") as pbar:
    # ...
    pbar.update(10)
    # or just pbar.update()
The Python package speedtest-cli does what it says on the tin.
BUT! TIL it has options, the most useful ones being:
--no-download Do not perform download test
--no-upload Do not perform upload test
--simple Suppress verbose output, only show basic information
--csv Suppress verbose output, only show basic information in CSV format. Speeds listed in bit/s and not affected by --bytes
--json Suppress verbose output, only show basic information in JSON format. Speeds listed in bit/s and not affected by --bytes
--mini MINI URL of the Speedtest Mini server
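Since it’s a Python package, there’s also a Python API (a sketch based on the project README, not verified here):

```python
import speedtest

s = speedtest.Speedtest()
s.get_best_server()
s.download()
s.upload()
print(s.results.dict())  # ping, download/upload in bit/s, server info, ...
```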
Also: allegedly all providers treat speedtest.net as special for PR purposes.
Ergo:
wget
Is there a way to adjust the zoom level of the displayed pdf? - Overleaf, Online-LaTeX-Editor: "move your mouse near to the top-left of the PDF preview panel" and then you see the settings. Can't call that intuitive.
dateparser – python parser for human readable dates — DateParser 1.1.2 documentation
Directly from its docu:
>>> import dateparser
>>> dateparser.parse('12/12/12')
datetime.datetime(2012, 12, 12, 0, 0)
>>> dateparser.parse('Fri, 12 Dec 2014 10:55:50')
datetime.datetime(2014, 12, 12, 10, 55, 50)
>>> dateparser.parse('Martes 21 de Octubre de 2014') # Spanish (Tuesday 21 October 2014)
datetime.datetime(2014, 10, 21, 0, 0)
>>> dateparser.parse('Le 11 Décembre 2014 à 09:00') # French (11 December 2014 at 09:00)
datetime.datetime(2014, 12, 11, 9, 0)
>>> dateparser.parse('13 января 2015 г. в 13:34') # Russian (13 January 2015 at 13:34)
datetime.datetime(2015, 1, 13, 13, 34)
>>> dateparser.parse('1 เดือนตุลาคม 2005, 1:00 AM') # Thai (1 October 2005, 1:00 AM)
datetime.datetime(2005, 10, 1, 1, 0)
but I installed it because it can also do:
>>> dateparser.parse("yesterday")
datetime.datetime(2023, 10, 15, 15, 15, 37, 652629)
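Relative dates can also be anchored to a different “now” via the settings dict (from what I remember of the docs, RELATIVE_BASE is a documented setting; the output below is illustrative):

```python
import datetime
import dateparser

base = datetime.datetime(2023, 10, 15, 12, 0)
dateparser.parse("yesterday", settings={"RELATIVE_BASE": base})
# -> datetime.datetime(2023, 10, 14, 12, 0)
```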
TODO: understand what happens here. poetry add pandas alone is not possible because of numpy, but installing numpy first and then pandas works.
> poetry add pandas
Using version ^2.1.1 for pandas
Updating dependencies
Resolving dependencies... (0.5s)
The current project's Python requirement (>=3.9,<4.0) is not compatible with some of the required packages Python requirement:
- numpy requires Python <3.13,>=3.9, so it will not be satisfied for Python >=3.13,<4.0
- numpy requires Python <3.13,>=3.9, so it will not be satisfied for Python >=3.13,<4.0
Because no versions of numpy match >1.26.0,<1.26.1 || >1.26.1
and numpy (1.26.0) requires Python <3.13,>=3.9, numpy is forbidden.
And because numpy (1.26.1) requires Python <3.13,>=3.9, numpy is forbidden.
Because no versions of pandas match >2.1.1,<3.0.0
and pandas (2.1.1) depends on numpy (>=1.26.0), pandas (>=2.1.1,<3.0.0) requires numpy (>=1.26.0).
Thus, pandas is forbidden.
So, because up-crawler depends on pandas (^2.1.1), version solving failed.
• Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties
For numpy, a possible solution would be to set the `python` property to ">=3.9,<3.13"
For numpy, a possible solution would be to set the `python` property to ">=3.9,<3.13"
https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
https://python-poetry.org/docs/dependency-specification/#using-environment-markers
(up-crawler-py3.10) 14:15:49 ~/uuni/master/code/up_crawler/ 1
> poetry add numpy
Using version ^1.26.1 for numpy
Updating dependencies
Resolving dependencies... (0.2s)
Package operations: 1 install, 0 updates, 0 removals
• Installing numpy (1.26.1)
Writing lock file
(up-crawler-py3.10) 14:16:03 ~/uuni/master/code/up_crawler/ 0
> poetry add pandas
Using version ^2.1.1 for pandas
Updating dependencies
Resolving dependencies... (0.2s)
Package operations: 2 installs, 0 updates, 0 removals
• Installing tzdata (2023.3)
• Installing pandas (2.1.1)
Writing lock file
Refs:
When trying to use pyenv to use Python 3.10 in my new poetry project, I got issues, one of them being basically this: ModuleNotFoundError: No module named ‘cleo’ after poetry self:update · Issue #553 · python-poetry/poetry.
Reinstalling alone didn’t help; I had to delete the poetry dir too. So, the solution:
# pyenv local 3.10.0 but shouldn't matter
# delete the poetry directory completely
rm -rf ~/.poetry
# reinstall it
curl -sSL https://install.python-poetry.org | python3 -
# in my project -
poetry env use 3.10
# poetry shell, poetry install - works!
The poetry docu on poetry+pyenv: Managing environments | Documentation | Poetry - Python dependency management and packaging made easy
> poetry env use 3.10.0
# has to be available in $PATH I think..
> poetry env info
Virtualenv
Python: 3.10.0
Implementation: CPython
Path: /home/sh/uuni/master/code/up_crawler/.venv
Executable: /home/sh/uuni/master/code/up_crawler/.venv/bin/python
Valid: True
System
Platform: linux
OS: posix
Python: 3.10.0
Path: /home/sh/.pyenv/versions/3.10.0
Executable: /home/sh/.pyenv/versions/3.10.0/bin/python3.10
14:02:42 ~/uuni/master/code/up_crawler/ 0
> python3
Python 3.8.10 (default, May 26 2023, 14:05:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
It lies about the system one, and it’s not the initial pyenv local 3.10.0 setting, but I don’t care: it works :sparkles: