"""A command line interface to a shift reduce constituency parser.

This follows the work of
Recurrent neural network grammars by Dyer et al
In-Order Transition-based Constituent Parsing by Liu & Zhang

The general outline is:

  Train a model by taking a list of trees, converting them to
    transition sequences, and learning a model which can predict the
    next transition given a current state
  Then, at inference time, repeatedly predict the next transition until parsing is complete

The "transitions" are variations on shift/reduce as per an
intro-to-compilers class.  The idea is that you can treat all of the
words in a sentence as a buffer of tokens, then either "shift" them to
represent a new constituent, or "reduce" one or more constituents to
form a new constituent.
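
To make this concrete, here is a minimal, illustrative sketch of applying a
transition sequence to a buffer of words.  The names and the simplified
transitions below are made up for illustration only; the parser's real State
and Transition classes live in constituency/parse_transitions.py:

    def run_transitions(words, transitions):
        buffer = list(reversed(words))   # the next word to shift sits at the end
        stack = []
        for t in transitions:
            if t == "SHIFT":
                # move the next word onto the stack as a new constituent
                stack.append(buffer.pop())
            else:
                # ("CLOSE", label, n): reduce the top n items into one labeled constituent
                _, label, n = t
                children = stack[-n:]
                del stack[-n:]
                stack.append((label, children))
        return stack

    # run_transitions(["the", "dog", "barks"],
    #                 ["SHIFT", "SHIFT", ("CLOSE", "NP", 2),
    #                  "SHIFT", ("CLOSE", "VP", 1), ("CLOSE", "S", 2)])
    # -> [("S", [("NP", ["the", "dog"]), ("VP", ["barks"])])]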

In order to make the runtime speed more competitive, effort is taken
to batch the transitions and apply multiple transitions at once.  At
train time, batches are grouped together by length, and at inference
time, new trees are added to the batch as previous trees in the batch
finish their inference.
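
A rough sketch of the two batching strategies, with made-up names; this is
not the trainer's actual code, just the shape of the idea:

    def train_batches(trees, batch_size):
        # group trees of similar length so each batch needs a similar number of transitions
        trees = sorted(trees, key=len)
        for i in range(0, len(trees), batch_size):
            yield trees[i:i + batch_size]

    def parse_all(sentences, batch_size, parse_step, is_finished):
        # at inference time, refill the batch whenever a tree finishes parsing
        pending = list(sentences)
        batch, results = [], []
        while pending or batch:
            while pending and len(batch) < batch_size:
                batch.append(pending.pop())
            batch = parse_step(batch)     # apply one predicted transition to each state
            results.extend(state for state in batch if is_finished(state))
            batch = [state for state in batch if not is_finished(state)]
        return results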

There are a few minor differences in the model:
  - The word input is a bi-lstm, not a uni-lstm.
    This gave a small increase in accuracy.
  - The combination of several constituents into one constituent is done
    via a single bi-lstm rather than two separate lstms.  This increases
    speed without a noticeable effect on accuracy.
  - In fact, an even better (in terms of final model accuracy) method
    is to combine the constituents with torch.max, believe it or not.
    See lstm_model.py for more details; a brief sketch appears after this list.
  - Initializing the embeddings with smaller values than pytorch default
    For example, on a ja_alt dataset, scores went from 0.8980 to 0.8985
    at 200 iterations averaged over 5 trials
  - Training with AdaDelta first, then AdamW or madgrad later improves
    results quite a bit.  See --multistage
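
A tiny sketch of the torch.max combination mentioned in the list above,
assuming each child constituent has already been encoded as a fixed-size
vector.  The tensor names here are hypothetical; the real version is selected
with --constituency_composition (ConstituencyComposition.MAX) in lstm_model.py:

    import torch

    child_hx = [torch.randn(128) for _ in range(3)]             # encoded child constituents
    combined = torch.stack(child_hx, dim=0).max(dim=0).values   # elementwise max -> (128,)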

A couple experiments which have been tried with little noticeable impact:
  - Combining constituents using the method in the paper (only a trained
    vector at the start instead of both ends) did not affect results
    and is a little slower
  - Using multiple layers of LSTM hidden state for the input to the final
    classification layers didn't help
  - Initializing Linear layers with He initialization and a positive bias
    (to avoid dead connections) had no noticeable effect on accuracy
    0.8396 on it_turin with the original initialization
    0.8401 and 0.8427 on two runs with updated initialization
    (so maybe a small improvement...)
  - Initializing LSTM layers with different gates was slightly worse:
    forget gates of 1.0
    forget gates of 1.0, input gates of -1.0
  - Replacing the LSTMs that make up the Transition and Constituent
    LSTMs with Dynamic Skip LSTMs made no difference, but was slower
  - Highway LSTMs also made no difference
  - Putting labels on the shift transitions (the word or the tag shifted)
    or putting labels on the close transitions didn't help
  - Building larger constituents from the output of the constituent LSTM
    instead of the children constituents hurts scores
    For example, an experiment on ja_alt went from 0.8985 to 0.8964
    when built that way
  - The initial transition scheme implemented was TOP_DOWN.  We tried
    a compound unary option, since this worked so well in the CoreNLP
    constituency parser.  Unfortunately, this is far less effective
    than IN_ORDER.  Both specialized unary matrices and reusing the
    n-ary constituency combination fell short.  On the ja_alt dataset:
      IN_ORDER, max combination method:           0.8985
      TOP_DOWN_UNARY, specialized matrices:       0.8501
      TOP_DOWN_UNARY, max combination method:     0.8508
  - Adding multiple layers of MLP to combine inputs for words made
    no difference in the scores.
    Tried both before the LSTM and after.
    A simple single layer tensor multiply after the LSTM works well.
    Replacing that with a two layer MLP on the English PTB
    with roberta-base causes a notable drop in scores.
    The first experiment didn't use the fancy Linear weight init,
    but adding that barely made a difference:
      260 training iterations on en_wsj dev, roberta-base
      model as of bb983fd5e912f6706ad484bf819486971742c3d1
      two layer MLP:                    0.9409
      two layer MLP, init weights:      0.9413
      single layer:                     0.9467
  - There is code to rebuild models with a new structure in lstm_model.py.
    As part of this, we tried to randomly reinitialize the transitions
    if the transition embedding had gone to 0, which often happens.
    This didn't help at all
  - We tried something akin to attention with just the query vector
    over the bert embeddings as a way to mix them, but that did not
    improve scores.
    Example, with a self.bert_layer_mix of size bert_dim x 1:
        mixed_bert_embeddings = []
        for feature in bert_embeddings:
            weighted_feature = self.bert_layer_mix(feature.transpose(1, 2))
            weighted_feature = torch.softmax(weighted_feature, dim=1)
            weighted_feature = torch.matmul(feature, weighted_feature).squeeze(2)
            mixed_bert_embeddings.append(weighted_feature)
        bert_embeddings = mixed_bert_embeddings
    It seems just finetuning the transformer is already enough
    (in general, no need to mix layers at all when finetuning bert embeddings)


The code breakdown is as follows:

  this file: main interface for training or evaluating models
  constituency/trainer.py: contains the training & evaluation code
  constituency/ensemble.py: evaluation code specifically for letting multiple models
    vote on the correct next transition.  a modest improvement.
  constituency/evaluate_treebanks.py: specifically to evaluate multiple parsed treebanks
    against a gold treebank.  in particular, reports whether the theoretical best from those
    parsed treebanks is an improvement (eg, the k-best score as reported by CoreNLP)

  constituency/parse_tree.py: a data structure for representing a parse tree and utility methods
  constituency/tree_reader.py: a module which can read trees from a string or input file

  constituency/tree_stack.py: a linked list which can branch in
    different directions, which will be useful when implementing beam
    search or a dynamic oracle
  constituency/lstm_tree_stack.py: an LSTM over the elements of a TreeStack
  constituency/transformer_tree_stack.py: attempts to run attention over the nodes
    of a tree_stack.  not as effective as the lstm_tree_stack in the initial experiments.
    perhaps it could be refined to work better, though

  constituency/parse_transitions.py: transitions and a State data structure to store them
  constituency/transition_sequence.py: turns ParseTree objects into
    the transition sequences needed to make them

  constituency/base_model.py: operates on the transitions to turn them in to constituents,
    eventually forming one final parse tree composed of all of the constituents
  constituency/lstm_model.py: adds LSTM features to the constituents to predict what the
    correct transition to make is, allowing for predictions on previously unseen text

  constituency/retagging.py: a couple utility methods specifically for retagging
  constituency/utils.py: a couple utility methods

  constituency/dynamic_oracle.py: a dynamic oracle which currently
    only operates for the inorder transition sequence.
    uses deterministic rules to redo the correct action sequence when
    the parser makes an error.

  constituency/partitioned_transformer.py: implementation of a transformer for self-attention.
     presumably this should help, but we have yet to find a model structure where
     this makes the scores go up.
  constituency/label_attention.py: an even fancier form of transformer based on labeled attention:
     https://arxiv.org/abs/1911.03875
  constituency/positional_encoding.py: so far, just the sinusoidal is here.
     a trained encoding is in partitioned_transformer.py.
     this should probably be refactored to common, especially if used elsewhere.

  stanza/pipeline/constituency_processor.py: interface between this model and the Pipeline

  stanza/utils/datasets/constituency: various scripts and tools for processing constituency datasets

Some alternate optimizer methods:
  adabelief: https://github.com/juntang-zhuang/Adabelief-Optimizer
  madgrad: https://github.com/facebookresearch/madgrad

    N)constant)utils)add_peft_argsresolve_peft_args)parser_training)	retagging)ConstituencyCompositionSentenceBoundaryStackHistory)TransitionScheme)load_model_parse_text)DEFAULT_LEARNING_EPSDEFAULT_LEARNING_RATESDEFAULT_MOMENTUMDEFAULT_LEARNING_RHODEFAULT_WEIGHT_DECAYNONLINEARITYadd_predict_output_argspostprocess_predict_output_args)DEFAULT_MODEL_DIRstanzazstanza.constituency.trainerc               
   C   s>  t  } | jdtddd | jdtddd | jdtd	d
d | jdtddd | jdtdd | jdtddd | jdtddd | jdtddd | jdddddd | jdtddd | jdddd d!d" | jd#d$d%d&d' | jd(d)d*d+d, | jd-dtd.d/ | jd0dtd1d/ | jd2dtd3d/ | jd4d5td6d/ | jd7dtd8d/ | jd9d:td;d/ | jd<dd%d=d' | jd>d?d*d@d, t|  | jdAtdBdCd | jdDtdEdFd | jdGtddHd | jdIdJd*dKdLdM | jdNtddOd | jdPd$d%dQd' | jdRtddHd | jdStddTd | jdUtddVd | jdWtddXd | jdYdZg d[d\ | jd]td^d_d t|  | jd`tdadb | jdctdddb | jdetdBdfd | jdgtdBdhd | jditj	djdk dl
dmdndo tD d/ | jdpdtdqd/ | jdrtj	dsdk dl
dmdtdo tD d/ | jdudvtdqd/ | jdwtdxdyd | jdztd{d | jd|td}d~d | jdtddd | jddJd%dd' | jddd*dd, | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdtddd | jdddJd*dd | jdtdd | jddJd*dddM | jddJd*dddM t|  | jddtd
td/ | jddtd
td/ | jddtd
td/ | jddtdd/ | jdttdd/ | jddtdd/ | jdddd | jddtdĠ
td d/ | jdd^tdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jdddddd" | jdddd | jddtdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jddtdd/ | jdtjddk d
dmddo tD d/ | jdd$d%dd' | jddJd%dd' | jddd*dd, | jddt dd | jddtdd/ | jddJdd%d d | jddJdd*d d | jddtdd/ | jddtdd/ | jddtd	d/ | jd
dtdd/ | jddtdd/ | jddtdd/ | jdtjddk d
dmddo tD d/ | jdtjddk d
dmddo tD d/ | jddvtdd/ | jddtdd/ | jdd%dd | jd d%d!d | jd"tdd#d | jd$d%d*d&d, | jd'tdd(d | jd)tdd*d t|  | jd+d,td-d/ | jd.dtd/d/ | jd0dxtd1d/ | jd2dvtd3d/ | jd4d5td6d/ | jd7d8td9d/ | jd:d;td<d/ | jd=dtd>d/ | jd?dtd@d/ | jdAd^tdBd/ | jdCd$d%dDd' | jdEdFdGdFgdHd | jdIdtdJd/ | jdKd5tdLd/ | jdMd5tdNd/ | jdOdJd%dPd' | jdQdJd%dRd' | jdSd$d%dTd' | jdUdJd%dVd' | jdWdJd*dXdVdM | jdYd$d%dZd' | jd[d\td]d/ | jd^dtd_d/ | jd`d8tdad/ | jdbdtdcd/ | jdddtded/ | jdfdJd%dgd' | jdhd$d%did' | jdjdkd*did, | jdldmd*dnd, | jdod$d%dpd' | jdqd$d%drd' | jdsddtd | jdud%dvd | jdwddxd | jdyddzd | S ({  z
    Adds the arguments for building the con parser

    For the most part, defaults are set to cross-validated values, at least for WSJ
    z
--data_dirzdata/constituencyzDirectory of constituency data.)typedefaulthelpz--wordvec_dirzextern_data/wordveczDirectory of word vectorsz--wordvec_file zFile that contains word vectorsz--wordvec_pretrain_fileNz'Exact name of the pretrain file to readz--pretrain_max_vocabi )r   r   z--charlm_forward_filez$Exact path to use for forward charlmz--charlm_backward_filez%Exact path to use for backward charlmz--bert_modelz>Use an external bert model (requires the transformers package)z--no_bert_model
bert_modelstore_constzDon't use bert)destactionconstr   z--bert_hidden_layers   z;How many layers of hidden state to use from the transformerz--bert_hidden_layers_originalbert_hidden_layersz&Use layers 2,3,4 of the Bert embedding)r   r   r   r   z--bert_finetuneF
store_truez(Finetune the bert (or other transformer))r   r   r   z--no_bert_finetunebert_finetunestore_falsez.Don't finetune the bert (or other transformer))r   r   r   z--bert_finetune_layersz3Only finetune this many layers from the transformer)r   r   r   z--bert_finetune_begin_epochz/Which epoch to start finetuning the transformerz--bert_finetune_end_epochz.Which epoch to stop finetuning the transformerz--bert_learning_rateg;On?z?Scale the learning rate for transformer finetuning by this muchz--stage1_bert_learning_ratez^Scale the learning rate for transformer finetuning by this much only during an AdaDelta warmupz--bert_weight_decayg-C6?z>Scale the weight decay for transformer finetuning by this muchz--stage1_bert_finetunezuFinetune the bert (or other transformer) during an AdaDelta warmup, even if the second half doesn't use bert_finetunez--no_stage1_bert_finetunestage1_bert_finetunez{Don't finetune the bert (or other transformer) during an AdaDelta warmup, even if the second half doesn't use bert_finetunez--tag_embedding_dim   z2Embedding size for a tag.  0 turns off the featurez--delta_embedding_dimd   z$Embedding size for a delta embeddingz--train_filezInput file for data loader.z--no_train_remove_duplicatesTtrain_remove_duplicateszlDo/don't remove duplicates from the training file.  Could be useful for intentionally reweighting some trees)r   r   r   r   z--silver_filezSecondary training file.z--silver_remove_duplicateszsDo/don't remove duplicates from the silver training file.  Could be useful for intentionally reweighting some treesz--eval_filez--xml_tree_filez?Input file of VLSP formatted trees for parsing with parse_text.z--tokenized_filez9Input file of tokenized text for parsing with parse_text.z--tokenized_dirz>Input directory of tokenized text for parsing with parse_text.z--modetrain)r)   
parse_textpredictremove_optimizer)r   choicesz--num_generater   zLWhen running a dev set, how many sentences to generate beyond the greedy onez--langLanguage)r   r   z--shorthandzTreebank shorthandz--transition_embedding_dimzEmbedding size for a transitionz--transition_hidden_sizez#Embedding size for transition stackz--transition_stackc                 S      t |   S Nr
   upperx r5   \/var/www/html/env_mimamsha/lib/python3.10/site-packages/stanza/models/constituency_parser.py<lambda>e      z build_argparse.<locals>.<lambda>z*How to track transitions over a parse.  {}z, c                 s       | ]}|j V  qd S r0   name.0r4   r5   r5   r6   	<genexpr>f      z!build_argparse.<locals>.<genexpr>z--transition_headszCHow many heads to use in MHA *if* the transition_stack is Attentionz--constituent_stackc                 S   r/   r0   r1   r3   r5   r5   r6   r7   i  r8   c                 s   r9   r0   r:   r<   r5   r5   r6   r>   j  r?   z--constituent_heads   z--hidden_sizei   z?Size of the output layers for constituency stack and word queuez--epochsi  z--epoch_sizei  zRuns this many trees in an 'epoch' instead of going through the training dataset exactly once.  Set to 0 to do the whole training setz--silver_epoch_sizezNRuns this many trees in a silver 'epoch'.  If not set, will match --epoch_sizez--multistagez^1/2 epochs with adadelta no pattn or lattn, 1/4 with chosen optim and no lattn, 1/4 full modelz--no_multistage
multistagez don't do the multistage learningz--oracle_initial_epoch   z_Epoch where we start using the dynamic oracle to let the parser keep going with wrong decisionsz--oracle_frequencyg?zHHow often to use the oracle vs how often to force the correct transitionz--oracle_forced_errorsgMbP?z`Occasionally have the model randomly walk through the state space to try to learn how to recoverz--oracle_levelziRestrict oracle transitions to this level or lower.  0 means off.  None means use all oracle transitions.z--additional_oracle_levelszpAdd some additional experimental oracle transitions.  Basically for A/B testing transitions we expect to be bad.z--deactivated_oracle_levelszhTemporarily turn off a default oracle level.  Basically for A/B testing transitions we expect to be bad.z--train_batch_size   z7How many trees to train before taking an optimizer stepz--eval_batch_size2   z)How many trees to batch when running evalz
--save_dirzsaved_models/constituencyzRoot dir for saving models.z--save_namez2{shorthand}_{embedding}_{finetune}_constituency.ptzFile name to save the modelz--save_each_namez@Save each model in sequence to this pattern.  Mostly for testingz--save_each_startzWhen to start saving each modelz--save_each_frequencyz!How frequently to save each modelz--no_save_each_optimizersave_each_optimizerz1Don't save the optimizer when saving 'each' model)r   r   r   r   z--seedi  z--no_check_valid_statescheck_valid_stateszDon't check the constituents or transitions in the dev set when starting a new parser.  Warning: the parser will never guess unknown constituentsz--no_strict_check_constituentsstrict_check_constituentsz`Don't check the constituents between the train & dev set.  May result in untrainable transitionsz--learning_ratezLearning rate for the optimizer.  Reasonable values are 1.0 for adadelta or 0.001 for SGD.  None uses a default for the given optimizer: {}z--learning_epszSeps value to use in the optimizer.  None uses a default for the given optimizer: {}z--learning_momentumz:Momentum.  None uses a default for the given optimizer: {}z--learning_weight_decayz1Weight decay (eg, l2 reg) to use in the optimizerz--learning_rhozRho parameter in Adadeltaz--learning_beta2g+?zBeta2 argument for AdamWz--optimz8Optimizer type: SGD, AdamW, Adadelta, AdaBelief, Madgrad)r   r   z--stage1_learning_ratezTLearning rate to use in the first stage of --multistage.  None means use default: {}adadeltaz--learning_rate_warmupzNumber of epochs to ramp up learning rate from 0 to full.  Set to 0 to always use the chosen learning rate.  Currently not functional, as it didn't do anythingz--learning_rate_factorg333333?z-Plateau learning rate decreate when plateauedz--learning_rate_patience   zPlateau learning rate patiencez--learning_rate_cooldown
   zPlateau learning rate cooldownz--learning_rate_min_lrzPlateau learning rate minimumz--stage1_learning_rate_min_lrz'Plateau learning rate minimum (stage 1)z--grad_clippingzPClip abs(grad) to this amount.  Use --no_grad_clipping to turn off grad clippingz--no_grad_clippinggrad_clippingz0Use --no_grad_clipping to turn off grad clippingz--losscrosszMcross, large_margin, or focal.  Focal requires `pip install focal_loss_torch`z--loss_focal_gamma   zgamma value for a focal lossz--early_dropoutzWhen to turn off dropoutz--word_dropoutg?zDropout on the word embeddingz--predict_dropoutz%Dropout on the final prediction layerz--lstm_layer_dropoutg        zDropout in the LSTM layersz--lstm_input_dropoutzDropout on the input to an LSTMz--transition_schemec                 S   r/   r0   )r   r2   r3   r5   r5   r6   r7   c  r8   zTransition scheme to use.  {}c                 s   r9   r0   r:   r<   r5   r5   r6   r>   d  r?   z
--reversedz#Do the transition sequence reversedz--combined_dummy_embeddingzWUse the same embedding for dummy nodes and the vectors used when combining constituentsz--no_combined_dummy_embeddingcombined_dummy_embeddingz]Don't use the same embedding for dummy nodes and the vectors used when combining constituentsz--nonlinearityreluzMNonlinearity to use in the model.  relu is a noticeable improvement over tanh)r   r-   r   z
--maxout_kzAUse maxout layers instead of a nonlinearity for the output layersz--use_silver_wordsuse_silver_wordszCTrain/don't train word vectors for words only in the silver dataset)r   r   r   r   z--no_use_silver_wordsz--rare_word_unknown_frequency{Gz?z7How often to replace a rare word with UNK when trainingz--rare_word_thresholdzEHow many words to consider as rare words as a fraction of the datasetz--tag_unknown_frequencyz1How often to replace a tag with UNK when trainingz--num_lstm_layersz#How many layers to use in the LSTMsz--num_tree_lstm_layerszHow many layers to use in the TREE_LSTMs, if used.  This also increases the width of the word outputs to match the tree lstm inputs.  Default 2 if TREE_LSTM or TREE_LSTM_CX, 1 otherwisez--num_output_layers   z.How many layers to use at the prediction levelz--sentence_boundary_vectorsc                 S   r/   r0   )r	   r2   r3   r5   r5   r6   r7     r8   z5Vectors to learn at the start & end of sentences.  {}c                 s   r9   r0   r:   r<   r5   r5   r6   r>     r?   z--constituency_compositionc                 S   r/   r0   )r   r2   r3   r5   r5   r6   r7     r8   z5How to build a new composition from its children.  {}c                 s   r9   r0   r:   r<   r5   r5   r6   r>     r?   z--reduce_headszhNumber of attn heads to use when reducing children into a parent tree (constituency_composition == attn)z--reduce_positionzDimension of position vector to use when reducing children.  None means 1/4 hidden_size, 0 means don't use (constituency_composition == key | untied_key)z--relearn_structurezStarting from an existing checkpoint, add or remove pattn / lattn.  One thing that works well is to train an initial model using adadelta with no pattn, then add pattn with adamw)r   r   z
--finetunez=Load existing model during `train` mode from `load_name` pathz--checkpoint_save_namez,File name to save the most recent checkpointz--no_checkpoint
checkpointzDon't save checkpointsz--load_namezKModel to load when finetuning, evaluating, or manipulating an existing filez--load_packagezIDownload an existing stanza package & use this for tests, finetuning, etcz--pattn_d_modeli   z*Partitioned attention model dimensionalityz--pattn_morpho_emb_dropoutzFDropout rate for morphological features obtained from pretrained modelz--pattn_encoder_max_lenz?Max length that can be put into the transformer attention layerz--pattn_num_headsz5Partitioned attention model number of attention headsz--pattn_d_kv@   z Size of the query and key vectorz--pattn_d_ffi   z=Size of the intermediate vectors in the feed-forward sublayerz--pattn_relu_dropoutg?z1ReLU dropout probability in feed-forward sublayerz--pattn_residual_dropoutz9Residual dropout probability for all residual connectionsz--pattn_attention_dropoutzAttention dropout probabilityz--pattn_num_layerszENumber of layers for the Partitioned Attention.  Currently turned offz--pattn_biasz(Whether or not to learn an additive biasz--pattn_timingsinlearnedz*Use a learned embedding or a sin embeddingz--lattn_d_input_projzNIf set, project the non-positional inputs down to this size before proceeding.z--lattn_d_kvz!Dimension of the key/query vectorz--lattn_d_projz=Dimension of the output vector from each label attention headz--lattn_resdropz&Whether or not to use Residual Dropoutz--lattn_pwffz8Whether or not to use a Position-wise Feed-forward Layerz--lattn_q_as_matrixzNWhether or not Label Attention uses learned query vectors. False means it doesz--lattn_partitionedz Whether or not it is partitionedz--no_lattn_partitionedlattn_partitionedz--lattn_combine_as_selfz@Whether or not the layer uses concatenation. False means it doesz--lattn_d_l    zNumber of labelsz--lattn_attention_dropoutzDropout for attention layerz--lattn_d_ffz#Dimension of the Feed-forward layerz--lattn_relu_dropoutz$Relu dropout for the label attentionz--lattn_residual_dropoutz(Residual dropout for the label attentionz--lattn_combined_inputz4Combine all inputs for the lattn, not just the pattnz--use_lattnz+Use the lattn layers - currently turned offz--no_use_lattn	use_lattnz--no_lattn_combined_inputlattn_combined_inputz:Don't combine all inputs for the lattn, not just the pattnz--log_normsz=Log the parameters norms while training.  A very noisy optionz--log_shapesz*Log the parameters shapes at the beginningz--watch_regexz<regex to describe which weights and biases to output, if anyz--wandbzStart a wandb session and write the results of training.  Only applies to training.  Use --wandb_name instead to specify a namez--wandb_namezWName of a wandb session to start when training.  Will default to the dataset short namez--wandb_norm_regexzMLog on wandb any tensor whose norm matches this matrix.  Might get cluttered?)argparseArgumentParseradd_argumentstrintfloatr   r   r
   LSTMformatjoinr   add_device_argsr   r   r   r   r   IN_ORDERr   keysr	   
EVERYTHINGr   MAXr   add_retag_args)parserr5   r5   r6   build_argparse   sF  m	
X		
	
 rl   c                 C   s   t | }| d s| d rdnd}| d d urd| d  nd}| d j| d | d	 |||| d
 j dd| d
 j| d | d d	}tdd|}t	
d| tj|d }|| d krgtj| d |}|S )Nr#   r%   	finetunedr   bert_finetune_begin_epochz%d	save_name	shorthandoracle_leveltransition_scheme_r!   seed)	rp   rq   	embeddingfinetunetransformer_finetune_beginrr   tschemetrans_layersrt   z_+zExpanded save_name: %sr   save_dir)r   embedding_namerc   r;   lowerreplace
short_nameresubloggerinfoospathsplitrd   )argsru   maybe_finetunerw   model_save_file	model_dirr5   r5   r6   build_model_filename  s&   
	r   c              
   C   s  t  }|j| d} t| tdd | js+| jr+t| jjddddkr+| jdd | _| jd u r4| j	| _| j
d u r| jd	kr| jsSd
| _
| jrR| jsRtd d| _n8| jsY| jrbtd d| _
n)zdd l}d| _
td W n ty } ztd d| _
W Y d }~nd }~ww | jd	kr| jd u rt| j
 d | _| jd u rt| j
 d | _| jd u rt| j
 d | _| jd u rt| j
 d | _| jd u rtd
 | _| jd u r| j| _| jd u r| jd | _| j d u r| jd | _ | j!d u r| j"d | _!| j#d u r| j$t%j&t%j'fv rd| _#nd| _#| j(s | j)r#d| _*t+| } t,-|  t.|  t/| }|| d< | d rRt0j12| d | d }t34|}|| d< nt0j15| d }|d d |d  }|| d< | d rzt36| d || d | d< | S )Nr   F)check_bert_finetuners   rB   maxsplitrM   r   r)   rH   z0--use_peft set.  setting --bert_finetune as wellTzMultistage training is set, optimizer is not chosen, and bert finetuning is active.  Will use AdamW as the second stage optimizer.adamwmadgradzMultistage training is set, optimizer is not chosen, and MADGRAD is available.  Will use MADGRAD as the second stage optimizer.zMultistage training is set.  Best models are with MADGRAD, but it is not installed.  Will use AdamW for the second stage optimizer.  Consider installing MADGRADrR   r    ro   save_each_namerz   z_%04drT   checkpoint_save_name)7rl   
parse_argsr   r   langrp   lenr   stage1_bert_learning_ratebert_learning_rateoptimmoderA   use_peftr#   r   r%   r   ModuleNotFoundErrorwarninglearning_rater   getr|   learning_epsr   learning_momentumr   learning_weight_decayr   stage1_learning_ratelearning_rate_min_lrstage1_learning_rate_min_lrreduce_positionhidden_sizenum_tree_lstm_layersconstituency_compositionr   	TREE_LSTMTREE_LSTM_CX
wandb_namewandb_norm_regexwandbvarsr   postprocess_argsr   r   r   r   rd   r   build_save_each_filenamesplitextcheckpoint_name)r   rk   r   er   model_save_each_filepiecesr5   r5   r6   r     s   $


















r   c              
   C   s"  t | d} t| d  td| d  td| d  | d }| d r=tj| d r1| d }ntj	| d	 | d }nx| d
 r| d du ry| d
 j
ddd}z	t|d }W n tyn } z	td| d
  |d}~ww || d< |d | d
< tj| d dd| d
 id tj	t| d d| d
 d }tj|std| d | d
 |f td| d | d
 | | d dkrtjtjkrttj td t| }| d dkrt| || dS | d dkrt| || dS | d dkrt| || dS | d dkrt| | d | dS dS )zy
    Main function for building con parser

    Processes args, calls the appropriate function for the chosen --mode
    r   rt   z&Running constituency parser in %s moder   zUsing device: %sdevicero   	load_namerz   load_packager   Nrs   rB   r   r   z--lang not specified, and the start of the --load_package name, %s, is not a known language.  Please check the values of those parametersconstituency)
processorspackagez.ptzExpected the downloaded model file for language %s package %s to be in %s, but there is nothing there.  Perhaps the package name doesn't exist?z)Model for language %s package %s is in %sr)   z"Set trainer logging level to DEBUGr+   r*   r,   )r   r   set_random_seedr   r   debugr   r   existsrd   r   r   lang_to_langcode
ValueErrorr   downloadr   FileNotFoundErrortloggerlevelloggingNOTSETsetLevelDEBUGr   build_retag_pipeliner   r)   evaluater   r,   )r   model_load_filelang_piecesr   r   retag_pipeliner5   r5   r6   main;  sN   



r   __main__r0   )+__doc__r\   r   r   r   torchr   stanza.models.commonr   r    stanza.models.common.peft_configr   r   stanza.models.constituencyr   r   %stanza.models.constituency.lstm_modelr   r	   r
   ,stanza.models.constituency.parse_transitionsr   *stanza.models.constituency.text_processingr    stanza.models.constituency.utilsr   r   r   r   r   r   r   r   stanza.resources.commonr   	getLoggerr   r   rl   r   r   r   __name__r5   r5   r5   r6   <module>   s>     !(

    

X5
