gen
– Performs generative model evaluations and plots results
gen.py¶
usage: gen.py [-h] [--model MODEL] [--val] [--start START] [--count COUNT]
[--search SEARCH] [-p PLOT] [--exp EXP]
- -h, --help¶
show this help message and exit
- --model <model>¶
Pretrained model to use.
- --val¶
Run with the validation dataset instead of the test dataset.
- --start <start>¶
Index of the first audit log to use for the demo.
- --count <count>¶
Number of audit logs to use for the demo.
- --search <search>, -s <search>¶
Search method to use for decoding. For beam search, append :k, where k is the beam size (e.g. beam:5).
- -p <plot>, --plot <plot>¶
- --exp <exp>¶
Experiment to run.
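The options above can be reconstructed as an argparse parser. The sketch below is illustrative only: flag names follow the usage string, but the defaults and help strings are assumptions, not taken from the real gen.py.

```python
import argparse

def build_parser():
    # Illustrative reconstruction of the documented gen.py interface;
    # defaults are assumptions, not the script's actual values.
    parser = argparse.ArgumentParser(prog="gen.py")
    parser.add_argument("--model", help="Pretrained model to use.")
    parser.add_argument("--val", action="store_true",
                        help="Use the validation dataset instead of the test dataset.")
    parser.add_argument("--start", type=int, default=0,
                        help="Index of the first audit log to use for the demo.")
    parser.add_argument("--count", type=int, default=1,
                        help="Number of audit logs to use for the demo.")
    parser.add_argument("--search", "-s", default="greedy",
                        help="Search method; for beam search use beam:k.")
    parser.add_argument("-p", "--plot")
    parser.add_argument("--exp", help="Experiment to run.")
    return parser

args = build_parser().parse_args(
    ["--model", "gpt2", "--val", "--search", "beam:5", "--count", "10"]
)
# Split a beam:k search specifier into method and beam size.
method, _, k = args.search.partition(":")
```

Parsing `--search beam:5` this way yields `method == "beam"` and a beam size of 5.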
- class gen.GenerationExperiment(config, path_prefix, vocab, model, *args, **kwargs)¶
- Parameters:
config (dict)
path_prefix (str)
vocab (EHRVocab)
model (str)
- eval_generation(output_df=None, label_df=None, output_tokens=None, label_tokens=None)¶
- examples_seen()¶
- min_size()¶
- on_finish()¶
- plot()¶
- stopping_criteria(context_length=0, total_length=0)¶
- Parameters:
context_length (int)
total_length (int)
- window_size()¶
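GenerationExperiment's stopping_criteria(context_length, total_length) suggests a decoding loop that terminates on a length budget. The following is a minimal sketch of that pattern, not the class's actual implementation; the max_total parameter and the step function are hypothetical.

```python
def stopping_criteria(context_length=0, total_length=0, max_total=8):
    # Stand-in for GenerationExperiment.stopping_criteria: stop once
    # the generated sequence reaches a total-length budget.
    # max_total is a hypothetical parameter, not in the real signature.
    return total_length >= max_total

def generate(context, step):
    # Toy decoding loop: append tokens until the stopping criterion fires.
    seq = list(context)
    while not stopping_criteria(context_length=len(context),
                                total_length=len(seq)):
        seq.append(step(seq))
    return seq

out = generate([1, 2, 3], step=lambda s: s[-1] + 1)  # → [1, 2, 3, 4, 5, 6, 7, 8]
```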
- class gen.NextActionExperiment(config, path_prefix, vocab, model, *args, **kwargs)¶
- Parameters:
config (dict)
path_prefix (str)
vocab (EHRVocab)
model (str)
- eval_generation(output_df=None, label_df=None, output_tokens=None, label_tokens=None)¶
- examples_seen()¶
- on_finish()¶
- plot()¶
- stopping_criteria(context_length=0, total_length=0)¶
- Parameters:
context_length (int)
total_length (int)
- window_size()¶
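NextActionExperiment's eval_generation accepts both predicted tokens (output_tokens) and reference tokens (label_tokens), which implies a position-wise comparison. A plausible metric is next-action accuracy, sketched below; the real experiment's metric may differ.

```python
def next_action_accuracy(output_tokens, label_tokens):
    # Sketch of the comparison eval_generation might perform for
    # NextActionExperiment: fraction of predicted next actions that
    # match the labels. The metric itself is an assumption.
    if not label_tokens:
        return 0.0
    hits = sum(o == l for o, l in zip(output_tokens, label_tokens))
    return hits / len(label_tokens)

acc = next_action_accuracy(["open", "read", "close"],
                           ["open", "write", "close"])  # 2 of 3 match
```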
- class gen.ScoringExperiment(config, path_prefix, vocab, model, *args, **kwargs)¶
- Parameters:
config (dict)
path_prefix (str)
vocab (EHRVocab)
model (str)
- eval_generation(output_df=None, label_df=None, output_tokens=None, label_tokens=None)¶
- examples_seen()¶
- on_finish()¶
- plot()¶
- stopping_criteria(context_length=0, total_length=0)¶
- Parameters:
context_length (int)
total_length (int)
- window_size()¶
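ScoringExperiment presumably assigns a model-based score to each audit log. One common choice for generative models is perplexity over the per-token probabilities; the sketch below shows that computation under the assumption that this is the score used, which the documentation does not confirm.

```python
import math

def sequence_perplexity(token_probs):
    # Illustrative scoring routine in the spirit of ScoringExperiment:
    # perplexity computed from the probabilities the model assigns to
    # each token of a sequence. The real experiment's score may differ.
    nll = -sum(math.log(p) for p in token_probs)
    return math.exp(nll / len(token_probs))

ppl = sequence_perplexity([0.5, 0.25, 0.5])  # geometric-mean inverse probability
```

Lower perplexity means the model found the sequence more predictable, which is why such scores are useful for ranking or flagging unusual audit logs.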