research_town.evaluators package
Submodules
research_town.evaluators.evaluator_base module
class research_town.evaluators.evaluator_base.BaseEvaluator(model_name: str, config: Config)
Bases: object
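A minimal construction sketch, assuming Config is importable from research_town.configs and can be built with defaults; the model name is purely illustrative::

    from research_town.configs import Config  # assumed import path for Config
    from research_town.evaluators.evaluator_base import BaseEvaluator

    config = Config()  # assumption: a default-constructed Config is acceptable
    evaluator = BaseEvaluator(model_name='gpt-4o-mini', config=config)  # illustrative model name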
research_town.evaluators.evaluator_output module
class research_town.evaluators.evaluator_output.BaseEvalOutput
Bases: BaseModel
dimension_scores : list[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
classmethod validate_dimension_scores(v: list[int]) → list[int]
classmethod validate_overall_score(v: int) → int
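An illustrative usage sketch: the defaults below come from the FieldInfo entries listed above, while the explicit scores are placeholders, since the ranges accepted by validate_dimension_scores and validate_overall_score are not documented here::

    from research_town.evaluators.evaluator_output import BaseEvalOutput

    # Defaults match the documented FieldInfo entries.
    out = BaseEvalOutput()
    assert out.dimension_scores == []
    assert out.overall_score == -1
    assert out.pk == '0'

    # Explicit values are checked by the classmethod validators; the
    # values here are illustrative only.
    scored = BaseEvalOutput(dimension_scores=[8, 7, 9], overall_score=8)
    print(scored.model_dump())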
Bases: BaseEvalOutput
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
Bases: BaseEvalOutput
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
Bases: BaseEvalOutput
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
Bases: BaseEvalOutput
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
Bases: BaseEvalOutput
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
Bases: BaseEvalOutput
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
Bases: Exception
research_town.evaluators.evaluator_quality module
class research_town.evaluators.evaluator_quality.BaseQualityEvaluator(model_name: str, output_model: type[BaseEvalOutput], config: Config | None = None, *args: Any, **kwargs: Any)
Bases: object
eval(*args: Any, **kwargs: Any) → BaseEvalOutput
parse(raw_output: str, output_model: type[T]) → T
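A usage sketch for the generic evaluator, assuming Config can be default-constructed and that IdeaEvalOutput is one of the BaseEvalOutput subclasses documented above; the keyword argument passed to eval() is hypothetical, since the keys it reads are not listed in this reference::

    from research_town.configs import Config  # assumed import path
    from research_town.evaluators import IdeaEvalOutput  # assumed package-level export
    from research_town.evaluators.evaluator_quality import BaseQualityEvaluator

    evaluator = BaseQualityEvaluator(
        model_name='gpt-4o-mini',      # illustrative model name
        output_model=IdeaEvalOutput,   # any BaseEvalOutput subclass
        config=Config(),               # assumption: defaults are acceptable
    )

    # eval() forwards *args/**kwargs to the underlying model call; the
    # keyword below is illustrative.
    result = evaluator.eval(idea='An example research idea')
    print(result.overall_score, result.dimension_scores)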
class research_town.evaluators.evaluator_quality.IdeaQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
eval(*args: Any, **kwargs: Any) → IdeaEvalOutput
class research_town.evaluators.evaluator_quality.InsightQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
Bases: BaseQualityEvaluator
class research_town.evaluators.evaluator_quality.ProposalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
class research_town.evaluators.evaluator_quality.RebuttalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
class research_town.evaluators.evaluator_quality.ReviewQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
eval(*args: Any, **kwargs: Any) → ReviewEvalOutput
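A sketch of the concrete evaluators; as above, the Config construction and the eval() keyword arguments are assumptions rather than documented behavior::

    from research_town.configs import Config  # assumed import path
    from research_town.evaluators.evaluator_quality import (
        IdeaQualityEvaluator,
        ReviewQualityEvaluator,
    )

    config = Config()  # assumption: defaults are acceptable

    idea_evaluator = IdeaQualityEvaluator(model_name='gpt-4o-mini', config=config)
    review_evaluator = ReviewQualityEvaluator(model_name='gpt-4o-mini', config=config)

    # The keyword arguments below are illustrative; each eval() accepts
    # **kwargs and returns the matching *EvalOutput model.
    idea_result = idea_evaluator.eval(idea='An example research idea')
    review_result = review_evaluator.eval(review='An example review text')

    print(idea_result.overall_score, review_result.dimension_scores)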
Module contents
class research_town.evaluators.BaseEvaluator(model_name: str, config: Config)
Bases: object
Bases: BaseEvalOutput
dimension_scores : List[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
class research_town.evaluators.IdeaQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
eval(*args: Any, **kwargs: Any) → IdeaEvalOutput
Bases: BaseEvalOutput
dimension_scores : List[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
class research_town.evaluators.InsightQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
Bases: BaseEvalOutput
dimension_scores : List[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
Bases: BaseQualityEvaluator
Bases: Exception
Bases: BaseEvalOutput
dimension_scores : List[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
class research_town.evaluators.ProposalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
Bases: BaseEvalOutput
dimension_scores : List[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
class research_town.evaluators.RebuttalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
Bases: BaseEvalOutput
dimension_scores : List[int]
model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
model_config : ClassVar[ConfigDict]
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
overall_score : int
pk : str
class research_town.evaluators.ReviewQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)
Bases: BaseQualityEvaluator
eval(*args: Any, **kwargs: Any) → ReviewEvalOutput
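The classes under Module contents are re-exported at the package level, so they can be imported directly from research_town.evaluators::

    from research_town.evaluators import (
        BaseEvaluator,
        IdeaQualityEvaluator,
        InsightQualityEvaluator,
        ProposalQualityEvaluator,
        RebuttalQualityEvaluator,
        ReviewQualityEvaluator,
    )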