
research_town.evaluators package

Submodules

research_town.evaluators.evaluator_base module

class research_town.evaluators.evaluator_base.BaseEvaluator(model_name: str, config: Config)

Bases: object

evaluate_idea_quality(insights: List[Insight], idea: Idea) → IdeaEvalOutput

evaluate_insight_quality(insight: Insight) → InsightEvalOutput

evaluate_metareview_quality(insights: List[Insight], idea: Idea, paper: Proposal, reviews: List[Review], rebuttals: List[Rebuttal], metareview: MetaReview) → MetaReviewEvalOutput

evaluate_paper_quality(insights: List[Insight], idea: Idea, paper: Proposal) → ProposalEvalOutput

evaluate_rebuttal_quality(insights: List[Insight], idea: Idea, paper: Proposal, review: Review, rebuttal: Rebuttal) → RebuttalEvalOutput

evaluate_review_quality(insights: List[Insight], idea: Idea, paper: Proposal, review: Review) → ReviewEvalOutput

pipeline_eval(insights: List[Insight], idea: Idea, paper: Proposal, reviews: List[Review], rebuttals: List[Rebuttal], metareview: MetaReview) → Tuple[List[InsightEvalOutput], IdeaEvalOutput, ProposalEvalOutput, List[ReviewEvalOutput], List[RebuttalEvalOutput], MetaReviewEvalOutput]
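
A minimal usage sketch of BaseEvaluator follows. The import paths for Config and the data classes (research_town.configs, research_town.data), the model name, and the content= keyword used to build Insight, Idea, and the other artifacts are assumptions for illustration; only the BaseEvaluator constructor and pipeline_eval signature above are taken from this page.

    from research_town.configs import Config          # assumed import path
    from research_town.data import (                  # assumed import path
        Idea, Insight, MetaReview, Proposal, Rebuttal, Review,
    )
    from research_town.evaluators.evaluator_base import BaseEvaluator

    config = Config()  # assumes Config is default-constructible
    evaluator = BaseEvaluator(model_name='gpt-4o-mini', config=config)

    # Hypothetical research artifacts; the field names are assumptions.
    insights = [Insight(content='LLM agents can emulate peer review.')]
    idea = Idea(content='Simulate a research community with LLM agents.')
    paper = Proposal(content='We propose a multi-agent research simulator.')
    reviews = [Review(content='Novel idea, but the evaluation is thin.')]
    rebuttals = [Rebuttal(content='We added two ablation studies.')]
    metareview = MetaReview(content='Accept: the main concerns were addressed.')

    # pipeline_eval runs every stage and returns one output per artifact,
    # in the tuple order documented above.
    (insight_evals, idea_eval, paper_eval,
     review_evals, rebuttal_evals, metareview_eval) = evaluator.pipeline_eval(
        insights=insights,
        idea=idea,
        paper=paper,
        reviews=reviews,
        rebuttals=rebuttals,
        metareview=metareview,
    )
    print(idea_eval.overall_score, idea_eval.dimension_scores)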

research_town.evaluators.evaluator_output module

class research_town.evaluators.evaluator_output.BaseEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseModel

dimension_scores : list[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

classmethod validate_dimension_scores(v: list[int]) → list[int]

classmethod validate_overall_score(v: int) → int
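
The sketch below illustrates the documented defaults and the extra-field behavior implied by model_config = {'extra': 'allow'}. The rationale keyword is a hypothetical extra field, and the score values are conservative guesses, since the exact ranges enforced by validate_overall_score and validate_dimension_scores are not documented on this page.

    from research_town.evaluators.evaluator_output import BaseEvalOutput

    # Defaults apply when fields are omitted.
    out = BaseEvalOutput()
    assert out.overall_score == -1
    assert out.pk == '0'
    assert out.dimension_scores == []

    # 'extra': 'allow' lets unknown keys pass through, so provider-specific
    # metadata survives parsing; 'rationale' here is a hypothetical extra field.
    scored = BaseEvalOutput(
        overall_score=8,
        dimension_scores=[7, 9, 8],
        rationale='clear motivation, strong novelty',
    )
    print(scored.model_dump())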

class research_town.evaluators.evaluator_output.IdeaEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

class research_town.evaluators.evaluator_output.InsightEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

class research_town.evaluators.evaluator_output.MetaReviewEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

class research_town.evaluators.evaluator_output.ProposalEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

class research_town.evaluators.evaluator_output.RebuttalEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

class research_town.evaluators.evaluator_output.ReviewEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

research_town.evaluators.evaluator_output_format module

exception research_town.evaluators.evaluator_output_format.OutputFormatError(message: str = 'Output format error')

Bases: Exception
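
A short sketch of how this exception behaves. Whether str(exc) echoes the message depends on the constructor passing it to Exception; that is an assumption here, though the default message is given in the signature above.

    from research_town.evaluators.evaluator_output_format import OutputFormatError

    try:
        raise OutputFormatError()  # presumably falls back to 'Output format error'
    except OutputFormatError as exc:
        print(f'evaluator output could not be parsed: {exc}')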

research_town.evaluators.evaluator_quality module

class research_town.evaluators.evaluator_quality.BaseQualityEvaluator(model_name: str, output_model: type[BaseEvalOutput], config: Config | None = None, *args: Any, **kwargs: Any)

Bases: object

eval(*args: Any, **kwargs: Any) → BaseEvalOutput

parse(raw_output: str, output_model: type[T]) → T
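
A sketch of parse with error handling. The shape of the raw string is entirely an assumption, as is the premise that parse raises OutputFormatError on a malformed reply; check the source for the format the evaluator actually expects from the LLM.

    from research_town.evaluators.evaluator_output import IdeaEvalOutput
    from research_town.evaluators.evaluator_output_format import OutputFormatError
    from research_town.evaluators.evaluator_quality import BaseQualityEvaluator

    evaluator = BaseQualityEvaluator(model_name='gpt-4o-mini',
                                     output_model=IdeaEvalOutput)

    raw = 'Overall Score=7\nDimension Scores=[6, 8, 7]'  # hypothetical raw LLM reply
    try:
        result = evaluator.parse(raw, IdeaEvalOutput)
        print(result.overall_score, result.dimension_scores)
    except OutputFormatError as exc:
        print(f'could not parse model reply: {exc}')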

class research_town.evaluators.evaluator_quality.IdeaQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → IdeaEvalOutput
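
Because eval only advertises *args/**kwargs, the keyword names in the sketch below mirror BaseEvaluator.evaluate_idea_quality and are an assumption, as are the data-class import paths and constructors.

    from research_town.configs import Config            # assumed import path
    from research_town.data import Idea, Insight        # assumed import path
    from research_town.evaluators.evaluator_quality import IdeaQualityEvaluator

    evaluator = IdeaQualityEvaluator(model_name='gpt-4o-mini', config=Config())
    output = evaluator.eval(
        insights=[Insight(content='LLM agents can emulate peer review.')],
        idea=Idea(content='Simulate a research community with LLM agents.'),
    )
    print(output.overall_score)  # output is an IdeaEvalOutput instance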

class research_town.evaluators.evaluator_quality.InsightQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → InsightEvalOutput

class research_town.evaluators.evaluator_quality.MetaReviewQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → MetaReviewEvalOutput

class research_town.evaluators.evaluator_quality.ProposalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → ProposalEvalOutput

class research_town.evaluators.evaluator_quality.RebuttalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → RebuttalEvalOutput

class research_town.evaluators.evaluator_quality.ReviewQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → ReviewEvalOutput

Module contents

class research_town.evaluators.BaseEvaluator(model_name: str, config: Config)

Bases: object

evaluate_idea_quality(insights: List[Insight], idea: Idea) → IdeaEvalOutput

evaluate_insight_quality(insight: Insight) → InsightEvalOutput

evaluate_metareview_quality(insights: List[Insight], idea: Idea, paper: Proposal, reviews: List[Review], rebuttals: List[Rebuttal], metareview: MetaReview) → MetaReviewEvalOutput

evaluate_paper_quality(insights: List[Insight], idea: Idea, paper: Proposal) → ProposalEvalOutput

evaluate_rebuttal_quality(insights: List[Insight], idea: Idea, paper: Proposal, review: Review, rebuttal: Rebuttal) → RebuttalEvalOutput

evaluate_review_quality(insights: List[Insight], idea: Idea, paper: Proposal, review: Review) → ReviewEvalOutput

pipeline_eval(insights: List[Insight], idea: Idea, paper: Proposal, reviews: List[Review], rebuttals: List[Rebuttal], metareview: MetaReview) → Tuple[List[InsightEvalOutput], IdeaEvalOutput, ProposalEvalOutput, List[ReviewEvalOutput], List[RebuttalEvalOutput], MetaReviewEvalOutput]
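
Since the package root re-exports these names, the shorter imports below should be interchangeable with the submodule paths used earlier on this page.

    from research_town.evaluators import (
        BaseEvaluator,
        IdeaEvalOutput,
        IdeaQualityEvaluator,
        OutputFormatError,
    )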

class research_town.evaluators.IdeaEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

dimension_scores : List[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

class research_town.evaluators.IdeaQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → IdeaEvalOutput

class research_town.evaluators.InsightEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

dimension_scores : List[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

class research_town.evaluators.InsightQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → InsightEvalOutput

class research_town.evaluators.MetaReviewEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

dimension_scores : List[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

class research_town.evaluators.MetaReviewQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → MetaReviewEvalOutput

exception research_town.evaluators.OutputFormatError(message: str = 'Output format error')

Bases: Exception

class research_town.evaluators.ProposalEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

dimension_scores : List[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

class research_town.evaluators.ProposalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → ProposalEvalOutput

class research_town.evaluators.RebuttalEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

dimension_scores : List[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

class research_town.evaluators.RebuttalQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → RebuttalEvalOutput

class research_town.evaluators.ReviewEvalOutput(*, overall_score: int = -1, pk: str = '0', dimension_scores: list[int] = [], **extra_data: Any)

Bases: BaseEvalOutput

dimension_scores : List[int]

model_computed_fields : ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config : ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to ConfigDict (pydantic.config.ConfigDict).

model_fields : ClassVar[dict[str, FieldInfo]] = {'dimension_scores': FieldInfo(annotation=list[int], required=False, default=[]), 'overall_score': FieldInfo(annotation=int, required=False, default=-1), 'pk': FieldInfo(annotation=str, required=False, default='0')}

Metadata about the fields defined on the model, mapping of field names to FieldInfo (pydantic.fields.FieldInfo).

This replaces Model.__fields__ from Pydantic V1.

overall_score : int

pk : str

class research_town.evaluators.ReviewQualityEvaluator(model_name: str, config: Config | None = None, *args: Any, **kwargs: Any)

Bases: BaseQualityEvaluator

eval(*args: Any, **kwargs: Any) → ReviewEvalOutput