Sports Predictor includes two evaluation mechanisms: a test accuracy score printed during training, and evaluate_bet_teamwins() which translates a win probability into an actionable bet recommendation.

Test accuracy

After fitting the model, train_model_teamwins() prints accuracy on the held-out 30% test split:
preds = model.predict(X_test)
print("Test Accuracy:", metrics.accuracy_score(y_test, preds))
# Example output: Test Accuracy: 0.XXX
This gives a quick measure of how often the model correctly predicts wins and losses on unseen games.
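For context, the split-and-score step might look like the following minimal sketch. The toy data, feature shapes, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not the project's actual training code; only the 30% test split and the accuracy printout mirror the description above.

```python
import numpy as np
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in data; the real pipeline builds features from game stats.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Hold out 30% of games for testing, matching the split described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)
print("Test Accuracy:", metrics.accuracy_score(y_test, preds))
```

Accuracy on held-out games is a blunt metric; it treats a 0.51 and a 0.99 prediction identically, which is why the bet evaluation below works with the raw probability instead.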

Bet evaluation

evaluate_bet_teamwins(probability) takes the float returned by predict_game_teamwins() and returns one of three string labels:
def evaluate_bet_teamwins(probability):
    if probability > 0.60:
        return "Good Bet"
    elif probability > 0.52:
        return "Slight Edge"
    return "Avoid"

Probability thresholds

Condition            | Label          | Meaning
probability > 0.60   | "Good Bet"     | Strong model confidence; the edge over 50% is meaningful
probability > 0.52   | "Slight Edge"  | Marginal advantage; proceed with caution
probability <= 0.52  | "Avoid"        | Not enough edge to justify a wager
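One probability per bucket illustrates the boundaries. The function is reproduced from above so the snippet runs standalone; note the thresholds are strict inequalities, so exactly 0.60 returns "Slight Edge" and exactly 0.52 returns "Avoid".

```python
def evaluate_bet_teamwins(probability):
    if probability > 0.60:
        return "Good Bet"
    elif probability > 0.52:
        return "Slight Edge"
    return "Avoid"

# One sample probability per bucket.
for p in (0.75, 0.55, 0.50):
    print(p, "->", evaluate_bet_teamwins(p))
# 0.75 -> Good Bet
# 0.55 -> Slight Edge
# 0.50 -> Avoid
```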

Test scenarios

The following cases reflect the test inputs defined in test_betting.py. Note that the test file references older function names (predict_game, evaluate_bet) that have since been renamed to predict_game_teamwins and evaluate_bet_teamwins. Use the current function names as shown below:
prob = predict_game_teamwins(
    points_diff=10,
    team_reb_roll=5,
    opponent_reb_roll=3,
    team_ast_roll=4,
    opponent_ast_roll=2,
    home=1
)
# Expected: probability well above 0.60 → "Good Bet"
print(evaluate_bet_teamwins(prob))
Use rolling stats from the most recent five games when building your inputs. Fresher averages reflect current form and produce more accurate probability estimates than season-long averages.
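A hedged sketch of computing those 5-game rolling inputs with pandas. The game-log DataFrame and its column names are hypothetical; the source does not specify how rolling stats are stored. The shift(1) keeps each row's average based only on prior games, so the current game's stats never leak into its own prediction.

```python
import pandas as pd

# Hypothetical game log for one team; column names are illustrative.
games = pd.DataFrame({
    "reb": [40, 45, 38, 50, 42, 47],
    "ast": [20, 25, 22, 28, 24, 26],
})

# 5-game rolling means, shifted so each row sees only PRIOR games.
rolling = games.rolling(window=5).mean().shift(1)

# The last row holds the averages to feed into predict_game_teamwins().
latest = rolling.iloc[-1]
print("team_reb_roll:", latest["reb"])  # mean of games 0-4
print("team_ast_roll:", latest["ast"])
```

The opponent's rolling stats would be computed the same way from the opponent's game log.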
