Day 109: Feature Engineering and Representation

Feature engineering matters because a model cannot learn a relationship that the representation keeps hidden.


Today's "Aha!" Moment

Continue with the churn problem from the previous lessons. Suppose you give the model raw columns like last_login_at, signup_date, plan_type, tickets_last_30d, and total_payments_failed. That looks like plenty of information. But raw storage fields are not the same thing as a good representation of the prediction problem.

What the model often needs is not the timestamp itself, but days_since_last_login. Not the raw support log, but tickets_last_30d. Not income and debt as unrelated columns, but a ratio that expresses financial pressure. A model can only learn from what is visible in the feature space you hand it.
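The shift from raw fields to engineered features can be sketched in a few lines of pandas. The column names follow the lesson's churn example; the data itself is invented for illustration:

```python
import pandas as pd

# Hypothetical raw churn rows, using the lesson's column names
raw = pd.DataFrame({
    "last_login_at": pd.to_datetime(["2024-05-01", "2024-05-28"]),
    "signup_date": pd.to_datetime(["2023-01-15", "2024-04-01"]),
    "total_payments_failed": [3, 0],
})

# The moment the prediction is made
as_of = pd.Timestamp("2024-06-01")

features = pd.DataFrame({
    # Recency, not a raw timestamp
    "days_since_last_login": (as_of - raw["last_login_at"]).dt.days,
    # Tenure, not a raw signup date
    "account_age_days": (as_of - raw["signup_date"]).dt.days,
    # A count carried through as-is because it already expresses friction
    "total_payments_failed": raw["total_payments_failed"],
})
```

The point is not the pandas syntax but the direction of the transformation: every derived column states a relationship (recency, tenure, friction) that the raw schema only implies.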

That is the aha. Feature engineering is not cosmetic preprocessing before the "real ML" begins. It is part of the modeling act. You are deciding which relationships are obvious, which are obscured, and which are impossible for the model to discover efficiently.

This is also why changing the representation can beat changing the algorithm. A simpler model with features aligned to the real decision can outperform a more sophisticated model that receives a clumsy, leakage-prone, or inconsistent view of the world.


Why This Matters

The problem: Operational data is usually stored for transactions, logging, or product behavior, not for learning. The raw schema reflects how the business runs, not how the prediction should think.

Before: the model receives storage fields such as last_login_at, signup_date, and raw support logs, and must infer recency, tenure, and friction on its own.

After: the model receives days_since_last_login, tickets_last_30d, and failure counts that express those relationships directly.

Real-world impact: In many production ML systems, larger gains come from better representation and cleaner feature pipelines than from swapping one model family for another.


Learning Objectives

By the end of this session, you will be able to:

  1. Explain why representation changes learnability - Describe how engineered features expose structure that raw fields may hide.
  2. Treat preprocessing as part of the model - Understand why scaling, encoding, and missing-value rules must live inside a reproducible pipeline.
  3. Spot feature leakage and availability mistakes - Check whether a feature is valid at the actual moment of prediction.

Core Concepts Explained

Concept 1: Good Features Expose the Relationship the Model Actually Needs

Start with a simple example. A credit model sees debt = 20,000 for two applicants. That number means very different things if one earns 30,000 a year and the other earns 120,000. The raw value exists in the database, but the useful relationship is relative burden, not absolute debt alone.

That is where engineered features come from. You create a representation that makes the important relationship easier to learn:

def debt_to_income_ratio(debt, income):
    # +1 keeps the ratio defined when income is recorded as zero
    return debt / (income + 1)

This is not feature engineering for its own sake. It is feature engineering because the model should see the world in a way that is closer to the decision logic of the task.
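Applied to the two applicants from the credit example, the engineered ratio makes the difference in burden explicit even though the raw debt column is identical:

```python
def debt_to_income_ratio(debt, income):
    # +1 keeps the ratio defined when income is recorded as zero
    return debt / (income + 1)

# Same debt, very different financial pressure
high_burden = debt_to_income_ratio(20_000, 30_000)   # roughly 0.67
low_burden = debt_to_income_ratio(20_000, 120_000)   # roughly 0.17
```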

The same pattern appears everywhere:

raw operational fields
    |
    +--> transform/aggregate/normalize
    |
    +--> representation closer to the decision

The trade-off is effort and judgment. Better features can make learning much easier, but they require domain understanding and careful evaluation rather than blind feature proliferation.

Concept 2: Scaling, Encoding, and Missing-Value Rules Are Not Housekeeping

Once you accept that representation is part of the model, preprocessing stops looking secondary.

If you use KNN or SVM, unscaled numeric features can distort distance or margins. If you use linear or boosted models, categorical encoding changes what patterns the model can express. If missing values are handled inconsistently, the deployed system is no longer using the same model you evaluated.
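A toy illustration of the distance distortion, with invented numbers: income is measured in dollars and tenure in years, so without scaling, income's magnitude decides who counts as a nearest neighbor.

```python
import numpy as np

# Customer vectors: [income_in_dollars, tenure_in_years] (toy numbers)
a = np.array([30_000.0, 1.0])
b = np.array([30_010.0, 9.0])   # near-identical income, very different tenure
c = np.array([30_900.0, 1.1])   # different income, near-identical tenure

# Unscaled Euclidean distance: income dominates, so b looks closest to a
raw_ab = np.linalg.norm(a - b)
raw_ac = np.linalg.norm(a - c)

# After a rough per-feature scaling, the neighbor ordering flips
scale = np.array([1_000.0, 1.0])
scaled_ab = np.linalg.norm((a - b) / scale)
scaled_ac = np.linalg.norm((a - c) / scale)
```

With raw values, b is a's nearest neighbor purely because the tenure gap is numerically tiny next to dollar amounts; after scaling, c is nearest, which matches the intuition that c is the more similar customer.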

This is why preprocessing belongs inside a pipeline:

raw input
   |
   +--> numeric scaling / imputation
   +--> categorical encoding
   +--> feature assembly
   |
   +--> model

The pipeline is not just convenience. It is the formal definition of the input space the model expects. If training code and inference code compute features differently, then evaluation results are about one system and production is running another.

The trade-off is operational complexity for correctness. A proper pipeline takes more discipline, but it protects you from one of the most common sources of silent ML failure: train-serving mismatch.
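The diagram above can be sketched with scikit-learn's Pipeline and ColumnTransformer. This is a minimal sketch, not a production setup; the column names are hypothetical and follow the churn example:

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column lists for the churn problem
numeric = ["days_since_last_login", "tickets_last_30d"]
categorical = ["plan_type"]

preprocess = ColumnTransformer([
    # numeric scaling / imputation
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric),
    # categorical encoding
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

# One object defines the whole input space: the same fitted pipeline
# transforms features at training time and at serving time.
model = Pipeline([
    ("features", preprocess),
    ("clf", LogisticRegression()),
])
```

Because preprocessing is fitted and serialized together with the classifier, there is no separate notebook-only feature code for production to drift away from.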

Concept 3: A Feature Is Only Valid If It Exists at Prediction Time

The most dangerous feature-engineering mistake is leakage. The model appears brilliant because you accidentally gave it information from the future or from downstream outcomes that would not exist when the real prediction is made.

In churn prediction, imagine using a feature like account_closed_within_14_days. That may correlate beautifully with churn, but it is useless and invalid if the prediction is supposed to happen before the account closes.

The right question for every proposed feature is simple:

At the exact moment of prediction,
could the system have known this value honestly?

If the answer is no, the feature is not just risky. It is invalid.

Leakage can be subtle:

  • Aggregates computed over a window that quietly extends past the prediction date.
  • Fields written or updated by downstream processes only after the outcome is known.
  • Joins against tables whose rows were refreshed after the moment of prediction.

The trade-off is between convenience and truth. Leaky features can make offline metrics look fantastic, but they destroy trust because the apparent signal disappears the moment the model meets reality.
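The "could the system have known this?" test can be enforced in code by anchoring every aggregation to the prediction timestamp. A minimal sketch, assuming ticket events arrive as a list of timestamps (function name and data invented for illustration):

```python
from datetime import datetime, timedelta

def tickets_last_30d(events, as_of):
    """Count ticket events in the 30 days up to the prediction time.

    Filtering on `ts <= as_of` is the point-in-time discipline:
    nothing after the prediction moment may leak into the feature.
    """
    window_start = as_of - timedelta(days=30)
    return sum(1 for ts in events if window_start < ts <= as_of)

as_of = datetime(2024, 6, 1)
events = [
    datetime(2024, 5, 10),  # visible at prediction time
    datetime(2024, 5, 30),  # visible at prediction time
    datetime(2024, 6, 3),   # future event: must NOT be counted
]
```

Computing the same feature with a naive `len(events)` would silently include the June 3rd event, which is exactly the kind of quiet leakage an offline evaluation will not catch.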

Troubleshooting

Issue: Adding many engineered features and seeing little improvement.

Why it happens / is confusing: More columns can feel like more signal.

Clarification / Fix: Feature count is not the goal. Add transformations that reflect plausible task structure, then test whether they improve validation performance.

Issue: Doing preprocessing manually in notebooks and assuming production will match.

Why it happens / is confusing: Exploration code is faster to write, so it becomes the accidental system definition.

Clarification / Fix: If a transformation matters for prediction, put it inside the reproducible training-and-inference pipeline.

Issue: Missing leakage because the feature looks business-reasonable.

Why it happens / is confusing: Many leaked features are semantically sensible, just temporally impossible.

Clarification / Fix: Audit every feature against the real prediction timestamp, not just against domain intuition.


Advanced Connections

Connection 1: Feature Engineering ↔ Inductive Bias

The parallel: Good features inject assumptions about what structure matters, which can make learning easier for simpler models.

Real-world case: Ratios, counts, rolling windows, and interaction terms often encode domain knowledge more cheaply than moving to a more complex algorithm.

Connection 2: Feature Engineering ↔ Data Contracts

The parallel: Features are a contract between raw data sources and the model.

Real-world case: Many production failures come from broken feature definitions, stale joins, or train-serving mismatch rather than from the learning algorithm itself.



Key Insights

  1. The model only sees the representation you build - Hidden relationships stay hidden unless the feature space exposes them.
  2. Preprocessing is part of the model, not a prelude to it - Scaling, encoding, imputation, and aggregation define the actual input space.
  3. A feature must be valid at prediction time - Leakage produces fake performance by showing the model information it could never have honestly known.

Knowledge Check (Test Questions)

  1. Why can an engineered ratio be more useful than two raw columns?

    • A) Because it can express the relationship the decision really depends on more directly than the raw values alone.
    • B) Because engineered features are always causal.
    • C) Because ratios automatically eliminate all noise.
  2. Why should preprocessing live inside a reproducible pipeline?

    • A) Because training and inference must compute the same feature representation.
    • B) Because preprocessing is unrelated to model behavior.
    • C) Because pipelines remove the need for feature choices.
  3. What is the clearest test for whether a feature is leaky?

    • A) Ask whether the system could have known that value honestly at the exact moment of prediction.
    • B) Check only whether the feature is strongly correlated with the label.
    • C) Use the feature if it improves offline accuracy enough.

Answers

1. A: Good engineered features often expose the relationship that matters more directly than raw storage fields do.

2. A: If feature computation differs between training and serving, you are no longer deploying the model you evaluated.

3. A: Leakage is fundamentally a timing and availability mistake, not just a correlation issue.


