To understand how the above code works, you just need to know the anatomy of a feature transform (note: this may change in the future):
Feature Transforms are tuples/lists in the form:
(CHILD, FILTER, TRANSFORM, REMOVE)
- CHILD - A string or None specifying what the resulting columns will be named. A child of None means no columns will be added.
- FILTER - This can take several forms. An integer or slice selects columns by position. A string is treated as a regular expression matched against column names. A function is applied to each column name, and the column is kept if the result is truthy. It can also be a list mixing any of the above.
- TRANSFORM - This takes in the data specified by the filter. It can be a class instance whose fit/fit_transform/transform/predict/etc. methods are called on the data, a function that takes in the data, or raw data whose columns are added directly to the Feature's data.
- REMOVE - An optional parameter. If set to True, the columns specified by the filter are removed after the transform.
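To make the FILTER semantics concrete, here's a minimal sketch of how the different filter forms could be resolved against a set of column names. `resolve_filter` is a hypothetical helper name, not the library's actual internals:

```python
import re

def resolve_filter(columns, flt):
    """Resolve a FILTER to a list of column names (illustrative sketch).

    - int or slice: select columns by position
    - str: treated as a regular expression matched against column names
    - callable: a column is kept if the function returns a truthy value
    - list: the union of the selections of each element, in order
    """
    if isinstance(flt, int):
        return [columns[flt]]
    if isinstance(flt, slice):
        return list(columns[flt])
    if isinstance(flt, str):
        pattern = re.compile(flt)
        return [c for c in columns if pattern.search(c)]
    if callable(flt):
        return [c for c in columns if flt(c)]
    if isinstance(flt, list):
        selected = []
        for f in flt:
            for c in resolve_filter(columns, f):
                if c not in selected:  # keep the union free of duplicates
                    selected.append(c)
        return selected
    raise TypeError(f"Unsupported filter type: {type(flt)!r}")

cols = ["age", "height_cm", "weight_kg", "name"]
print(resolve_filter(cols, 0))                      # ['age']
print(resolve_filter(cols, r"_cm$|_kg$"))           # ['height_cm', 'weight_kg']
print(resolve_filter(cols, lambda c: len(c) <= 4))  # ['age', 'name']
```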
- Anyone can make their own custom transforms (I made three in the sample: one for performing a Barnes-Hut t-SNE, one for randomized PCA, and one to print some plots, since we're still working on our visualization stuff). My hope is that data scientists will build their own personal libraries of transforms, so that whenever they work on a new dataset, it's simply a matter of mixing and matching.
- It's super easy to change the dataflow by just commenting out a feature transform or reordering the transforms.
- Best of all, exactly what you're doing to the data is crystal clear.
- This mostly shows just the basic Feature class; we're planning to add integration with a lot of external libraries soon.
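The mix-and-match idea might look something like the sketch below. The names (`LogScale`, `double`, the `transforms` list) are purely illustrative, not the library's actual API; the point is that a TRANSFORM can be any object with the usual fit/transform-style methods or a plain function, and that disabling a step is just commenting out its tuple:

```python
import math

class LogScale:
    """A custom class-based transform: anything with fit_transform works
    (sklearn-style estimators would slot in the same way)."""
    def fit_transform(self, values):
        return [math.log(v) for v in values]

def double(values):
    """A plain function also works as a TRANSFORM."""
    return [v * 2 for v in values]

# Hypothetical transform list: reorder the tuples, or comment one out,
# to change the dataflow.
transforms = [
    ("log_income", r"^income$", LogScale(), False),
    ("income_x2",  r"^income$", double,     True),   # drops the source column
    # ("tsne",     r".*",       SomeTSNE(), False),  # disabled for now
]

print(LogScale().fit_transform([1.0]))  # [0.0]
print(double([1, 2]))                   # [2, 4]
```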