Monday, September 9, 2013

Prototype In The Making

This is just a quick update on our project. We're currently working on a small working prototype, which should be ready within a week or so. More news will be posted when it's complete.

Wednesday, September 4, 2013

RCOS Fall 2013 Proposal

ProtoML is a Machine Learning prototyping library built to let developers transparently and easily design a workflow for data analysis. This semester, we will be rewriting ProtoML to work primarily with a distributed cluster rather than a single machine. The ultimate goal of the project is to reduce the time and effort needed to build up the infrastructure of a typical data science project, while providing an easy way to balance the workload across multiple machines.

At its core, ProtoML is a master/client server setup that uses a REST API to transfer data and execute tasks. The core runtime runs on the master server and interfaces with the user, first through a command line interface and later through a JavaScript-based Web UI, both of which will be separate projects built on the REST API. The tasks themselves (hereafter referred to as transforms) are encapsulated as modules, meaning anyone can write their own custom transforms for ProtoML to use. Additionally, these transforms can be written in any language you want, and we will provide documentation to make that as easy as possible.

However, this requires a standard among the transforms themselves so that the data and models needed for execution can be serialized. This will be done by defining a type system for the data and the transforms, allowing you to specify exactly what kind of data a module can accept. Each module will then be described by a JSON schema specifying things like input/output types, how to execute the script, parameters, etc.
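
To give a rough idea, a module definition might look something like the following; the field names, type strings, and the scale.py script here are just placeholders, not the final schema:

    {
        "name": "standard-scaler",
        "exec": "python scale.py",
        "input_types": ["matrix/numerical"],
        "output_types": ["matrix/numerical"],
        "parameters": {
            "with_mean": {"type": "bool", "default": true}
        }
    }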

This is important for the next core part of ProtoML -- the error-checker. Before executing a workflow, it's important to ensure that no incompatibilities exist between modules; checking beforehand avoids wasting a lot of time on data transfers and model training that would ultimately fail for semantic reasons.
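
As a rough illustration of the idea (a Python sketch only; the real error-checker will be part of the Go core and work off the JSON schemas), a type check over a simple linear pipeline could look something like this:

    # Illustrative sketch of a pre-execution type check; the module dictionaries
    # and type strings below are hypothetical, not ProtoML's actual schema.
    def check_pipeline(transforms):
        """Raise TypeError if adjacent transforms have incompatible types."""
        for upstream, downstream in zip(transforms, transforms[1:]):
            produced = set(upstream["output_types"])
            expected = set(downstream["input_types"])
            if not produced & expected:
                raise TypeError("%s produces %s but %s expects %s" % (
                    upstream["name"], sorted(produced),
                    downstream["name"], sorted(expected)))

    try:
        check_pipeline([
            {"name": "csv-loader", "input_types": [],
             "output_types": ["matrix/mixed"]},
            {"name": "standard-scaler", "input_types": ["matrix/numerical"],
             "output_types": ["matrix/numerical"]},
        ])
    except TypeError as err:
        # The mismatch is caught before any data transfer or model training happens.
        print(err)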

All of the core ProtoML code will be written in Google's Go language for a variety of reasons, including but not limited to its strengths as a systems-level language and its built-in high-performance server support.

The execution model will be controlled by a scheduling module. This modular structure allows different schedulers to be dropped in, supporting both distributed-processing frameworks such as Hadoop and single-server setups.
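
To make the drop-in idea concrete, here is a rough Python sketch of what a scheduler interface could look like (the class and method names are hypothetical, and the real interface will live in the Go core):

    # Hypothetical scheduler interface; names are illustrative, not ProtoML's API.
    import subprocess

    class Scheduler(object):
        """Decides where and how a transform's command gets executed."""
        def schedule(self, command):
            raise NotImplementedError

    class LocalScheduler(Scheduler):
        """Single-machine scheduler: simply runs the command locally."""
        def schedule(self, command):
            return subprocess.call(command)

    # A distributed scheduler (e.g. one backed by Hadoop) would implement the
    # same interface but submit the command to the cluster instead.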

Tuesday, April 23, 2013

Post-Kaggle Updates

The Kaggle competition finally ended and we updated ProtoML with all the cool tools that we made. This was definitely one of those examples of not knowing what we didn't know, having previously only done theoretical machine learning.

It's pretty amazing looking back at how much we've learned since we started the project, and some of the things I've learned from it I now use every day (like how to use the amazing data analysis library, Pandas).

Moving forward, we have a lot of work to do and plenty of ideas to implement. While I can't speak for Bharath, I know that I'll be redoing a lot of my old work by merging the Feature class with proto_col, a command line tool for human-assisted data analysis, into something much, much more powerful. Stay tuned for updates!

Tuesday, March 26, 2013

Practical Machine Learning

Bharath and I have been working on a Kaggle competition for a few months: The Blue Book For Bulldozers Challenge. Our job is to predict, as accurately as possible, the price a bulldozer will sell for, given the machine's specs and other such observations, using the selling prices of over 400,000 past sales. The competition goes on to the next round on April 10th!

Throughout the competition, we've written a lot of functions that are generally helpful for real-world, practical machine learning, things we had no clue about when we only studied theory. One example was caching intermediate results. We found that using Python's pickle to serialize and deserialize a Pandas DataFrame or Numpy array was actually less space efficient and way slower than simply storing the objects as csv files! This was pretty amazing, because the csv files are also more portable (e.g., we can use them with R or MATLAB). Another handy function we ended up writing converts an array or DataFrame into svmlight format (a popular input format for machine learning algorithms).
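
To give a sense of the pattern (a minimal sketch with toy data; the actual helpers in our project code look different), caching with csv and dumping to svmlight in pandas/scikit-learn goes something like this:

    # Minimal sketch of the caching/serialization pattern with toy data;
    # the real project helpers wrap this kind of thing up in functions.
    import pandas as pd
    from sklearn.datasets import dump_svmlight_file

    df = pd.DataFrame({"age": [3, 7, 2],
                       "hours": [1200, 5400, 800],
                       "price": [26500, 21000, 31000]})

    # Cache intermediate results as csv: smaller and faster for us than pickle,
    # and readable from R or MATLAB as well.
    df.to_csv("features_cache.csv", index=False)
    df = pd.read_csv("features_cache.csv")

    # Write features/labels out in svmlight format for other tools.
    dump_svmlight_file(df[["age", "hours"]].values, df["price"].values,
                       "data.svmlight")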

For now, we've been working hard on the competition, and for the sake of convenience (i.e., so we don't have to update the library every time we need a new function), we're writing these utilities directly into the project code. When we have some free time, they'll be added to ProtoML.

Friday, March 8, 2013

"sofia-kmeans" and Progress on Documentation

Just as a quick update, I've been working on integrating other libraries. I've spent most of the time (failing) on WiseRF, a tool for extremely efficient random forests, and a good amount of time on sofia-kmeans, a clustering tool. On my to-do list, I have:

  1. Fixing the WiseRF wrapper.
  2. Making a Vowpal Wabbit wrapper.
  3. Making a sofia-ml wrapper.
But here's the awesome news: documentation is coming along nicely. My wrapper for sofia-kmeans is about 50% docstrings, and I've begun retroactively documenting some utility functions.

Monday, March 4, 2013

ProtoML Features In Action!

As promised, here's the sample! This is a small snippet of code for analyzing a dataset that we've been playing with. I actually had hundreds of lines of non-ProtoML analysis already made that I replaced with these ~40 lines, and I can verify that it makes the analysis much easier.

In order to understand how the code works, you just need to understand the anatomy of a feature transform (note: this may change in the future). Feature transforms are tuples/lists of the form (CHILD, FILTER, TRANSFORM, REMOVE), where (a small illustrative sketch follows the list):

  1. CHILD -  A string or None specifying what the resulting columns will be named. A child of None means no columns will be added.
  2. FILTER - This can be a lot of things. If it's an integer or slice, it selects those column numbers. If it's a string, it's treated as a regular expression used to match column names. If it's a function, it's applied to each column name, and the column is used if the result evaluates to true. It can also be a list of any of the above.
  3. TRANSFORM - This takes in the data specified by the filter. This can be either a class which has fit/fit_transform/transform/predict/etc. called on it with the data, a function that takes in the data, or just raw data whose columns can be added to the Feature's data.
  4. REMOVE - An optional parameter. If set to True, the columns specified by the filter are removed after the transform.
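
Since the original snippet isn't reproduced in this post, here's a small hypothetical sketch of what a list of feature transforms of this shape might look like (the column names and transforms here are made up, not the ones from the sample):

    # Hypothetical feature transforms in the (CHILD, FILTER, TRANSFORM, REMOVE)
    # form; column names and transforms here are illustrative only.
    import numpy as np
    from sklearn.preprocessing import StandardScaler

    def log_columns(data):
        """Function transform: returns log1p of the filtered columns."""
        return np.log1p(data)

    def summarize(data):
        """Side-effect-only transform: CHILD is None, so nothing is added back."""
        print(data.describe())

    transforms = [
        # scale every column whose name ends in "_usd"; keep the originals
        ("scaled", r"_usd$", StandardScaler(), False),
        # log-transform columns 0 through 2, then remove the raw columns
        ("log", slice(0, 3), log_columns, True),
        # print a summary of every column; adds no new columns
        (None, r".*", summarize, False),
    ]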


Key things:

  • Anyone can make their own custom transforms (I actually made 3 in the sample: one for performing a Barnes-Hut t-SNE, one for randomized PCA, and the other to print some plots, since we're still working on our visualization stuff). My hope was that data scientists would actually build their own personal libraries of transforms, and whenever they work on a new dataset, it's simply a matter of mixing and matching.
  • It's super easy to change the dataflow by just commenting out a feature transform or reordering the transforms.
  • Best of all, exactly what you're doing to the data is clear as crystal.
  • This mostly just shows the basic Feature class; we're planning to add integration with a lot of external libraries soon.

Wednesday, February 27, 2013

The Feature Class

The class that I've been spending the most time on appears to be complete. All that's missing now is a lot of test cases to make sure everything works as intended, plus some smoothing of the design. I'll have some sample code up shortly (hopefully tonight). I'm actually working on converting my machine learning research code to use ProtoML, so I can show a side-by-side comparison. Some of the cool features so far (with a rough sketch of a couple of them after the list):
  • automated feature transforms
  • regex indexing
  • easy concatenation of data frames (still making it more elegant but it works)
  • lazy hashing for caching
  • and pretty much everything normal data frames can do
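
As a rough illustration of a couple of these (regex indexing and concatenation), here's a toy mock-up; the class below is just a sketch of the idea, not the actual Feature class or its API:

    # Toy mock-up of regex indexing and concatenation on a Feature-like wrapper;
    # this is NOT ProtoML's actual Feature class, just an illustration of the idea.
    import pandas as pd

    class FeatureSketch(object):
        def __init__(self, df):
            self.df = df

        def __getitem__(self, pattern):
            # regex indexing: select columns whose names match the pattern
            return self.df.filter(regex=pattern)

        def __add__(self, other):
            # easy column-wise concatenation of data frames
            return FeatureSketch(pd.concat([self.df, other.df], axis=1))

    f = FeatureSketch(pd.DataFrame({"price_usd": [1.0, 2.0], "age_years": [3, 7]}))
    print(f[r"_usd$"])                      # just the price_usd column
    print((f + f).df.columns.tolist())      # concatenated column names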

Other than that, we're still working on tests and docs, and implementing more features!